
Nvidia Gamescom Conference Today - 2080 series to be announced


Mr.Vic20

Recommended Posts

1 hour ago, legend said:

 

And in those cases I don't begrudge anyone for not being willing to pay the huge cost. But anyone who loves games and can otherwise swing high-end card costs should be willing to support this.

 

And everyone else should be glad this is happening :) 

I am willing to pay the cost for things that provide a tangible benefit for my gaming experience.  What benefits would I get from these cards?  If they result in marginally better shadows at the cost of significantly lower framerate and resolution, that's something I can't support.  But I'll wait for the hands-on reviews to see how they perform in games.

1 hour ago, legend said:

 

Irrelevant. The fact that you should encourage what you want to see more of is a basic principle of all social decision making.

Fact:  Nvidia won't change any of their decisions if I (alone) decide not to buy a video card.  Not one single developer will decide to make (or not make) software decisions based on me buying a video card.  Other people buying cards is completely independent of me buying a video card.  Therefore, my choice to buy a video card has no influence on the future of "ray tracing".  The only rational basis (economically speaking) for deciding whether to upgrade my card is whether the RTX-20 series provides tangible benefits to my gaming experience.

36 minutes ago, legend said:

 

Let's be precise: I'll be confused by the community disappointment if performance on conventional games remains top tier, while providing a major advance in architecture that will yield some improvements now, and ultimately much greater ones.

 

I don't think I'm overestimating anything. The stagnation of raster-based tech is immediately apparent. Each of the last two generations has left many people disappointed at the graphical progress being made. That's because we're running out of room for what we can do with conventional tech. We need to change it up or it's really going to stagnate. 

 

And it's not just raytracing. Having strong DL capabilities on board is going to have ramifications for gaming beyond up-resing for limited rays (or resolution in general). This tech is moving faster than I would have expected, and with hardware like this, there are some important ways in which DL can be useful for gaming and gaming graphics. For example, intentional, physically based animation (a tech important not just for the resulting actors in the game, but for streamlining the whole animation pipeline).

 

 

"It's not on us" isn't a useful way to frame things. No one is "obligated" to do anything ever, so nothing is ever "on someone" to do. It's a simple question of "does supporting this encourage the future I want to see?"  If you do not think this line of tech is useful, that it doesn't represent a future we should want to see, then we can have that discussion, because we may disagree. But it's simply true that you have to encourage what you want to see more of independent of any "obligation" or who it's "on."

It is not simply true that we should support cards that lean into ray tracing.  If we learn that these cards somehow provide much better performance than I expect on "rasterized" games, or that the ray-traced games look significantly better than the SotTR demo indicated, or whatever, then we should all go out and buy these cards.

 

However, if a 2080 performs only marginally better than a two-year-old 1080, and costs hundreds more, it doesn't make much sense for anyone to buy one before there is software available that justifies the expense.


1 hour ago, Reputator said:

 

This statement contains a bunch of assumptions about the future you literally can't make. "Remaining top tier" isn't an argument in favor of buying a new graphics card. I don't care how much it titillates your I/O ports.

 

I don't know it's the future, no. But I'm pretty damn confident staying with pure raster isn't going to lead us anywhere great and it's quite surprising that you think the status quo is okay and headed anywhere but a dead end.

 

It would also be surprising to me if the future of graphics goes somewhere that's neither ray tracing nor raster. In all these years, ray tracing hasn't gone away. There hasn't been a different approach embraced, unless you count hybrid approaches, which is exactly what Nvidia is doing. At a certain point, the approximation model of rasterization hits a wall.

 

And if the community isn't willing to support a solid take on something new because there is any uncertainty at all, then we have a really long, shitty road ahead of us.

 

Quote

No one was disappointed at the performance of the 10 series. Prior to that, we were stuck on 28nm for three generations, so there was some stagnation, albeit out of the hands of any IHV. Aside from you, however, I've not heard anyone lament the stagnation of features.

 

I'm not even talking about the 10 series alone. I'm talking about the overall trend of graphics. That means PC GPUs and console generations, which run over longer intervals (and which these days are closely tied to what PC GPUs can do). I honestly don't know what to tell you if you think people haven't been expressing a sense of diminishing returns. I see it rather frequently.

 

Quote

 

You have a very utopian-like concept for the role of consumers. Be realistic. Few people have the money to throw away on an investment in the future with very few short-term gains. Most of us aren't venture capitalists. We don't "have to" encourage anything if NVIDIA and the developers fail to give us incentives beyond empty promises. If you think otherwise, I have some snake oil you might be interested in.

 

I've previously said that if people can't afford those prices, I sympathize with that entirely. And who said anything about "throwing away" the money? My very argument is that there is a long-term benefit here that people should be incorporating into their decision making. 

 

Again, you keep appealing to "have to" arguments when I'm very explicitly rejecting that entire line of thinking, so I don't know why you keep returning to it. Ignoring future consequences is, definitionally, myopic.


2 hours ago, mikechorney said:

I am willing to pay the cost for things that provide a tangible benefit for my gaming experience.  What benefits would I get from these cards?  If they result in marginally better shadows at the cost of significantly lower framerate and resolution, that's something I can't support.  But I'll wait for the hands-on reviews to see how they perform in games.

 

What we get is a function of how successful they are, both in terms of what we'll see possible on these specific cards and, to at least some degree, down the road on future tech.  If the cards don't have much impact, we're probably only going to see a handful of games with the most cursory implementations that can be added as a bullet point, unless developers are overwhelmingly happy to push it along despite consumers not buying in.

 

Quote

Fact:  Nvidia won't change any of their decisions if I (alone) decide not to buy a video card.  Not one single developer will decide to make (or not make) software decisions based on me buying a video card.  Other people buying cards is completely independent of me buying a video card.  Therefore, my choice to buy a video card has no influence on the future of "ray tracing".  The only rational basis (economically speaking) for deciding whether to upgrade my card is whether the RTX-20 series provides tangible benefits to my gaming experience.

 

By this same reasoning, voting is pointless, because your single vote won't matter. This thinking really worked out well for us in 2016.

 

To be clear, no, this scenario is in no way even remotely comparable in *significance* to voting or not voting in 2016. But the reasoning is very much the same, and wrong for the same reasons.

 

The only rational decision is whether buying the card and supporting it helps to usher in a future you want. I'm happy to debate that point. In fact, depending on what independent reviews reveal, I could very much change my tune! But pretending that our decisions have no impact on the future or that we shouldn't consider the future effects of what our purchasing decisions cause is a non-starter.

 

Quote

It is not simply true that we should support cards that lean into ray tracing.  If we learn that these cards somehow provide much better performance than I expect on "rasterized" games, or that the ray-traced games look significantly better than the SotTR demo indicated, or whatever, then we should all go out and buy these cards.

 

However, if a 2080 performs only marginally better than a two-year-old 1080, and costs hundreds more, it doesn't make much sense for anyone to buy one before there is software available that justifies the expense.

 

Let me paint a hypothetical to try and gain grounding on how we can go about discussing this, because I feel like the same things are just being said on either side of this discussion and none of us are making progress.

 

Suppose at release it is as you described: parity performance, and the initial ray-tracing and DL-enhanced games provide modest performance boosts. Now, for the hypothetical, suppose there are two possible futures.

 

In one, only Vic and Stepee buy the cards and no one else in the world does. Because it's a massive failure, Nvidia goes back to making pure raster cards in response.

 

In another future, the cards are a major success, developers continue to work on software that exploits the architecture, and in another year and a half we start seeing games that, even running on this initial 20x line, look even better than what devs could do at the start; further down the road, we get cards that can do even greater things.

 

Claiming that these are the only possible futures is of course absurd. It could also be that the cards never really enable a lot more and what we see at the start is all we get. But I want to make sure we can at least agree that in this absurd scenario, it would be better if people got on board at the start, despite things being only modest improvements out of the gate for a limited set of games. If we cannot agree on that, then we've got a much bigger bridge to cross.

 

If we can agree on that though, then it means we've reached common ground on a couple of sticking points that seem to keep coming up. Those being that

1. encouraging others' behavior is an important element of decision making; and 

2. that long-term results are important factors beyond what we'll see immediately out of the gate.

 

And if we can get beyond that, then we can move on to what I think are the far more relevant points. Those being things like "how likely is it that this architectural shift is important to gaming?" "How likely are Nvidia and developers to change strategy if the cards don't sell well?" "What level of current raster performance is an acceptable tradeoff for future gains?" etc.


 

 

31 minutes ago, legend said:

 

By this same reasoning, voting is pointless, because your single vote won't matter. This thinking really worked out well for us in 2016.

 

To be clear, no, this scenario is in no way even remotely comparable in *significance* to voting or not voting in 2016. But the reasoning is very much the same, and wrong for the same reasons.

 

The only rational decision is whether buying the card and supporting it helps to usher in a future you want. I'm happy to debate that point. In fact, depending on what independent reviews reveal, I could very much change my tune! But pretending that our decisions have no impact on the future or that we shouldn't consider the future effects of what our purchasing decisions cause is a non-starter.

 

My vote in an election is essentially free; a 2080 Ti costs $1,200.  That's not a particularly good analogy.

 

31 minutes ago, legend said:

 

Let me paint a hypothetical to try and gain grounding on how we can go about discussing this, because I feel like the same things are just being said on either side of this discussion and none of us are making progress.

 

Suppose at release it is as you described: parity performance, and the initial ray-tracing and DL-enhanced games provide modest performance boosts. Now, for the hypothetical, suppose there are two possible futures.

 

In one, only Vic and Stepee buy the cards and no one else in the world does. Because it's a massive failure, Nvidia goes back to making pure raster cards in response.

 

In another future, the cards are a major success, developers continue to work on software that exploits the architecture, and in another year and a half we start seeing games that, even running on this initial 20x line, look even better than what devs could do at the start; further down the road, we get cards that can do even greater things.

 

Claiming that these are the only possible futures is of course absurd. It could also be that the cards never really enable a lot more and what we see at the start is all we get. But I want to make sure we can at least agree that in this absurd scenario, it would be better if people got on board at the start, despite things being only modest improvements out of the gate for a limited set of games. If we cannot agree on that, then we've got a much bigger bridge to cross.

 

If we can agree on that though, then it means we've reached common ground on a couple of sticking points that seem to keep coming up. Those being that

1. encouraging others' behavior is an important element of decision making; and 

2. that long-term results are important factors beyond what we'll see immediately out of the gate.

 

And if we can get beyond that, then we can move on to what I think are the far more relevant points. Those being things like "how likely is it that this architectural shift is important to gaming?" "How likely are Nvidia and developers to change strategy if the cards don't sell well?" "What level of current raster performance is an acceptable tradeoff for future gains?" etc.

1.  My decision to buy a card (or not buy a card) has no practical effect on encouraging behavior from anyone.

2.  Long-term results don't mean anything for my short-term decision.  I can buy a card later, once those benefits become evident.

 

I don't know if the architectural decisions on GeForce 20-series cards are a good call or a disastrous diversion.  I've been hearing about ray tracing for 30 years (and had a friend playing with it in high school -- and I've been out of high school for a long time).  I remember this from that era:

 

 

The only way I know if these are the right architectural decisions, and worthy of my hard-earned $$$, is whether my experiences are better TODAY.  When the software is there, there is a rational decision to be made about buying hardware.  These aren't consoles where, in the past at least, you could justify the purchase knowing that there would be 5-7 years of fixed hardware.  Graphics cards are on an 18-month generation upgrade cadence.

 

In a similar way, I actually believe that VR is the future -- but I thank god I haven't bought a headset, because there isn't enough good software to justify a purchase.  It would be sitting in the corner of my office gathering dust.  And when I do buy the hardware, I know it will be significantly better than what I would have otherwise bought.

 

I spend too much money in a frivolous manner on stuff that actually makes a difference to me.  $1,200 is a big number, but not ridiculous; I just spent $500 for a one-day outing for the family to ride roller coasters.  But I won't spend money to encourage Nvidia to randomly innovate -- unless those innovations bring me tangible gaming benefits.

 

If in a month, these cards are setting the world on fire with "OMG AWESOME" graphics, I'll be first in line to buy one.  But to be honest, I really don't expect that to happen.  


43 minutes ago, mikechorney said:

 

 

My vote in an election is essentially free; a 2080 Ti costs $1,200.  That's not a particularly good analogy.

 

No, it's still a perfectly fine analogy, because the reasoning you're using is still the same faulty reasoning. "Well, my individual contribution doesn't matter on its own" is bad reasoning. Populations whose agents adopt this reasoning do worse than populations that don't. This is a well-studied dynamic within both game theory and psychology. The reasoning you described is bad and you should throw it away as fast as possible!

 

Quote

1.  My decision to buy a card (or not buy a card) has no practical effect on encouraging behavior from anyone.

2.  Long-term results don't mean anything for my short-term decision.  I can buy a card later, once those benefits become evident.

 

1. I have responded to this above.

 

2. No, this is not a permissible response to the hypothetical. Decisions you, and others, make now can have long-term ramifications. In the hypothetical I gave you, failing to support the card causes a failure in further development, thereby causing a failure of developers delivering the gains you're hoping to find in the future.

 

You realize that by conceding the hypothetical you're not conceding my entire claim, right? There are lots of further nuances we can discuss, but we must first establish common ground on some fundamental properties, because at this point I'm worried that there is a vast gulf between us that will otherwise prevent any further progress in the discussion. I really didn't think this would be a controversial hypothetical.

 

 

Quote

I don't know if the architectural decisions on GeForce 20-series cards are a good call or a disastrous diversion.  I've been hearing about ray tracing for 30 years (and had a friend playing with it in high school -- and I've been out of high school for a long time).  I remember this from that era:

 

 

The only way I know if these are the right architectural decisions, and worthy of my hard-earned $$$, is whether my experiences are better TODAY.  When the software is there, there is a rational decision to be made about buying hardware.  These aren't consoles where, in the past at least, you could justify the purchase knowing that there would be 5-7 years of fixed hardware.  Graphics cards are on an 18-month generation upgrade cadence.

 

In a similar way, I actually believe that VR is the future -- but I thank god I haven't bought a headset, because there isn't enough good software to justify a purchase.  It would be sitting in the corner of my office gathering dust.  And when I do buy the hardware, I know it will be significantly better than what I would have otherwise bought.

 

I spend too much money in a frivolous manner on stuff that actually makes a difference to me.  $1,200 is a big number, but not ridiculous; I just spent $500 for a one-day outing for the family to ride roller coasters.  But I won't spend money to encourage Nvidia to randomly innovate -- unless those innovations bring me tangible gaming benefits.

 

If in a month, these cards are setting the world on fire with "OMG AWESOME" graphics, I'll be first in line to buy one.  But to be honest, I really don't expect that to happen.  

 

 

Operating under the assumption that these are random frivolous innovations unless you see results immediately is a bad assumption.


7 hours ago, legend said:

 

No, it's still a perfectly fine analogy, because the reasoning you're using is still the same faulty reasoning. "Well, my individual contribution doesn't matter on its own" is bad reasoning. Populations whose agents adopt this reasoning do worse than populations that don't. This is a well-studied dynamic within both game theory and psychology. The reasoning you described is bad and you should throw it away as fast as possible!

Economic theory suggests people act in their own self-interest, and make purchases that increase their utility.  As in a classic prisoner's dilemma, game theory suggests that the majority of people don't act in the way you suggest.  If I had a way of influencing the collective and somehow convincing a critical mass of people to, as a group, make a different choice, then perhaps it would be rational to spend money in this way.

 

7 hours ago, legend said:

 

2. No, this is not a permissible response to the hypothetical. Decisions you, and others, make now can have long-term ramifications. In the hypothetical I gave you, failing to support the card causes a failure in further development, thereby causing a failure of developers delivering the gains you're hoping to find in the future.

 

You realize that by conceding the hypothetical you're not conceding my entire claim, right? There are lots of further nuances we can discuss, but we must first establish common ground on some fundamental properties, because at this point I'm worried that there is a vast gulf between us that will otherwise prevent any further progress in the discussion. I really didn't think this would be a controversial hypothetical.

Decisions the collective makes can influence the future.  Each individual's decision has an extremely small marginal impact, which makes them, individually, virtually irrelevant.  Given that my choices don't influence the collective, I recognize that they have a negligible impact on future development gains. Given the high marginal cost of making this purchase, it is in my rational best interest to make a choice based on my expected short-term marginal gains.  If I somehow could collude with several million other people to make an irrational decision, that would be different.  But I don't know how to do that.

 

7 hours ago, legend said:

 

Operating under the assumption that these are random frivolous innovations unless you see results immediately is a bad assumption.

Operating under the assumption that ray tracing is the panacea for game graphics, before it can be demonstrated in real games, is a reach.

You obviously have a lot more confidence in ray tracing's viability than I do. 

 

Does ray tracing have the potential to produce better results than rasterization?  Yes, but I don't know how significant they are in real world games.

When can those better results be done without unacceptable reductions in frame rate and resolution?  I have no idea.  Maybe today.  Maybe 3 years from now. Maybe 50 years from now.  Maybe never.


1 hour ago, mikechorney said:

Economic theory suggests people act in their own self interest, and make purchases that increase their utility.  As a classic prisoner's dilemma, game theory suggests that the majority people don't act in the way you suggest. 


If I had a way of influencing the collective and somehow convincing a critical mass of people to, as a group, make a different choice, then perhaps it would be rational to spend money in this way.

 

This isn't analogous to the prisoner's dilemma. The point I'm making is much more analogous to the public goods game.

 

Second of all, in the prisoner's dilemma, defecting is only a Nash equilibrium in finite games with a known horizon between two people. In iterated games (and especially in a population), the better strategy (and a Nash equilibrium) is tit-for-tat, which amounts to the concept of cooperating, but punishing those who defect.

 

Third of all, a similar dynamic exists in the more analogous public goods game. If it's played in isolation, then to everyone's detriment the Nash equilibrium is indeed to not cooperate. But once you introduce other forms of social structures, that disappears. For example, punishment systems or enforcing agencies become the rational decision for each agent to employ. We have at our disposal the means to communicate socially (and in more extreme cases, punish), which is the mechanism that causes rational cooperation in the public goods game. 
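
To make that concrete, here's a deliberately crude Python sketch of the dynamic (the function, payoffs, and punishment numbers are all invented for illustration; this is a toy, not a rigorous model):

# Toy public goods game. Without punishment, free-riders out-earn
# contributors; with peer punishment, contributing pays better.

def play_round(strategies, multiplier=1.6, contribution=10,
               punish=False, fine=12, punish_cost=4):
    paid = [contribution if s == "cooperate" else 0 for s in strategies]
    share = sum(paid) * multiplier / len(strategies)   # pot split evenly
    payoffs = [share - p for p in paid]
    if punish:
        n_coop = strategies.count("cooperate")
        n_defect = len(strategies) - n_coop
        for i, s in enumerate(strategies):
            if s == "defect":
                payoffs[i] -= fine * n_coop          # fined by each cooperator
            else:
                payoffs[i] -= punish_cost * n_defect # punishing isn't free
    return payoffs

group = ["cooperate"] * 7 + ["defect"] * 3
for punish in (False, True):
    payoffs = play_round(group, punish=punish)
    coop = sum(p for p, s in zip(payoffs, group) if s == "cooperate") / 7
    free = sum(p for p, s in zip(payoffs, group) if s == "defect") / 3
    print(f"punishment={punish}: cooperator avg {coop:+.1f}, free-rider avg {free:+.1f}")

Run it and the free-rider out-earns the cooperators until punishment is on the table, at which point cooperating becomes the better-paying strategy. That flip is the whole point.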

 

Of course, that it's a rational decision to employ those methods doesn't mean people are rational actors. Indeed, if you're stuck with irrational actors, you can only do so much. For example, you could have in your population an actor who unreasonably simply doesn't respond to any social structures, punishments, etc. And humanity does have to deal with the fact that some of our population is exactly that: people who are so stuck in their simple reasoning that nothing you do will budge them. In those cases the outcome is capped by how many rational actors there are.

 

I have bothered to argue this point on this board because I tend to think more highly of the people here.

 

You seem to be taking the position that we don't have these social mechanisms to collaborate. But this seems clearly wrong to me. Society is more able to collaborate over large distances than ever before. Even gamers have been able to band together to cause change within the gaming industry. Fuck, many of us throw money at Kickstarters, which naturally come with a high level of uncertainty regarding the product we'll actually get.

 

That we can see so many gamers actually accomplish what you are claiming can't be done, and then turn their noses up at what Nvidia is doing while rationalizing that our decisions don't actually make a difference, blows my mind.

 

 

Quote

Operating under the assumption that ray tracing is the panacea for game graphics, before it can be demonstrated in real games, is a reach.

You obviously have a lot more confidence in ray tracing's viability than I do. 

 

Does ray tracing have the potential to produce better results than rasterization?  Yes, but I don't know how significant they are in real world games.

When can those better results be done without unacceptable reductions in frame rate and resolution?  I have no idea.  Maybe today.  Maybe 3 years from now. Maybe 50 years from now.  Maybe never.

 

Yes, and *this* is a valid argument against my position. I *could* be wrong about it. I don't think I am, no, but this is what I wish we were debating instead of getting stuck on whether we can make a difference with our purchasing decisions or whether long term consequences are actually something real and important to rational decision making.

 

 

It's confusing to me why you're coming from the position that ray tracing isn't the future of games. (I also think the DL capabilities are important too, but I recognize I'm a bit more biased for other reasons to want that.) Can we start there? Do you really think it's at all likely that ray tracing will *never* be in gaming's future?

 


My two cents on what Nvidia has done is that this is the right step, even if it's, in the short term, the wrong step. I'm glad they are taking on ray tracing; it would be even better if it was open source, but the truth is that special hardware is currently required to make this happen, as it can't be brute-forced via more general work processing cores, so it would be kinda weird to expect Nvidia to spend the money and then turn the hardware design over to AMD and Intel. So it's costly, and it's proprietary, and we consumers aren't excited about that. But flash forward a year or two and what we will have is a legitimate path to this tech, more matured, and at lower prices. That's a worthy goal! 

 

Don't get me wrong, vote with your wallets by all means if you think Nvidia is screwing the consumer here, but I think they are simply being A.) ambitious and B.) Nvidia, which is a lot like Hooli... :p

  • Like 1

25 minutes ago, legend said:

 

This isn't analogous to the prisoner's dilemma. The point I'm making is much more analogous to the public goods game.

 

Second of all, in the prisoner's dilemma, defecting is only a Nash equilibrium in finite games with a known horizon between two people. In iterated games (and especially in a population), the better strategy (and a Nash equilibrium) is tit-for-tat, which amounts to the concept of cooperating, but punishing those who defect.

 

Third of all, a similar dynamic exists in the more analogous public goods game. If it's played in isolation, then to everyone's detriment the Nash equilibrium is indeed to not cooperate. But once you introduce other forms of social structures, that disappears. For example, punishment systems or enforcing agencies become the rational decision for each agent to employ. We have at our disposal the means to communicate socially (and in more extreme cases, punish), which is the mechanism that causes rational cooperation in the public goods game. 

 

Of course, that it's a rational decision to employ those methods doesn't mean people are rational actors. Indeed, if you're stuck with irrational actors, you can only do so much. For example, you could have in your population an actor who unreasonably simply doesn't respond to any social structures, punishments, etc. And humanity does have to deal with the fact that some of our population is exactly that: people who are so stuck in their simple reasoning that nothing you do will budge them. In those cases the outcome is capped by how many rational actors there are.

 

I have bothered to argue this point on this board because I tend to think more highly of the people here.

 

You seem to be taking the position that we don't have these social mechanisms to collaborate. But this seems clearly wrong to me. Society is more able to collaborate over large distances than ever before. Even gamers have been able to band together to cause change within the gaming industry. Fuck, many of us throw money at Kickstarters, which naturally come with a high level of uncertainty regarding the product we'll actually get.

 

That we can see so many gamers actually accomplish what you are claiming can't be done, and then turn their noses up at what Nvidia is doing while rationalizing that our decisions don't actually make a difference, blows my mind.

 

The public goods game and the prisoner's dilemma are actually related.  I don't believe the decision to buy a 20-series card is an "iterated game".  And I also don't believe that the majority of gamers are striving for (or believe in) the "greater good" of ray tracing.

 

Regardless, I don't believe there are external reward or punishment systems here.  If this was a giant Kickstarter, with a commitment of millions of buyers required to sign up (and essentially collude), then I would agree with you.  No one is "banding together" here.

 

25 minutes ago, legend said:

Yes, and *this* is a valid argument against my position. I *could* be wrong about it. I don't think I am, no, but this is what I wish we were debating instead of getting stuck on whether we can make a difference with our purchasing decisions or whether long term consequences are actually something real and important to rational decision making.

 

 

It's confusing to me why you're coming from the position that ray tracing isn't the future of games. (I also think the DL capabilities are important too, but I recognize I'm a bit more biased for other reasons to want that.) Can we start there? Do you really think it's at all likely that ray tracing will *never* be in gaming's future?

 

Ray tracing is clearly capable of higher IQ than current techniques.  It is, however, much more computationally expensive.  I don't know where the "break-point" in technology is that makes it worthwhile.  Has it happened already with the GeForce 20-series cards?  Or is it even possible?  I don't know.  I would have to defer to hardware/software engineers who understand the intricacies of both types of rendering to speak to that.


2 hours ago, mikechorney said:

 

The public goods game and the prisoner's dilemma are actually related.  I don't believe the decision to buy a 20-series card is an "iterated game".  And I also don't believe that the majority of gamers are striving for (or believe in) the "greater good" of ray tracing.

 

They're related in the same way any number of different game theory games are related :p They are still different games with their own nuances. PD is not a good analogy for this situation because, given the differences, you can't really apply any insights about its dynamics to the discussion at hand. PG is closer.

 

Life is mostly an iterated game. Very few scenarios in life are actually well modeled by one-off games. Psychological modeling work has had to quickly abandon one-off games as models for this very reason.

 

Gamers may very well not see ray tracing as the future of gaming that we should usher in. Hence why I'm speaking out about it!

 

 

Quote

Regardless, I don't believe there are external reward or punishment systems here.  If this was a giant Kickstarter, with a commitment of millions of buyers required to sign up (and essentially collude), then I would agree with you.  No one is "banding together" here.

 

There are a slew of social dynamics at play! Punishment is only one way that's easy to demonstrate. We have tons of other tools at our disposal that cause rational cooperation and humanity is actually receptive to them on many occasions!

 

Quote

Ray tracing is clearly capable of higher IQ than current techniques.  It is, however, much more computationally expensive.  I don't know where the "break-point" in technology is that makes it worthwhile.  Has it happened already with the GeForce 20-series cards?  Or is it even possible?  I don't know.  I would have to defer to hardware/software engineers who understand the intricacies of both types of rendering to speak to that.

 

Good, I think this means the main question is really when can this change happen, and is this series actually the start of it?

 

My estimate, which is provisional until we have more independent analysis, is that if this series is well adopted, it will result in developers being able to get some worthwhile improvements out of it, though probably not as much at the very first day-one release.

 

The feature I was by far most excited by back when UE4 was first being announced was their dynamic GI using ray casting through sparse voxel octrees (if we had our board history I could probably pull it up for you :p ). Eventually they canned the feature, though, because there just wasn't enough processing power to do it. Clearly they didn't think it was a total pipe dream, since they initially had it as a main selling point at the announcement of UE4. With dedicated hardware like this, better GI of that sort and others is now far more feasible.
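
For flavor, here's a toy Python sketch of the underlying idea (mine, purely illustrative, and nothing like Epic's actual SVOGI implementation): voxelize lit surfaces into a sparse octree, then march rays through it and accumulate the stored radiance as rough indirect light.

import numpy as np

class OctreeNode:
    """Toy sparse voxel octree: children are created only where geometry is."""
    def __init__(self, center, half_size):
        self.center = np.asarray(center, dtype=float)
        self.half_size = half_size
        self.children = None            # dict of occupied octants, None at leaves
        self.radiance = np.zeros(3)     # light stored in this cell (RGB)

    def insert(self, point, radiance, max_depth=6, depth=0):
        # blend the sample into every node on the way down (crude mip-style filter)
        self.radiance += (radiance - self.radiance) * 0.25
        if depth == max_depth:
            return
        if self.children is None:
            self.children = {}
        octant = tuple(point >= self.center)
        if octant not in self.children:
            offset = (np.array(octant) * 2 - 1) * (self.half_size / 2)
            self.children[octant] = OctreeNode(self.center + offset, self.half_size / 2)
        self.children[octant].insert(point, radiance, max_depth, depth + 1)

def gather(root, origin, direction, steps=32, step_size=0.05):
    """March a ray through the tree, accumulating stored radiance as rough GI."""
    direction = np.asarray(direction, dtype=float)
    direction /= np.linalg.norm(direction)
    pos = np.asarray(origin, dtype=float)
    total = np.zeros(3)
    for _ in range(steps):
        pos = pos + direction * step_size
        node = root
        while node.children:            # descend to the finest occupied cell
            octant = tuple(pos >= node.center)
            if octant not in node.children:
                break                   # empty space: fall back to the coarse node
            node = node.children[octant]
        total += node.radiance * step_size
    return total

root = OctreeNode(center=[0.0, 0.0, 0.0], half_size=1.0)
root.insert(np.array([0.3, 0.5, -0.2]), radiance=np.array([1.0, 0.9, 0.7]))
print(gather(root, origin=[0.0, 0.0, 0.0], direction=[0.5, 0.8, -0.3]))

The point is that empty space stays cheap and rays only pay for occupied cells, which is why dedicated traversal hardware matters so much for making this kind of thing fast.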

 

I think it's telling that Nvidia didn't just do their own internal development, but got a lot of cooperation from MS with DX and Epic with UE to jump on this hardware and start supporting it. Being a computer scientist gives me some insight that encourages me, but I am not a graphics expert. All signs point positive, though, that the graphics experts do think this is a path we can start down.

 

And the features we could get out of the gate, like qualitatively better GI, shadows, etc., seem far more significant to me than what we had with the 10 series, where for most people it was "look, now I can play this game at 90 fps maxed out instead of 70!" That's part of my bafflement in this response. Embracing this tech can yield far more appreciable changes in our gaming experience than the last gen, and can pave the way for even greater changes. Yet it's being met with less interest than the profoundly boring 10 series revision.


Wasn't Nvidia the one who brought hardware T&L to the masses way back in the days of the GeForce 256?  Nvidia being first to market with their ray tracing on board doesn't bother me.  Unless AMD has been caught with their pants down, they will have some sort of hardware raytracer in the pipe too.  As it becomes more efficient and optimized, performance will increase, and I'm sure that with these new cards the rasterization approach will have performance benefits over the current generation of cards.  I look forward to seeing the benchmarks, even though I'll stick with my 1070 for a bit longer.


https://www.pcworld.com/article/3299456/components-graphics/custom-geforce-rtx-2080-and-geforce-rtx-2080-ti-graphics-card-preorders.html

 

https://www.tomshardware.com/news/wait-to-buy-nvidia-geforce-rtx-gpus,37673.html

 

https://www.extremetech.com/computing/275859-dont-buy-the-ray-traced-hype-around-nvidias-rtx-2080-family

 

Major publications all over are encouraging people to wait for reviews to see how these cards perform under current workloads (you know, despite the risks to "social decision making" or whatever). I'm glad there's a voice of reason spreading after the hype train. Moral of the story: don't believe smoke and mirrors until you see strong, convincing evidence.


16 minutes ago, Reputator said:

https://www.pcworld.com/article/3299456/components-graphics/custom-geforce-rtx-2080-and-geforce-rtx-2080-ti-graphics-card-preorders.html

 

https://www.tomshardware.com/news/wait-to-buy-nvidia-geforce-rtx-gpus,37673.html

 

https://www.extremetech.com/computing/275859-dont-buy-the-ray-traced-hype-around-nvidias-rtx-2080-family

 

Major publications all over are encouraging people to wait for reviews to see how these cards perform under current workloads (you know, despite the risks to "social decision making" or whatever). I'm glad there's a voice of reason spreading after the hype train. Moral of the story: don't believe smoke and mirrors until you see strong, convincing evidence.

100% agreed, but still excited about RTX! 


26 minutes ago, Reputator said:

https://www.pcworld.com/article/3299456/components-graphics/custom-geforce-rtx-2080-and-geforce-rtx-2080-ti-graphics-card-preorders.html

 

https://www.tomshardware.com/news/wait-to-buy-nvidia-geforce-rtx-gpus,37673.html

 

https://www.extremetech.com/computing/275859-dont-buy-the-ray-traced-hype-around-nvidias-rtx-2080-family

 

Major publications all over are encouraging people to wait for reviews to see how these cards perform under current workloads (you know, despite the risks to "social decision making" or whatever). I'm glad there's a voice of reason spreading after the hype train. Moral of the story: don't believe smoke and mirrors until you see strong, convincing evidence.

 

I'm completely supportive of waiting for independent reviews. If it tanks in a lot of important ways, then we have a problem! Until then, though, everything points in the direction I want to see it pointing.


2 minutes ago, Mr.Vic20 said:

 

This is much better than I was hoping! Of course, this is Nvidia's testing, right? In which case it's a bit suspect. But if these numbers mostly hold up with more independent reviews, all the better!


1 minute ago, legend said:

 

This is much better than I was hoping! Of course, this is Nvidia's testing, right? In which case it's a bit suspect. But if these numbers mostly hold up with more independent reviews, all the better!

Nvidia suspect?! YOU MONSTER! :notsure: (but yeah, we really need independent verification here! Also, I'm even more interested in the DLSS/tensor core component of all of this now!)


12 minutes ago, Mr.Vic20 said:

 

"DLSS takes advantage of our tensor cores’ ability to use AI. In this case it’s used to develop a neural network that teaches itself how to render a game. It smooths the edges of rendered objects and increases performance."

 

This sounds similar in premise to AMD's SenseMI technology, doing adaptive optimization and speculative execution, which is definitely something new and interesting in the world of GPUs. Granted, narrowed in focus to the task of anti-aliasing, but hopefully the concept opens up to more rendering tasks in the future. It could have implications for all types of rendering, raytracing or otherwise.
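
In rough terms, the premise reads like a standard image-to-image training setup. Here's a minimal PyTorch sketch of that premise (my guess at the shape of it; Nvidia hasn't published the actual network or training details, so everything here is a stand-in):

import torch
import torch.nn as nn

class TinyUpscaler(nn.Module):
    """Stand-in pixel-to-pixel net: low-res frame in, 2x frame out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),  # clean up edges post-upsample
        )

    def forward(self, x):
        return self.net(x)

model = TinyUpscaler()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

# stand-in training pair: a "ground truth" crop and its half-res render
hi = torch.rand(1, 3, 128, 128)
lo = nn.functional.interpolate(hi, scale_factor=0.5, mode="bilinear")

for _ in range(5):          # a real setup would train on huge per-game datasets
    opt.zero_grad()
    loss = loss_fn(model(lo), hi)
    loss.backward()
    opt.step()
print(loss.item())

The pitch, as I read it, is that inference runs on the tensor cores, so the smoothing doesn't eat into shader time the way traditional AA does.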


2 minutes ago, Reputator said:

 

"DLSS takes advantage of our tensor cores’ ability to use AI. In this case it’s used to develop a neural network that teaches itself how to render a game. It smooths the edges of rendered objects and increases performance."

 

This sounds similar in premise to AMD's SenseMI technology, doing adaptive optimization and speculative execution, which is definitely something new and interesting in the world of GPUs. Granted, narrowed in focus to the task of anti-aliasing, but hopefully the concept opens up to more rendering tasks in the future. It could have implications for all types of rendering, raytracing or otherwise.

I hope it works as advertised! 


3 minutes ago, Reputator said:

 

"DLSS takes advantage of our tensor cores’ ability to use AI. In this case it’s used to develop a neural network that teaches itself how to render a game. It smooths the edges of rendered objects and increases performance."

 

This sounds similar in premise to AMD's SenseMI technology, doing adaptive optimization and speculative execution, which is definitely something new and interesting in the world of GPUs. Granted, narrowed in focus to the task of anti-aliasing, but hopefully the concept opens up to more rendering tasks in the future. It could have implications for all types of rendering, raytracing or otherwise.

 

The other DL-based tech I mentioned earlier that this hardware could see making its way out of research and into actual games is "animation" systems, where the movements are governed by intentional behavior that adapts to the environment. It's tech that both (1) makes the development pipeline much easier and (2) makes for more compelling, correctly reactive results. 
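
In massively simplified terms, the core trick is rewarding a controller for matching a reference motion while physics constrains what it can do. A toy Python sketch (mine, not the paper's code; the "physics" here is one damped joint and the "training" is random search over a single gain):

import math
import random

# reference joint angle per frame (stand-in for a mocap clip)
ref = [math.sin(t * 0.1) for t in range(100)]

def rollout(gain):
    """Run the 'policy' (a single P-gain) through crude damped joint dynamics."""
    angle, vel, reward = 0.0, 0.0, 0.0
    for target in ref:
        torque = gain * (target - angle)             # policy output
        vel = 0.9 * vel + 0.1 * torque               # physics constrains the motion
        angle += vel
        reward += math.exp(-(angle - target) ** 2)   # imitation-style reward
    return reward

# "training": random search over the one policy parameter
random.seed(0)
best_gain, best_reward = 0.0, rollout(0.0)
for _ in range(200):
    g = random.uniform(0.0, 2.0)
    r = rollout(g)
    if r > best_reward:
        best_gain, best_reward = g, r
print(best_gain, best_reward)

The real thing uses deep RL over full humanoid dynamics, but the reward structure (imitate the reference while physics pushes back) is the same idea.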

 

Here's a blog on the tech:

https://xbpeng.github.io/projects/DeepMimic/index.html

 

And a summary video:

 


2 hours ago, Mr.Vic20 said:

I really hope those are real.  You know what they say about a fool and his money...  :)

I really would like a new video card...  And these might count as "tangible improvements"?


Nvidia Shares RTX 2080 Test Results: 35 - 125% Faster Than GTX 1080

"The only way for performance to increase using DLSS is if Nvidia’s baseline was established with some form of anti-aliasing applied at 3840x2160. By turning AA off and using DLSS instead, the company achieves similar image quality, but benefits greatly from hardware acceleration to improve performance. Thus, in those six games, Nvidia demonstrates one big boost over Pascal from undisclosed Turing architectural enhancements, and a second speed-up from turning AA off and DLSS on. Shadow of the Tomb Raider, for instance, appears to get a ~35 percent boost from Turing's tweaks, plus another ~50 percent after switching from AA to DLSS.

In the other four games, improvements to the Turing architecture are wholly responsible for gains ranging between ~40 percent and ~60 percent. Without question, those are hand-picked results. We’re not expecting to average 50%-higher frame rates across our benchmark suite. However, enthusiasts who previously speculated that Turing wouldn’t be much faster than Pascal due to its relatively lower CUDA core count weren’t taking underlying architecture into account. There’s more going on under the hood than the specification sheet suggests."
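
For what it's worth, the two quoted multipliers compound roughly like this (illustrative arithmetic using the article's approximate SotTR figures, not new data):

turing_gain = 1.35   # ~35% from architectural tweaks
dlss_gain = 1.50     # ~50% more from swapping AA for DLSS
combined = turing_gain * dlss_gain
print(f"combined: {combined:.2f}x, i.e. ~{(combined - 1) * 100:.0f}% faster")
# -> combined: 2.03x, i.e. ~103% faster, toward the top of the quoted 35-125% range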


Man, if these results are even close to reality, and looking at the difference in specs between the 2080 and the 2080 Ti, the Ti is going to be an absolute beast. Ray tracing aside, it looks like real 4K/60 for current games may finally be here! 


2 hours ago, mikechorney said:

I really hope those are real.  You know what they say about a fool and his money...  :)

I really would like a new video card...  And these might count as "tangible improvements"?

 

These results panning out (and not being BS scores), on top of everything else, would still only get you to a "might"?

 


 

 

:p 

  • Haha 1

I'm thinking a bit more about what having this much tensor core power on a GPU means for gaming. I came up with a bizarre idea that I think would be a lot of fun for modders. One of the current trends in DL is pixel-to-pixel networks; this is actually what their supersampling and AA tech is. But one of the more fun things you can do is train artistic styles to turn regular images into something stylized. What this means for modders is they could train an art style and then apply it to a game to get a very different aesthetic. 
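
A rough sketch of how that could look mechanically, in PyTorch (TinyStyleNet is a made-up stand-in; a real mod would hook the game's frame output and load weights actually trained on the target art style):

import torch
import torch.nn as nn

class TinyStyleNet(nn.Module):
    """Made-up stand-in; imagine it trained to map frames into a cel-shaded look."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, kernel_size=3, padding=1), nn.Sigmoid(),
        )

    def forward(self, frame):
        return self.body(frame)

style = TinyStyleNet().eval()

# stand-in for a captured game frame, (N, C, H, W), values in [0, 1]
frame = torch.rand(1, 3, 270, 480)
with torch.no_grad():               # pure per-frame inference, no training here
    stylized = style(frame)
print(stylized.shape)               # same resolution, restyled output

Train one of these per art style and you've basically got shader packs on steroids.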


38 minutes ago, legend said:

I'm thinking a bit more about what having this much tensor core power on a GPU means for gaming. I came up with a bizarre idea that I think would be a lot of fun for modders. One of the current trends in DL is pixel-to-pixel networks; this is actually what their supersampling and AA tech is. But one of the more fun things you can do is train artistic styles to turn regular images into something stylized. What this means for modders is they could train an art style and then apply it to a game to get a very different aesthetic. 

Rendering sudden acid trips from "realistic" worlds to Cartoon worlds will be a snap! Finally, we have the technology to make "Cool World" the game right! :p


1 minute ago, Mr.Vic20 said:

Rendering sudden acid trips from "realistic" worlds to Cartoon worlds will be a snap! Finally, we have the technology to make "Cool World" the game right! :p

 

Given the potential, a lot of people are going to get real weird with it! :p 

