legend
Everything posted by legend

  1. Even if these results pan out (and aren't BS scores), with everything else we'd still only be at a "might?"
  2. The other DL-based tech I mentioned earlier that this hardware could see making its way out of research and into actual games is "animation" systems where movements are governed by intentional behavior that adapts to the environment. It's tech that both (1) makes the development pipeline much easier and (2) makes for more compelling, correctly reactive results. Here's a blog on the tech: https://xbpeng.github.io/projects/DeepMimic/index.html And a summary video:
  3. This is much better than I was hoping! Of course, this is Nvidia's testing, right? In which case it's a bit suspect. But if these numbers mostly hold up with more independent reviews, all the better!
  4. I'm completely supportive of waiting for independent reviews. If it tanks in a lot of important ways, then we have a problem! Until then, though, everything points in the direction I want to see it pointing.
  5. I'm not especially keen on hoping it progresses without consumer support, but it is possible developers and Nvidia will try to force it through regardless. I would certainly be more comfortable if that ends up being the case! I'm also still confused that more people are not excited about this, but were excited by the boring gains the 10 series brought.
  6. They're related in the same way any number of different game theory games are related, but they are still different games with their own nuances. PD is not a good analogy for this situation because you can't really apply any insights about its dynamics to the discussion at hand; the differences are too great. PG is closer. Life is mostly an iterated game. Very few scenarios in life are actually well modeled by one-off games. Psychological modeling work quickly had to abandon one-off games as models for this very reason. Gamers may very well not see ray tracing as the future of gaming that we should usher in; hence why I'm speaking out about it! There are a slew of social dynamics at play. Punishment is only one that's easy to demonstrate. We have tons of other tools at our disposal that cause rational cooperation, and humanity is actually receptive to them on many occasions! Good, I think this means the main question is really when this change can happen, and whether this series is actually the start of it. My estimate, which is provisional until we have more independent analysis, is that if this series is well adopted, it will result in developers being able to get some worthwhile improvements out of it, though probably not as much at the very first day-one release. The feature I was by far most excited by back when UE4 was first announced was their dynamic GI using ray casting through sparse voxel octrees (if we had our board history I could probably pull it up for you). Eventually they canned the feature because there just wasn't enough processing power to do it. Clearly they didn't think it was a total pipe dream, since they initially had it as a main selling point at the announcement of UE4. With dedicated hardware like this, better GI of that sort and others is now far more feasible.
I think it's telling that Nvidia didn't just do their own internal development, but got a lot of cooperation from MS with DX and Epic with UE to jump on this hardware and start supporting it. I have some insight as a computer scientist that causes me to be encouraged, though I am not a graphics expert. But all signs are pointing positive that the graphics experts do think this is a path we can start down. And the features we could get out of the gate, like qualitatively better GI, shadows, etc., seem far more significant to me than what we had with the 10 series, where for most people it was "look, now I can play this game at 90 fps maxed out instead of 70!" That's part of my bafflement at this response. Embracing this tech can yield far more appreciable changes in our gaming experience than the last gen and can pave the way for even greater changes. Yet it's being met with less interest than the profoundly boring 10 series revision.
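To make the iterated-game point concrete, here's a quick toy simulation (my own throwaway code, standard textbook PD payoffs, nothing from any particular source): two tit-for-tat players sustain cooperation indefinitely, while a defector only profits on the very first round before getting the punishment payoff forever after.

```python
# Toy iterated prisoner's dilemma. 'C' = cooperate, 'D' = defect.
# Payoffs are the standard (3,3)/(5,0)/(1,1) textbook values.
PAYOFF = {
    ('C', 'C'): (3, 3),
    ('C', 'D'): (0, 5),
    ('D', 'C'): (5, 0),
    ('D', 'D'): (1, 1),
}

def tit_for_tat(history):
    # Cooperate first, then copy the opponent's previous move.
    return history[-1] if history else 'C'

def always_defect(history):
    return 'D'

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b = [], []   # each holds the OPPONENT's past moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(hist_a)
        move_b = strat_b(hist_b)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (300, 300): sustained cooperation
print(play(always_defect, tit_for_tat))  # (104, 99): one exploit, then mutual punishment
```

The point isn't the exact numbers; it's that the one-off "defect is dominant" logic stops applying the moment the game repeats.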
  7. This isn't analogous to the prisoner's dilemma. The point I'm making is much more analogous to the public goods game. Second of all, in the prisoner's dilemma, defecting is only a Nash equilibrium in finite games with a known horizon between two people. In iterated games (and especially in a population), the better strategy (and a Nash equilibrium) is tit-for-tat, which amounts to cooperating but punishing those who defect. Third of all, a similar dynamic exists in the more analogous public goods game. If it's played in isolation, then to everyone's detriment the Nash equilibrium is indeed to not cooperate. But once you introduce other forms of social structure, that disappears. For example, punishment systems or enforcing agencies become the rational choice for each agent to employ. We have at our disposal the means to communicate socially (and, in more extreme cases, punish), which is the mechanism that causes rational cooperation in the public goods game. Of course, that it's rational to employ those methods doesn't mean people are rational actors. Indeed, if you're stuck with irrational actors, you can only do so much. For example, you could have in your population an actor who, unreasonably, simply doesn't respond to any social structures, punishments, etc. And humanity does have to deal with the fact that some of our population is exactly that: people so stuck on their simple reasoning that nothing you do will budge them. In those cases the outcome is capped by how many rational actors there are. I have bothered to argue this point on this board because I tend to think highly of the people here. You seem to be taking the position that we don't have these social mechanisms to collaborate. But this seems clearly wrong to me. Society is more able to collaborate over large distances than ever before. Even gamers have been able to band together to cause change within the gaming industry.
Fuck, many of us throw money at Kickstarters, which naturally comes with a high level of uncertainty about the product we'll actually get. That we can see so many gamers actually accomplish what you are claiming can't be done, and then turn their noses up at what Nvidia is doing while rationalizing that our decisions don't actually make a difference, blows my mind. Yes, and *this* is a valid argument against my position. I *could* be wrong about it. I don't think I am, no, but this is what I wish we were debating instead of getting stuck on whether we can make a difference with our purchasing decisions or whether long-term consequences are actually something real and important to rational decision making. It's confusing to me why you're coming from the position that ray tracing isn't the future of games. (I also think the DL capabilities are important too, but I recognize I'm a bit more biased toward wanting that for other reasons.) Can we start there? Do you really think it's at all likely that ray tracing will *never* be in gaming's future?
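Here's a toy public goods game showing the punishment dynamic I mean (all numbers are mine, chosen purely to keep the arithmetic clean; real experiments use different parameters): without punishment, free-riding pays; add even a crude punishment institution and contributing becomes the better move.

```python
# Toy 4-player public goods game. Each player has an endowment of 10;
# contributions go into a pot, are multiplied, and split evenly.
# Multiplier < N, so free-riding is individually dominant in isolation.
ENDOWMENT = 10
MULTIPLIER = 2.0
N = 4

def payoffs(contributions, fine=0, fee=0):
    # fine: penalty levied on non-contributors
    # fee:  cost contributors pay to fund the enforcement institution
    pot = sum(contributions) * MULTIPLIER
    share = pot / N
    out = []
    for c in contributions:
        p = ENDOWMENT - c + share
        if fine or fee:
            p -= fine if c == 0 else fee
        out.append(p)
    return out

# No punishment: the lone free-rider earns 25, a full contributor in an
# all-cooperate group would only earn 20 -- defecting pays.
print(payoffs([10, 10, 10, 0]))                 # [15.0, 15.0, 15.0, 25.0]

# With punishment (fine 8 on defectors, fee 1 on contributors), the
# free-rider earns 17 while contributing in an all-cooperate group
# earns 19 -- cooperation is now the rational choice.
print(payoffs([10, 10, 10, 0], fine=8, fee=1))  # [14.0, 14.0, 14.0, 17.0]
print(payoffs([10, 10, 10, 10], fine=8, fee=1)) # [19.0, 19.0, 19.0, 19.0]
```

Same game, same players; the only thing that changed is the social structure around it.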
  8. No, it's still a perfectly fine analogy, because the reasoning you're using is the same faulty reasoning. "Well, my individual contribution doesn't matter on its own" is bad reasoning. Populations whose agents adopt this reasoning do worse than populations that don't. This is a well-studied dynamic within both game theory and psychology. The reasoning you described is bad and you should throw it away as fast as possible! 1. I have responded to this above. 2. No, this is not a permissible response to the hypothetical. Decisions you, and others, make now can have long-term ramifications. In the hypothetical I gave you, failing to support the card causes a failure in further development, thereby causing a failure of developers delivering the gains you're hoping to find in the future. You realize that by conceding the hypothetical you're not conceding my entire claim, right? There are lots of further nuances we can discuss, but we must first establish common ground on some fundamental properties, because at this point I'm worried that there is a vast gulf between us that will otherwise prevent any further progress in the discussion. I really didn't think this would be a controversial hypothetical. Operating under the assumption that these are random frivolous innovations unless you see results immediately is a bad assumption.
  9. What we get is a function of how successful they are, both in terms of what we'll see possible on these specific cards and, to at least some degree, down the road on future tech. If the cards don't have much impact, we're probably only going to see a handful of games with the most cursory implementations that they can add as a bullet point, unless developers are overwhelmingly happy to push it along despite consumers not buying in. By this same reasoning, voting is pointless, because your single vote won't matter. This thinking really worked out well for us in 2016. To be clear, no, this scenario is in no way even remotely comparable in *significance* to voting or not voting in 2016. But the reasoning is very much the same and wrong for the same reasons. The only rational question is whether buying the card and supporting it helps to usher in a future you want. I'm happy to debate that point. In fact, depending on what independent reviews reveal, I could very much change my tune! But pretending that our decisions have no impact on the future, or that we shouldn't consider the future effects of our purchasing decisions, is a non-starter. Let me paint a hypothetical to try to gain grounding on how we can go about discussing this, because I feel like the same things are just being said on either side of this discussion and none of us are making progress. Suppose at release it is as you described: parity performance, and the initial ray-tracing and DL-enhanced games provide modest performance boosts. Now, for the hypothetical, suppose there are two possible futures. In one, only Vic and Stepee buy the cards and no one else in the world does. Because it's a massive failure, Nvidia goes back to making pure raster cards in response.
In another future, the cards are a major success, developers continue to work on software that exploits the architecture, and in another year and a half we start seeing games that, even running on this initial 20x line, look better than what devs could do at the start; further down the road we get cards that can do even greater things. Claiming that these are the only possible futures is of course absurd. It could also be that the cards never really enable a lot more and what we see at the start is all we get. But I want to make sure we can at least agree that in this absurd scenario, it would be better if people got on board at the start, despite things being only modest improvements out of the gate for limited games. If we cannot agree on that, then we've got a much bigger bridge to cross. If we can agree on it, though, then we've reached common ground on a couple of sticking points that keep coming up: 1. encouraging others' behavior is an important element of decision making; and 2. long-term results are important factors beyond what we'll see immediately out of the gate. And if we can get beyond that, then we can move on to what I think are the far more relevant points, things like "How likely is it that this architectural shift is important to gaming?", "How likely are Nvidia and developers to change strategy if the cards don't sell well?", "What level of current raster performance is an acceptable tradeoff for future gains?", etc.
  10. I don't know that it's the future, no. But I'm pretty damn confident staying with pure raster isn't going to lead us anywhere great, and it's quite surprising that you think the status quo is okay and headed anywhere but a dead end. It would also be surprising to me if the future of graphics goes somewhere that's neither ray tracing nor raster. In all these years, ray tracing hasn't gone away. No different approach has been embraced, unless you count hybrid approaches, which is exactly what Nvidia is doing. At a certain point, the approximation model of rasterization hits a wall. And if the community is not willing to support a solid take on something new because there is any uncertainty at all, then we have a really long, shitty road ahead of us. I'm not even talking about the 10 series alone. I'm talking about the overall trend of graphics. That means PC GPUs and console generations, which run over longer intervals (and which these days are closely tied to what PC GPUs can do). I honestly don't know what to tell you if you think people haven't been expressing a sense of diminishing returns. I see it rather frequently. I've previously said that if people can't afford these prices, I sympathize entirely. And who said anything about "throwing away" the money? My very argument is that there is a long-term benefit here that people should be incorporating into their decision making. Again, you keep appealing to "have to" arguments when I'm very explicitly rejecting that entire line of thinking, so I don't know why you keep returning to it. Ignoring future consequences is, definitionally, myopic.
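For anyone curious what the core primitive actually is, here's a tiny sketch (my toy code, nothing to do with Nvidia's actual pipeline or any real renderer) of the shadow-ray test that hybrid ray tracing builds on. Rasterization approximates visibility-to-the-light with shadow maps; a ray test answers it exactly per point, which is where the qualitative jump in shadows and GI comes from.

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    # Solve |o + t*d - c|^2 = r^2 for t, a quadratic in t.
    # Returns True if the ray hits the sphere in front of the origin.
    oc = tuple(origin[i] - center[i] for i in range(3))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(oc[i] * direction[i] for i in range(3))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return False
    t1 = (-b - math.sqrt(disc)) / (2 * a)
    t2 = (-b + math.sqrt(disc)) / (2 * a)
    return t1 > 1e-6 or t2 > 1e-6

def in_shadow(point, light_pos, occluders):
    # Shadow ray: cast from the shaded point toward the light and test
    # each occluding sphere. (Simplified: ignores hits beyond the light.)
    # No shadow-map resolution artifacts -- just a geometric yes/no.
    direction = tuple(light_pos[i] - point[i] for i in range(3))
    return any(ray_hits_sphere(point, direction, c, r) for c, r in occluders)

light = (0.0, 10.0, 0.0)
blocker = ((0.0, 5.0, 0.0), 1.0)  # sphere sitting between light and origin
print(in_shadow((0.0, 0.0, 0.0), light, [blocker]))  # True: blocked
print(in_shadow((5.0, 0.0, 0.0), light, [blocker]))  # False: clear path
```

A real renderer fires millions of these per frame against full scene geometry, which is exactly why dedicated hardware for it matters.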
  11. Let's be precise: I'll be confused at the community disappointment if performance on conventional games remains top tier while providing a major advance in architecture that will yield some improvements now, and ultimately much greater ones. I don't think I'm overestimating anything. The stagnation of raster-based tech is immediately apparent. Both of the last two generations have left many people disappointed at the graphical progress being made. That's because we're running out of room for what we can do with conventional tech. We need to change it up or it's really going to stagnate. And it's not just ray tracing. Having strong DL capabilities on board is going to have ramifications for gaming beyond up-resing for limited rays (or resolution in general). This tech is moving faster than I would have expected, but with hardware like this, there are some important ways in which DL can be useful for gaming and gaming graphics. For example, by enabling intentional, physically based animation (a tech important not just for the resulting actors in the game, but for streamlining the whole animation pipeline). "It's not on us" isn't a useful way to frame things. No one is "obligated" to do anything, ever, so nothing is ever "on someone" to do. It's a simple question of "does supporting this encourage the future I want to see?" If you do not think this line of tech is useful, that it doesn't represent a future we should want to see, then we can have that discussion, because we may disagree. But it's simply true that you have to encourage what you want to see more of, independent of any "obligation" or who it's "on."
  12. I did see that post and thought I had addressed it, along with others. I will try to clarify, though. "Worse performance for the money" is an evaluation based purely on what these cards will do for games the moment they are released. I'm quite deliberately and explicitly making an argument that part of the reason we should get on board, beyond the immediate value and the extra bonuses we'll see in the near term, is to (1) encourage this hardware development track further; and (2) encourage software developers to work out the kinks by providing an audience for them to develop to. Indeed, I have also explicitly stated that a more traditional raster-focused card, as you've suggested as an alternative, would be far more disappointing to me. As far as a trade-off goes, I would be more sympathetic to the argument if these cards performed substantially worse than the 10x line. In fact, if third-party reviews reveal that the RTX line really takes a meaningful hit compared to the 10x line, I'll be more understanding of people having further hesitation. Depending on the hit, I might wait as well! If they end up still being top-of-the-line raster cards, though, I will remain puzzled at any community disappointment. I don't think we can ask for better than maintaining existing parity while introducing a whole new tech. Hitting that parity at all is pretty great.
  13. I'm not playing semantics; I'm trying to understand the root of where we disagree. If by "price gouging" you purely mean "expensive," then sure, I agree with you! If you mean it's unjust to ask that much, then I do disagree and want to know why you think that. This has nothing to do with an "ideal." I'm not using "principle" to mean aspiring to an aesthetic ideal. I mean it's a fundamental aspect of the mathematical machinery of decision making in populations of agents that affect each other. It would be a category mistake to pretend that this dynamic doesn't exist. I wasn't ignoring anything. If you think you provided a salient point that I have not read or considered, please point it out.
  14. Irrelevant. The fact that you should encourage what you want to see more of is a basic principle of all social decision making.
  15. It's expensive, yes. And that price can hurt, yes, which is why I can understand if someone just doesn't feel comfortable spending that much. Why is it price gouging, though?
  16. There are gradations to this. If you're already on something like a 1080 Ti, I understand not wanting to drop cash all over again. What I don't understand is the lack of excitement for these cards. What do people want Nvidia to do otherwise? Making big but important changes like this isn't going to be cheap, but I can't think of anything better to encourage. A regular upgrade to a standard raster card at the same price range as the last set is not exciting, and I'd be quite disappointed to continue to be stuck in that zone. But if we don't want to be stuck there, we have to put our money on the future we want to see.
  17. And in those cases I don't begrudge anyone for not being willing to pay the huge cost. But anyone who loves games and can otherwise swing high-end card costs should be willing to support this. And everyone else should be glad this is happening.
  18. I've been burned too many times. I will wait a bit longer, but if he pays a large fine and serves time, I will be quite pleased.
  19. Yes, absolutely. We've been on a run of diminishing returns in graphics for some time now. The only way out is to break free of old standards. Nvidia's complete embrace of a combination of raster + ray tracing + deep learning is a very promising direction to take. No, it won't be perfect out of the gate, but you're still getting top-of-the-line cards, and as a community we ought to be encouraging this direction for our hobby.
  20. I suppose they could then make a case, for the time being, for sticking with Gsync on monitors, since there is no standard there, while supporting 2.1's VRR for TVs. It would still be ridiculous, though, because requiring such special hardware is a terrible plan and I really wish they would do away with it.
  21. It's perfectly fine for Gsync to coexist with 2.1 VRR, but it seems completely pointless for it to do so if they support 2.1 VRR, so I'm more worried about them exclusively supporting Gsync for VRR. As we found in another discussion, you don't need to be fully 2.1 compliant to support 2.1 VRR, so lack of full 2.1 support is at least not a blocker for that. Although if you wanted other 2.1 features, then yeah, that might be an issue.
  22. Nvidia stubbornly sticking with Gsync would indeed be awful.
  23. Beyond the silliness of the company's hype machine, I think we should also praise the company as a whole. They ultimately are making a large business decision to push the market forward. It comes down to this: if you want the market to embrace this change, then we should be okay with paying the cost of getting there. Here's a world that would suck, IMO: the market doesn't embrace the new cards, so Nvidia goes back to only pushing on raster, and it takes far longer to get to this next world of architecture, both because Nvidia has stopped focusing on it and because developers have no audience to justify working on the software end. All things considered, it's a commendable job to maintain top-end raster parity on their top-end cards while introducing a whole lot more. I don't think we can really ask for better than that, so if we want to see more of this, we should reward the company for trying to do it. This is an occasion where Vic's compulsion to buy the newest, greatest hardware really would be better if adopted by more people.
  24. This is my take as well, and frankly, I'm okay with it, because where they have delivered, they really have delivered. They're making an enormous shift in architecture and developers are just now starting to use it. It's not going to deliver super high performance on the new stuff immediately. As long as it's not a losing proposition for standard pure raster methods (meaning slower than existing GTX cards), there's really not a trade-off. Maybe if AMD were able to compete and were pushing out even better raster cards, there'd be a trade-off to consider. But there's not. It's still going to be the fastest raster card on the market, and it introduces a whole new world of rendering with a wildly different architecture. I think we *should* be rewarding Nvidia for this. Despite being king, they haven't rested on their laurels. They've put in the work to start the next generation of graphics and are pulling it off. People shrugging off what they've done is odd to me.