Posts posted by legend

  1. 24 minutes ago, SFLUFAN said:

    If Cohen had anything of value to offer the Special Counsel, then the SDNY would've offered a "cooperative plea agreement".  Let's not forget that it was the Special Counsel that referred the case to the SDNY after discovering information that was outside the scope of their investigation, so the Special Counsel would've already had a pretty damned good idea of whether Cohen had anything of value for them.

     

    Cohen's got nothing.

     

    Yeah, I think that's probably true as well. It seems like a Hail Mary, particularly because, unless he has concrete evidence to offer that they don't already have, his statement is pretty useless.

  2. https://nypost.com/2018/08/21/cohen-willing-to-tell-mueller-about-conspiracy-to-collude-lawyer/

     

    Quote

    Michael Cohen is willing to speak with Special Counsel Robert Mueller about a “conspiracy to collude” with Russia during the 2016 presidential campaign, his lawyer said on Tuesday night.

     

    Quote

    “Not just about the obvious possibility of a conspiracy to collude and corrupt the American democracy system in the 2016 election, which the Trump Tower meeting was all about, but also knowledge about the computer crime of hacking and whether or not Mr. Trump knew ahead of time about that crime and even cheered it on.”

     

     

    We'll see.

    I'm thinking a bit more about what having this much tensor core power on a GPU means for gaming. I came up with a bizarre idea that I think would be a lot of fun for modders. One of the current trends in DL is pixel-to-pixel (image-to-image) networks; this is actually what Nvidia's super-scaling and AA tech is. But one of the more fun things you can do is train artistic styles to turn regular images into something stylized. What this means for modders is they could train an art style and then apply it to a game to get a very different aesthetic.
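
    To make that concrete, here's a minimal sketch of the optimization-based style transfer technique (Gatys-style Gram matrices over pretrained VGG features) in PyTorch. The layer indices, weights, and step counts are illustrative assumptions, not anything Nvidia ships; a real-time game mod would use a fast feed-forward variant, but this shows the core idea:

    ```python
    # Minimal Gram-matrix style transfer sketch (Gatys et al.) in PyTorch.
    # Layer indices are the commonly used ones for torchvision's VGG19.
    import torch
    import torch.nn.functional as F
    from torchvision import models

    device = "cuda" if torch.cuda.is_available() else "cpu"
    vgg = models.vgg19(pretrained=True).features.to(device).eval()
    for p in vgg.parameters():
        p.requires_grad_(False)

    STYLE_LAYERS = {0, 5, 10, 19, 28}   # conv layers used for style statistics
    CONTENT_LAYER = 21                  # conv layer used for content

    def features(x):
        style, content = [], None
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i in STYLE_LAYERS:
                style.append(x)
            if i == CONTENT_LAYER:
                content = x
        return style, content

    def gram(x):
        # Channel-by-channel correlations: the "style" statistics.
        b, c, h, w = x.shape
        f = x.view(b, c, h * w)
        return f @ f.transpose(1, 2) / (c * h * w)

    def stylize(content_img, style_img, steps=300, style_weight=1e6):
        # Optimize the output image to match content features of one image
        # and style (Gram) statistics of another.
        target = content_img.clone().requires_grad_(True)
        opt = torch.optim.Adam([target], lr=0.02)
        style_grams = [gram(s) for s in features(style_img)[0]]
        content_ref = features(content_img)[1].detach()
        for _ in range(steps):
            opt.zero_grad()
            s_feats, c_feat = features(target)
            loss = F.mse_loss(c_feat, content_ref)
            for sf, g in zip(s_feats, style_grams):
                loss = loss + style_weight * F.mse_loss(gram(sf), g)
            loss.backward()
            opt.step()
        return target.detach()
    ```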

  4. 5 hours ago, CitizenVectron said:

    https://www.msn.com/en-us/news/politics/cohen-launches-gofundme-with-dollar500000-goal/ar-BBMhy2w?ocid=spartanntp

     

    Michael Cohen has launched a $500,000 gofundme page to raise funds for his legal team so that he can tell the truth about Trump. lol. I am 100% behind him going after Trump, but all I'm donating are:

     

    [image: sutherland-springs-tx---november-5--law-]

     

    :rofl:

  5. 3 minutes ago, Reputator said:

     

    "DLSS takes advantage of our tensor cores’ ability to use AI. In this case it’s used to develop a neural network that teaches itself how to render a game. It smooths the edges of rendered objects and increases performance."

     

    This sounds similar in premise to AMD's SenseMI technology, doing adaptive optimization and speculative execution, which is definitely something new and interesting in the world of GPUs. Granted, narrowed in focus to the task of anti-aliasing, but hopefully the concept opens up to more rendering tasks in the future. It could have implications for all types of rendering, raytracing or otherwise.

     

    The other DL-based tech I mentioned earlier that this hardware could help move out of research and into actual games is "animation" systems where the movements are governed by intentional behavior that adapts to the environment. It's tech that both (1) makes the development pipeline much easier and (2) makes for more compelling, correctly reactive results.
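
    For a rough sense of how that works, here's a hedged sketch of a DeepMimic-style RL reward: an imitation term that keeps the character near a reference motion clip, blended with a task term so the motion adapts to a goal. The weights and function shapes are illustrative approximations of that family of objectives, not the paper's exact formulation:

    ```python
    # Hedged sketch of a DeepMimic-style reward for physics-based animation.
    # All weights and scales below are illustrative assumptions.
    import numpy as np

    def imitation_reward(pose, ref_pose, vel, ref_vel):
        # Exponentiated tracking errors keep the character near the mocap clip.
        pose_err = np.sum((pose - ref_pose) ** 2)
        vel_err = np.sum((vel - ref_vel) ** 2)
        return 0.65 * np.exp(-2.0 * pose_err) + 0.10 * np.exp(-0.1 * vel_err)

    def task_reward(root_pos, goal_pos):
        # E.g., move toward a target the designer sets at runtime.
        return np.exp(-0.5 * np.sum((root_pos - goal_pos) ** 2))

    def reward(pose, ref_pose, vel, ref_vel, root_pos, goal_pos, w_task=0.25):
        # The blend is what lets motions stay natural *and* adapt to the goal.
        return (1 - w_task) * imitation_reward(pose, ref_pose, vel, ref_vel) \
               + w_task * task_reward(root_pos, goal_pos)
    ```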

     

    Here's a blog on the tech:

    https://xbpeng.github.io/projects/DeepMimic/index.html

     

    And a summary video:

     

  6. 2 minutes ago, Mr.Vic20 said:

     

    This is much better than I was hoping! Of course, this is Nvidia's own testing, right? In which case it's a bit suspect. But if these numbers mostly hold up under independent review, all the better!

  7. 26 minutes ago, Reputator said:

    https://www.pcworld.com/article/3299456/components-graphics/custom-geforce-rtx-2080-and-geforce-rtx-2080-ti-graphics-card-preorders.html

     

    https://www.tomshardware.com/news/wait-to-buy-nvidia-geforce-rtx-gpus,37673.html

     

    https://www.extremetech.com/computing/275859-dont-buy-the-ray-traced-hype-around-nvidias-rtx-2080-family

     

    Major publications all over are encouraging people to wait for reviews to see how these cards perform under current workloads (you know, despite the risks to "social decision making" or whatever). I'm glad there's a voice of reason spreading after the hype train. Moral of the story: don't believe smoke and mirrors until you see strong, convincing evidence.

     

    I'm completely supportive of waiting for independent reviews. If it tanks in a lot of important ways then we have a problem! Until then though, everything points in the direction I want to see it pointing.

  8. 2 hours ago, TwinIon said:

    I mostly agree with this sentiment, but purely as a consumer and early adopter of tech, I'd caution against buying into the first generation, which this effectively is. I want to see it succeed, and I want to see ray tracing become the norm, but I think there is enough industry incentive to make that happen. It's a differentiating feature unlike anything we've seen in graphics tech in recent memory, making it an obvious profit incentive for nvidia. It should simplify lighting models for game creators, giving them an unusual incentive for adoption (at least compared to other high end features), especially once incorporated into the standard game engines out there.

     

    The only issue facing its eventual adoption is how far behind AMD is. If this is something AMD can compete at in relatively short order (a couple years at most), I can't think of a reason it won't become standard.

     

    I'm not especially keen on hoping it progresses without consumer support, but it is possible developers and Nvidia will try to force it through regardless. I would certainly be more comfortable if that ends up being the case!

     

    I'm also still confused that more people are not excited about this, but were excited by the boring gains the 10 series brought.

     

     

  9. 2 hours ago, mikechorney said:

     

    The public goods game and the prisoner's dilemma are actually related.  I don't believe the decision to buy a 20-series card is an "iterated game".  And I also don't believe that the majority of gamers are striving for (or believe in) the "greater good" of ray tracing.

     

    They're related in the same way any number of different game theory games are related :p They are still different games with their own nuances. PD is not a good analogy for this situation because its differences mean you can't really apply insights about its dynamics to the discussion at hand. PG is closer.

     

    Life is mostly an iterated game. Very few scenarios in life are actually well modeled by one-off games. Psychological modeling work has had to quickly abandon one-off games as models for this very reason.

     

    Gamers may very well not see ray tracing as the future of gaming that we should usher in. Hence why I'm speaking out about it!

     

     

    Quote

    Regardless, I don't believe there are external reward or punishment systems, here.  If this was a giant Kickstarter, with a commitment of millions of buyers required to sign up (and essentially collude), then I would agree with you.  No one is "banding together" here.

     

    There are a slew of social dynamics at play! Punishment is only one mechanism, and an easy one to demonstrate. We have tons of other tools at our disposal that induce rational cooperation, and humanity is actually receptive to them on many occasions!

     

    Quote

    Ray tracing is clearly capable of higher IQ than current techniques.  It is, however, much more computationally expensive.  I don't know where the "break-point" in technology that makes it worthwhile is.  Has it happened already with the GeForce 20-series cards?  Or, is it even possible?  I don't know.  I would have to defer to hardware/software engineers who understand the intricacies of both types of rendering to speak to that.

     

    Good, I think this means the main question is really when this change can happen, and whether this series is actually the start of it.

     

    My estimate, which is provisional until we have more independent analysis, is that if this series is well adopted, developers will be able to get some worthwhile improvements out of it, though probably not as much on day one.

     

    The feature I was by far most excited by back when UE4 was first announced was its dynamic GI using ray casting through sparse voxel octrees (if we had our board history I could probably pull it up for you :p ). They eventually canned the feature, though, because there just wasn't enough processing power to do it. Clearly they didn't think it was a total pipe dream, since they initially had it as a main selling point in the UE4 announcement. With dedicated hardware like this, better GI of that sort and others is now far more feasible.

     

    I think it's telling that Nvidia didn't just do their own internal development, but got a lot of cooperation from MS with DX and Epic with UE to jump on this hardware and start supporting it. Being a computer scientist gives me some insight that leaves me encouraged, though I am not a graphics expert. But all signs point to the graphics experts thinking this is a point where we can start.

     

    And the features we could get out of the gate, like qualitatively better GI, shadows, etc., seem far more significant to me than what we had with the 10 series, where for most people it was "look, now I can play this game at 90 fps maxed out instead of 70!" That's part of my bafflement at this response. Embracing this tech can yield far more appreciable changes in our gaming experience than the last gen did, and can pave the way for even greater changes. Yet it's being met with less interest than the profoundly boring 10 series revision.

  10. 1 hour ago, mikechorney said:

    Economic theory suggests people act in their own self interest, and make purchases that increase their utility.  As a classic prisoner's dilemma, game theory suggests that the majority of people don't act in the way you suggest. 


    If I had a way of influencing the collective and somehow convincing a critical mass of people to, as a group, make a different choice, then perhaps it would be rational to spend money in this way.

     

    This isn't analogous to the prisoner's dilemma. The point I'm making is much more analogous to the public goods game.

     

    Second of all, in the prisoner's dilemma, defecting is only a Nash equilibrium in finite games with a known horizon between two people. In iterated games (and especially in a population), a better strategy (and also a Nash equilibrium) is tit-for-tat, which amounts to cooperating, but punishing those who defect.
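
    A quick toy simulation makes the point. I'm assuming the standard PD payoffs (T=5, R=3, P=1, S=0), with a continuation probability standing in for the unknown horizon:

    ```python
    # Iterated prisoner's dilemma with an unknown horizon: tit-for-tat
    # sustains cooperation, while always-defect gets punished forever after.
    import random

    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def tit_for_tat(opponent_history):
        # Cooperate first, then copy the opponent's last move.
        return "C" if not opponent_history else opponent_history[-1]

    def always_defect(opponent_history):
        return "D"

    def play(p1, p2, continue_prob=0.99, seed=0):
        rng = random.Random(seed)
        h1, h2 = [], []          # moves each player has seen from the other
        s1 = s2 = 0
        while True:
            a, b = p1(h1), p2(h2)
            r1, r2 = PAYOFF[(a, b)]
            s1, s2 = s1 + r1, s2 + r2
            h1.append(b)
            h2.append(a)
            if rng.random() > continue_prob:   # game may end any round
                return s1, s2

    print(play(tit_for_tat, tit_for_tat))     # sustained mutual cooperation
    print(play(tit_for_tat, always_defect))   # exploited once, then punishes
    ```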

     

    Third of all, a similar dynamic exists in the more analogous public goods game. If it's played in isolation, then, to everyone's detriment, the Nash equilibrium is indeed to not cooperate. But once you introduce other social structures, that disappears. For example, punishment systems or enforcing agencies become the rational choice for each agent to employ. We have at our disposal the means to communicate socially (and, in more extreme cases, punish), which is the mechanism that makes cooperation rational in the public goods game.
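
    Here's a toy version of that dynamic, loosely in the spirit of the classic punishment experiments (Fehr & Gächter). The endowment, multiplier, and fine sizes are made-up illustrative numbers:

    ```python
    # One round of a public goods game, with an optional costly-punishment
    # stage. Parameter values are illustrative assumptions.
    def pgg_round(contributions, endowment=20, multiplier=1.6,
                  punish_cost=1, punish_fine=8, punishers=()):
        n = len(contributions)
        pot = sum(contributions) * multiplier
        payoffs = [endowment - c + pot / n for c in contributions]
        # Each punisher pays punish_cost per free rider; each free rider
        # loses punish_fine to every punisher targeting them.
        for i in punishers:
            for j, c in enumerate(contributions):
                if j != i and c == 0:
                    payoffs[i] -= punish_cost
                    payoffs[j] -= punish_fine
        return payoffs

    # No punishment: the free rider (last player) out-earns the contributors.
    print(pgg_round([20, 20, 20, 0]))                      # [24.0, 24.0, 24.0, 44.0]
    # With punishment available, free riding stops paying, so contributing
    # becomes the stable choice.
    print(pgg_round([20, 20, 20, 0], punishers=(0, 1, 2))) # [23.0, 23.0, 23.0, 20.0]
    ```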

     

    Of course, the fact that it's rational to employ those methods doesn't mean people are rational actors. Indeed, if you're stuck with irrational actors, you can only do so much. For example, your population could contain an actor who simply doesn't respond to any social structures, punishments, etc., however unreasonable that is. And humanity does have to deal with the fact that some of our population is exactly that: people so stuck on their simplistic reasoning that nothing you do will budge them. In those cases the outcome is capped by how many rational actors there are.

     

    I have bothered to argue this point on this board because I tend to think more highly of the people here.

     

    You seem to be taking the position that we don't have these social mechanisms for collaboration. But this seems clearly wrong to me. Society is more able to collaborate over large distances than ever before. Even gamers have been able to band together to cause change within the gaming industry. Fuck, many of us throw money at Kickstarters, which naturally come with a high level of uncertainty regarding the product we'll actually get.

     

    That we can see so many gamers actually accomplish what you are claiming can't be done, and then turn their noses up at what Nvidia is doing while rationalizing that our decisions don't actually make a difference, blows my mind.

     

     

    Quote

    Operating under the assumption that ray tracing is the panacea for game graphics before it can be demonstrated, in real games, is a reach.

    You obviously have a lot more confidence in ray tracing's viability than I do. 

     

    Does ray tracing have the potential to produce better results than rasterization?  Yes, but I don't know how significant they are in real world games.

    When can those better results be done without unacceptable reductions in frame rate and resolution?  I have no idea.  Maybe today.  Maybe 3 years from now. Maybe 50 years from now.  Maybe never.

     

    Yes, and *this* is a valid argument against my position. I *could* be wrong about it. I don't think I am, but this is what I wish we were debating, instead of getting stuck on whether we can make a difference with our purchasing decisions or whether long-term consequences are actually something real and important to rational decision making.

     

     

    It's confusing to me why you're coming from the position that ray tracing isn't the future of games. (I also think the DL capabilities are important, but I recognize I'm a bit more biased, for other reasons, toward wanting that.) Can we start there? Do you really think it's at all likely that ray tracing will *never* be in gaming's future?

     

  11. 43 minutes ago, mikechorney said:

     

     

    My vote in an election is essentially free, a 2080TI costs $1,200.  That's not a particularly good analogy.

     

    No, it's still a perfectly fine analogy, because the reasoning you're using is the same faulty reasoning. "Well, my individual contribution doesn't matter on its own" is bad reasoning. Populations whose agents adopt this reasoning do worse than populations whose agents don't. This is a well-studied dynamic in both game theory and psychology. The reasoning you described is bad and you should throw it away as fast as possible!

     

    Quote

    1.  My decision  to buy a card (or not buy a card) has no practical effect on encouraging behavior from anyone

    2.  Long term results don't mean anything for my short term decision.  I can buy a card later, once those benefits become evident

     

    1. I have responded to this above.

     

    2. No, this is not a permissible response to the hypothetical. Decisions that you, and others, make now can have long-term ramifications. In the hypothetical I gave you, failing to support the card causes a failure in further development, thereby preventing developers from delivering the gains you're hoping to find in the future.

     

    You realize that by conceding the hypothetical you're not conceding my entire claim, right? There are lots of further nuances we can discuss, but we must first establish common ground on some fundamental properties, because at this point I'm worried that there is a vast gulf between us that will otherwise prevent any further progress in the discussion. I really didn't think this would be a controversial hypothetical.

     

     

    Quote

    I don't know if the architectural decisions on the GeForce 20-series cards are good ones, or a disastrous diversion.  I've been hearing about ray tracing for 30 years (and had a friend playing with it in high school -- and I've been out of high school for a long time).  I remember this from that era:

     

     

    The only way I know if these are the right architectural decisions, and worthy of my hard-earned $$$, is whether my experiences are better TODAY.  When the software is there, there is a rational decision to be made about buying hardware.  These aren't consoles where, in the past at least, you could justify the purchase knowing that there would be 5-7 years of fixed hardware.  Graphics cards are on an 18-month generation upgrade cadence.

     

    In a similar way, I actually believe that VR is the future -- but I thank god I haven't bought a headset, because there isn't enough good software to justify a purchase.  It would be sitting in the corner of my office gathering dust.  And when I do buy the hardware, I know that it will be significantly better than I would have otherwise bought.

     

    I spend too much money in a frivolous manner on stuff that actually makes a difference to me.  $1,200 is a big number, but not ridiculous; I just spent $500 for a one-day outing for the family to ride roller coasters.  But I won't spend money to encourage NVidia to randomly innovate -- unless those innovations bring me tangible gaming benefits.

     

    If in a month, these cards are setting the world on fire with "OMG AWESOME" graphics, I'll be first in line to buy one.  But to be honest, I really don't expect that to happen.  

     

     

    Treating these as random, frivolous innovations unless you see immediate results is a bad assumption.

  12. 2 hours ago, mikechorney said:

    I am willing to pay the cost for things that provide a tangible benefit for my gaming experience.  What benefits would I get from these cards?  If it results in marginally better shadows, at the cost of significantly lower framerate and resolution -- that's something I can't support.  But, I'll wait for the hands-on reviews to see how it performs on games.

     

    What we get is, at least to some degree, a function of how successful they are: both in what we'll see made possible on these specific cards and in future tech down the road. If the cards don't have much impact, we're probably only going to see a handful of games with the most cursory implementations, added as a bullet point, unless developers are overwhelmingly happy to push it along despite consumers not buying in.

     

    Quote

    Fact:  Nvidia won't change any of their decisions if I (alone) decide to not buy a video card.  Not one single developer will decide to make (or not make) software decisions based on me buying a video card.  Other people buying cards is completely independent of me buying a video card.  Therefore, my choice to buy a video card has no influence on the future of "ray tracing".  The only rational basis (economically speaking) for deciding whether to upgrade my card is whether the RTX-20 series provides tangible benefits to my gaming experience.

     

    By this same reasoning, voting is pointless, because your single vote won't matter. That thinking really worked out well for us in 2016.

     

    To be clear, no, this scenario is not even remotely comparable in *significance* to voting or not voting in 2016. But the reasoning is very much the same, and wrong for the same reasons.

     

    The only rational question is whether buying the card and supporting it helps to usher in a future you want. I'm happy to debate that point. In fact, depending on what independent reviews reveal, I could very much change my tune! But pretending that our decisions have no impact on the future, or that we shouldn't consider the future effects of our purchasing decisions, is a non-starter.

     

    Quote

    It is not simply true that we should support cards that lean into ray tracing.  If we learn that these cards somehow provide much better performance than I expect on "rasterized" games, or that the ray-traced games look significantly better than the SotTR demo indicated, or whatever, then we should all go out and buy these cards.

     

    However, if a 2080 performs only marginally better than a 2-year old 1080, and costs hundreds more, it doesn't make much sense for anyone to buy one before there is software available that justifies the experience.

     

    Let me paint a hypothetical to try and gain grounding on how we can go about discussing this, because I feel like the same things are just being said on either side of this discussion and none of us are making progress.

     

    Suppose at release it is as you described: parity raster performance, and the initial ray-tracing and DL-enhanced games provide only modest boosts. Now, for the hypothetical, suppose there are two possible futures.

     

    In one, only Vic and Stepee buy the cards and no one else in the world does. Because it's a massive failure, Nvidia goes back to making pure raster cards in response.

     

    In the other future, the cards are a major success, developers continue to work on software that exploits the architecture, and in another year and a half we start seeing games that, even running on this initial 20x line, look better than what devs could do at launch; further down the road, we get cards that can do even greater things.

     

    Claiming that these are the only possible futures is of course absurd. It could also be that the cards never really enable much more, and what we see at the start is all we get. But I want to make sure we can at least agree that in this absurd scenario, it would be better if people got on board at the start, despite there being only modest improvements out of the gate in a limited set of games. If we cannot agree on that, then we've got a much bigger bridge to cross.

     

    If we can agree on that though, then it means we've reached common ground on a couple of sticking points that seem to keep coming up. Those being that

    1. encouraging others' behavior is an important element of decision making; and

    2. long term results are important factors beyond what we'll see immediately out of the gate.

     

    And if we can get beyond that, then we can move on to what I think are the far more relevant points. Those being things like "how likely is it that this architectural shift is important to gaming?" "How likely are Nvidia and developers to change strategy if the cards don't sell well?" "What level of current raster performance is an acceptable tradeoff for future gains?" etc.

  13. 1 hour ago, Reputator said:

     

    This statement contains a bunch of assumptions about the future you literally can't make. "Remaining top tier" isn't an argument in favor of buying a new graphics card. I don't care how much it titillates your I/O ports.

     

    I don't know that it's the future, no. But I'm pretty damn confident that staying with pure raster isn't going to lead us anywhere great, and it's quite surprising that you think the status quo is okay and headed anywhere but a dead end.

     

    It would also be surprising to me if the future of graphics goes somewhere that's neither ray tracing nor raster. In all these years, ray tracing hasn't gone away, and no different approach has been embraced, unless you count hybrid approaches, which are exactly what Nvidia is doing. At a certain point, the approximation model of rasterization hits a wall.

     

    And if the community isn't willing to support a solid attempt at something new because there is any uncertainty at all, then we have a really long, shitty road ahead of us.

     

    Quote

    No one was disappointed at the performance of the 10 series. Prior to that, we were stuck on 28nm for three generations, so there was some stagnation, albeit out of the hands of any IHV. Aside from you, however, I've not heard anyone lament the stagnation of features.

     

    I'm not even talking about the 10 series alone. I'm talking about the overall trend of graphics. That means PC GPUs and console generations, which run on longer intervals (and which these days are closely tied to what PC GPUs can do). I honestly don't know what to tell you if you think people haven't been expressing a sense of diminishing returns. I see it rather frequently.

     

    Quote

     

    You have a very utopian-like concept for the role of consumers. Be realistic. Few people have the money to throw away on an investment in the future with very few short-term gains. Most of us aren't venture capitalists. We don't "have to" encourage anything if NVIDIA and the developers fail to give us incentives beyond empty promises. If you think otherwise, I have some snake oil you might be interested in.

     

    I've previously said that if people can't afford those prices, I sympathize entirely. And who said anything about "throwing away" the money? My very argument is that there is a long-term benefit here that people should be incorporating into their decision making.

     

    Again, you keep appealing to "have to" arguments when I'm very explicitly rejecting that entire line of thinking, so I don't know why you keep returning to it. Ignoring future consequences is, definitionally, myopic.

  14. 13 minutes ago, Reputator said:

     

    You'll be confused at any community disappointment if performance remains virtually the same, for the same cost? You GREATLY overestimate how much people care about the potential of raytracing.

     

    Let's be precise: I'll be confused at community disappointment if performance on conventional games remains top tier while providing a major advance in architecture that will yield some improvements now, and ultimately much greater ones.

     

    I don't think I'm overestimating anything. The stagnation of raster-based tech is immediately apparent. Both of the last two generations have left many people disappointed at the graphical progress being made. That's because we're running out of room for what we can do with conventional tech. We need to change it up or it's really going to stagnate.

     

    And it's not just ray tracing. Having strong DL capabilities on board is going to have ramifications for gaming beyond up-resing frames rendered with limited rays (or at reduced resolution in general). This tech is moving faster than I would have expected, and with hardware like this there are some important ways DL can be useful for gaming and gaming graphics; for example, intentional, physically-based animation (a tech important not just to the resulting in-game actors, but to streamlining the whole animation pipeline).
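
    To illustrate the up-resing side, here's a tiny SRCNN-style super-resolution sketch in PyTorch. DLSS's actual network and training setup are proprietary, so this only shows the general shape of the technique (learn a mapping from low-res frames to high-res ones); every layer size here is an assumption:

    ```python
    # Minimal learned up-resing sketch: a small SRCNN-style network that
    # maps a low-resolution frame to a 2x higher-resolution one.
    import torch
    import torch.nn as nn

    class TinySuperRes(nn.Module):
        def __init__(self, scale=2):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(3, 64, 9, padding=4), nn.ReLU(),
                nn.Conv2d(64, 32, 5, padding=2), nn.ReLU(),
                # PixelShuffle rearranges channels into a scale-x larger image.
                nn.Conv2d(32, 3 * scale * scale, 3, padding=1),
                nn.PixelShuffle(scale),
            )

        def forward(self, low_res_frame):
            return self.body(low_res_frame)

    net = TinySuperRes()
    frame = torch.randn(1, 3, 540, 960)   # e.g., a 960x540 rendered frame
    print(net(frame).shape)               # -> torch.Size([1, 3, 1080, 1920])
    ```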

     

    Quote

    Again, purchasing a first generation of an unproven technology because it will encourage the technology to advance isn't the job of consumers. Bring the horse, then you can have the cart. It's not on us to assume the risk, which given the pricing, would indeed be the case for all but the top-end 2080 Ti.

     

    This is a next generation product. New features are nice, but improving the performance of last gen cards is expected. Not a bonus.

     

    "It's not on us" isn't a useful way to frame things. No one is "obligated" to do anything ever, so nothing is ever "on someone" to do. It's a simple question of "does supporting this encourage the future I want to see?"  If you do not think this line of tech is useful, that it doesn't represent a future we should want to see, then we can have that discussion, because we may disagree. But it's simply true that you have to encourage what you want to see more of independent of any "obligation" or who it's "on."

  15. 10 minutes ago, Reputator said:

     

     

     

     

    I did see that post and thought I had addressed it, along with others. I'll try to clarify though.

     

    "Worse performance for the money" is an evaluation based purely on what these cards will do for games the moment they are released. I'm quite deliberately and explicitly making an argument that part of the reason we should get on board beyond the immediate value and and extra bonuses will see in near term is to (1) encourage this hardware development track further; and (2) encourage software developers to work out the kinks by providing an audience for them to develop to.

     

    Indeed, I have also explicitly stated that a more traditional raster-focused card, as you've suggested as an alternative, would be far more disappointing to me.

     

    As far as a trade-off goes, I would be more sympathetic to the argument if these cards performed substantially worse than the 10x line. In fact, if third-party reviews reveal that the RTX line takes a meaningful hit compared to the 10x line, I'll be more understanding of people's hesitation. Depending on the hit, I might wait as well!

     

    If they end up still being top of the line raster cards though, I will remain puzzled at any community disappointment. I don't think we can ask for better than maintaining existing parity while introducing a whole new tech. Hitting that parity at all is pretty great.
