
legend


Posts posted by legend

  1. 3 hours ago, Mr.Vic20 said:

    Yes, it's true! I also expect we won't see DLSS support out of the gate either. The whole launch seems like Nvidia only recently bothered to let others know what they were up to regarding the new tech. Which totally sounds like Nvidia's typical half-baked launches.

     

    In this case it sounds like MS is the holdup, since it depends on DirectX. Working on top of DirectX sounds like the right move, and ideally Nvidia would have timed the releases together, but that may have been difficult.

  2. 17 hours ago, Anzo said:

    Phone free calls with the LTE model

    Push notifications on your wrist (texts, emails, app specific messages)

    Walkie-talkie functionality with other Apple Watch users.

    App specific complications (Baseball game scores, air quality indices etc.)

     

     

     

    Thanks. I kind of want one just because I like tech like this, but whenever I think about it, it's never clear it would do anything for me. I can see how those features would be useful for some people, but sadly I don't think they'd do much for me.

  3. 19 minutes ago, sblfilms said:

     

    But would they say a *single* past action is the best predictor, or the collection of behaviors over a person’s entire life? How do you weight different eras of a person’s life with regards to determining the character in the now?

     

    Depends on the action. Some actions I wouldn't expect to see especially frequently, even from a bad actor, because the inherent risk means they only happen in the right circumstances. Such a property (when coupled with other dynamics) can mean a single observation actually carries a lot of evidential weight.

     

    Some single actions are also significant enough to require a fundamentally bad characteristic of the actor and would otherwise be incredibly uncommon without that trait.

     

    I'd put rape under both of the above.

     

     

     

    To be clear, I say this knowing I'm not an expert. This belief is merely a result of a mixture of my own experiences and an extrapolation from a collection of interactions with psychologists that were not explicitly about this topic. Consequently, I don't make this claim with a whole lot of confidence.

     

    Quote

    I also think we need to be sure we aren’t conflating character traits with personality traits. Personality is much more set and likely hereditary to a large degree, character far less so.

     

    Hmm, perhaps so. Can you elaborate on the distinctions between these, or point me in the direction of a discussion of them?

  4. 4 minutes ago, sblfilms said:

     

    What are some areas in which you see a direct connection between teenage behavior and late adult behavior? I do think life circumstances can play a role from one period to another, such as the differences caused by dropping out of high school vs. graduating from a good university.

     

    It's difficult for me to slice it up into specific, well-defined areas. Most people I know, though evolved, are still clearly the same person with many of the same traits. I'm also frequently told by psychologists that past behavior is still the best predictor, even if it's not a certainty.

     

    I'm open to changing my tune if we can find some studies on the topic.

  5. 5 minutes ago, sblfilms said:

     

    I don’t think, and I would argue the science of brain development backs this up, that anything you do as a 17 year old has any relation to your character as a 50 year old. These things aren’t set in stone, character develops over time. Really bad people turn into really good people, and the reverse is certainly true as well.

     

    I think you're right that a person can change a lot. But I don't think teenage behavior is completely uninformative (in a statistical sense) either. Consequently, I'm not sure we should take that risk for such a high-ranking position unless the person really is quite spectacular.

     

    Of course, we probably shouldn't be considering anyone who is not quite spectacular for a high-ranking position to begin with, so maybe the problem here is deeper :p

  6. 1 hour ago, Mr.Vic20 said:

    Very cool! The long-term benefits of this tech are the only way I see us ultimately getting to 8K and beyond! Intelligently regulated asset quality management, as opposed to a brute-force technique. And I'm particularly glad they are getting on this now, "before it's ready" (:p)! Can you imagine the kind of hardware needed for 8K at 60fps?! Good lord, even I know to avoid the death bath that will be the push for 8K, and that is saying something! :lol: And for those who believe we will merrily bop along using the classic "go smaller to go faster", we are rapidly running out of runway for that particular approach. Economizing leaps in power will become increasingly challenging with existing tech approaches.

     

    Yeah, without stuff like that 8K is a pipe dream for a long time :p Also, VR could stand to benefit a lot from these effective resolution increases.

  7. Immediate reaction to a black superman: "I dunno, seems stupid."

     

    Thought after reflecting: "Actually, fuck it. Let's do it. Who actually gives a shit? And if they really wanted, they could even incorporate America's racist tendencies against blacks to shape an even more interesting story about America's relationship with a god-like figure who is black."

  8. I don't think Nvidia has released a paper on their specific implementation. But there are plenty of papers that work on this topic. Just do a search on it:

    https://scholar.google.com/scholar?q=deep+learning+super+scaling&hl=en&as_sdt=0&as_vis=1&oi=scholart

     

     

     

     

    Also, please see my edit. I'm not talking out of my ass about this. I'm an expert in this field. Please don't tell me how neural nets work :p

  9. 31 minutes ago, mikechorney said:

     

    1.  Super-scaling does not require more knowledge.  By definition, you have the knowledge already with super-scaling

     

    Okay, see, this is why I asked the question to start. You're misunderstanding what's going on, and that's probably why you're having trouble understanding the answers I'm giving you. No, the information is not there. What DLSS is doing is taking a low-resolution image and upscaling it to a higher resolution. The information is *not* there in the image, because nowhere in the active rendering pipeline did the system render the scene at a higher resolution. It's rendered at a low resolution, and DLSS estimates from that low resolution what a higher-resolution image would look like.

     

    There is a fairly large body of work in deep learning looking at how to make reasonable "guesses", and what Nvidia is doing is applying that tech in a way custom-tailored for games.

     

    31 minutes ago, mikechorney said:

    I don't believe this is how neural networks work.  My understanding is that neural networks "learn" by testing algorithms (generally at random), evaluating the results, and reinforcing those algorithms that provide better results.  By providing more data (rather than less), the algorithms get better and more efficient. 

     

    Um... dude. Do you realize what I do? I'm an expert in machine learning. I work on deep learning networks daily. I've taught and mentored students in this topic, and I'm the research lead at my company doing reinforcement learning with deep neural networks for robotics.

  10. 56 minutes ago, mikechorney said:

    I am not asking about whether super-scaling or anti-aliasing is better. I am wondering why super-scaling needs to be tailored on a game-specific level?

     

    I'm asking you those questions to try and walk through why you're not understanding, because I already answered why, but you didn't understand :p

     

    I'll give the answers to the questions I posed, but if you would have actually given different answers to those questions, what I'm going to say won't help you, because there is something else you might be misunderstanding. So if you still don't understand after this, you need to work with me here, which means answering the questions I ask to walk you through the logic.

     

    Q1: "Tell me, if you play doom, will you ever see in the game an image of my office?"

    A: No. Doom will only ever show a very specific set of images, far different from many other possible images like my office, race tracks, space battles, etc.

     

    Q2: "Why are current anti-aliasing techniques that don't render in a higher resolution and down scale inferior to actually rendering in that higher resolution?"

    A: Because there is missing information in a low-res image. Consequently, running smoothing techniques, or anything short of fully rendering at a higher resolution, will never recover all of the actual missing information.

     

    Those answers reveal two things.

    1. From Q2 we conclude that to do better super-scaling, we need to somehow have access to more knowledge about what would actually be in the higher-resolution image.

    2. From Q1 we know that if there is only one game we're trying to work with, the total information it defines is *FAR* less than the total information covered by all games.

     

    When we train a neural network, we're compressing information from the source dataset down into a more compact form. In this setting, that means the neural net encodes information from the whole set of images from the game that you showed it. As a result, using a neural net we evade the missing-information problem faced by anti-aliasing: the structure and weights of the neural net actually give us access to information that isn't in the single current frame we're looking at!

     

    And if there are a lot of regularities in the training data (that is, the total information in the target task is small), then we can do a lot of compression and we also do not need a lot of examples. If outputs smoothly interpolate between two examples, we don't need to see the outputs at in-between values.

     

    Because a single game has a shit ton more regularities in its outputs than the space of all games, the neural net can much more effectively compress only the information that game constructs, and we don't need as many examples, because we can quickly latch onto the general regularities and safely exploit them across the game.

     

     

     

    It's *possible* that Nvidia does have enough data from existing games to make a single one-size-fits-all neural network for super-scaling. But it will always be computationally cheaper, and require less data, if you only have to worry about a narrower distribution (that is, work on a game-by-game basis). Consequently, if Nvidia is requiring per-game support, it's probably because a monolithic one-size-fits-all network is either too computationally expensive, or they simply don't have enough data to match the quality of a per-game network (or of tuning the net on a per-game basis with game-specific data).
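    To make the narrow-versus-broad point concrete, here's a toy sketch (my own illustration in numpy; the target function, sample budget, and model are all invented for the demo and have nothing to do with Nvidia's actual training setup): give a small fixed-capacity model the same number of examples on a narrow input range versus a wide one, and compare the error.

```python
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    """Stand-in for the 'true' function we want to learn."""
    return np.sin(5 * x)

def fit_error(lo, hi, n_train=20, degree=3):
    """Fit a degree-3 polynomial (a small, fixed-capacity model)
    on n_train samples drawn from [lo, hi], return test RMSE there."""
    x_train = rng.uniform(lo, hi, n_train)
    coeffs = np.polyfit(x_train, target(x_train), degree)
    x_test = np.linspace(lo, hi, 500)
    pred = np.polyval(coeffs, x_test)
    return np.sqrt(np.mean((pred - target(x_test)) ** 2))

narrow = fit_error(0.0, 0.5)  # "one game": a narrow slice of inputs
wide = fit_error(0.0, 5.0)    # "all games": the same budget spread thin
print(narrow, wide)
```

    Same model, same data budget: the narrow-range fit comes out far more accurate than the wide-range one, which is the per-game advantage in miniature.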

  11. 4 minutes ago, mikechorney said:

    Yes.  Infinity is bigger than almost infinity.

     

    The numbers are absolutely not infinite in either case. They're finite in both cases, and one is vastly smaller than the other.

     

    This still works for infinity, but I'd have to think about how to explain it in words without invoking mathematical concepts like Rademacher complexity :p So for now, let's stick to the easier-to-discuss scenario, which also happens to be the scenario we're in.

     

    Tell me, if you play doom, will you ever see in the game an image of my office?

     

    Quote

    I still don't understand why scaling an image would materially differ from game to game.  On a 3D rendered image, I don't understand what about a specific engine/game would make the "guessing" any different on a sub-pixel basis.  I'm not challenging you -- just trying to understand your POV to increase my own knowledge.

     

    Answer me this: why are current anti-aliasing techniques that don't render in a higher resolution and down scale inferior to actually rendering in that higher resolution?

  12. 3 minutes ago, mikechorney said:

    What makes that function simpler if it's game specific?  Any game can still make a near-infinite number of individual frames -- so the function is still using "rules".  Why would the "rules" on BFV be any different than CoD?

     

    Which is bigger, the space of all images that a game can generate from playing it, or the space of images of all possible games ever made?

  13. 17 minutes ago, mikechorney said:

    Why does anti-aliasing require less data if it is game specific?  I would have thought the process was more generic.

     

    It's not anti-aliasing (except under a broader definition of anti-aliasing that isn't going to help in understanding the issue). It's a machine-learned neural net that does super-scaling. It's "guessing" what data is missing in the low-res image from only the low-res image.

     

    Machine learning of this sort says: "Find a function F such that F(x) = y for a number of examples of x and y."

     

    For illustration, let's not worry about images. Suppose we were finding a function of two inputs to one output. We give the machine learning algorithm the following two pieces of data:

     

    F(3, 2) = 7

    F(1, 4) = 7.1

     

    If I then ask you what the value is for F(2, 3), you can probably give a guess that's not bad, because those inputs are pretty close to the examples we saw. For example, a guess of around 7.05, between the two outputs we saw, might not be unreasonable.

     

    But what if I asked you about F(203, -6.3)? Well, you can maybe extrapolate that far, but you're going to be much less confident about the answer, especially if the function ends up being really complex even around the examples you've seen. If we wanted to make predictions about those inputs too, we really ought to provide more data covering that region.

     

    If you want a single general-purpose neural net that works for every game without new training, you're going to have to provide huge amounts of data that cover the space of all conceivable game images well. But if you know you only have to learn the function for a narrow range of images (those generated by a specific game), then you can grab the data for it and it alone, and custom-tailor the learned function.
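    The two-example illustration above can be run directly. Here's a sketch using the simplest learner that can fit both examples, a linear model solved by least squares (my own toy; neural nets behave analogously near their training data, but this is not DLSS):

```python
import numpy as np

# The two training examples: F(3, 2) = 7 and F(1, 4) = 7.1
X = np.array([[3.0, 2.0, 1.0],   # inputs, plus a constant-1 column for a bias term
              [1.0, 4.0, 1.0]])
y = np.array([7.0, 7.1])

# lstsq returns the minimum-norm weights that fit both examples exactly.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict(a, b):
    return np.array([a, b, 1.0]) @ w

near = predict(2.0, 3.0)      # between the two examples
far = predict(203.0, -6.3)    # far outside anything we observed
print(near, far)
```

    Because (2, 3) sits exactly midway between the two training inputs, any linear fit through both examples predicts the average output, 7.05, there; the extrapolated guess at (203, -6.3) comes out around 250, nowhere near anything in the training data.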

  14. I'm deeply annoyed that to get the smallest screen size I have to pay more. I don't want a giant fucking phone. My 6 barely fits in my pockets as it is.

     

     

    1 hour ago, Emblazon said:

    There's zero reason not to go with the XR. 

     

    I will say the X design is significantly better than all the iPhones with home buttons. If you don't already have an X, it's more than worth the upgrade to one of the three X models just for the UI and Face ID. 

     

    I'm just catching up, why is there zero reason to go with the XR? Per feature, sure, but isn't the reason to pay less if you don't care about the rest?

     

    Disregard, I misread.

  15. 3 hours ago, mikechorney said:

    Why would the data be game/engine specific?  Wouldn't the data be more generic?  (i.e. wouldn't the "weights" be pretty standard across the board?)

     

    Neural nets are only as good as the data you give them. If you wanted one single general-purpose neural net that worked well for all games, you'd (1) need a very large neural net to be able to express everything; and (2) need a truly absurd amount of data to cover all the variations in images that games span.

     

    Is there enough data available to train such a net? It's possible, but I'm not sure. You get much better guarantees if you train on the actual data you want to upscale, and you can do it with a computationally simpler net.
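    Here's a one-dimensional toy of that "train on the data you want to upscale" idea (my own sketch in numpy; the signal family, grid sizes, and linear upscaler are all invented for illustration and are not how DLSS works): learn an upscaler from examples of one narrow "game" family, then compare it against generic interpolation on a held-out signal from the same family.

```python
import numpy as np

rng = np.random.default_rng(0)
t_hi = np.linspace(0.0, 1.0, 32, endpoint=False)   # high-res sample grid
t_lo = t_hi[::4]                                   # low-res grid (every 4th sample)

def make_signal(phase):
    """One member of the narrow 'game' family: a fixed-frequency sinusoid."""
    return np.sin(2 * np.pi * 3 * t_hi + phase)

# Training set: many high-res signals from the family, with their low-res versions.
phases = rng.uniform(0, 2 * np.pi, 200)
hi = np.stack([make_signal(p) for p in phases])    # shape (200, 32)
lo = hi[:, ::4]                                    # shape (200, 8)

# Learn a linear map from low-res to high-res via least squares.
W, *_ = np.linalg.lstsq(lo, hi, rcond=None)        # shape (8, 32)

# Held-out signal: learned upscaler vs. generic linear interpolation.
test_hi = make_signal(1.234)
test_lo = test_hi[::4]
learned = test_lo @ W
interp = np.interp(t_hi, t_lo, test_lo)            # knows nothing about the family

err_learned = np.sqrt(np.mean((learned - test_hi) ** 2))
err_interp = np.sqrt(np.mean((interp - test_hi) ** 2))
print(err_learned, err_interp)
```

    The learned map exploits the family's regularity (every signal is a sinusoid of the same frequency), so it reconstructs the held-out high-res signal almost exactly, while generic interpolation, which assumes nothing about the data, makes a much larger error.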
