NextGen

Member
  • Content count

    13,739

Community Reputation

697

About NextGen

Contact Methods

  • Website URL http://steamcommunity.com/id/ngdot24

Profile Information

  • Gender Male
  • Location Milwaukee, WI
  • PSN ID Canownage
  • Steam ID ngdot24

Recent Profile Visitors

32,726 profile views
  1. Fuck raid...

    Oh, emulated RAID? That's somehow worse than software RAID! Don't ever use it. Ever.
  2. I think @Sanborn is using something very similar for a Raspberry Pi; it's basically the same thing as what I do with pfSense (see the sketch after this list for the underlying DNS-blackhole idea): http://jacobsalmela.com/block-millions-ads-network-wide-with-a-raspberry-pi-hole-2-0/ https://vimeo.com/135965232
  3. Fuck raid...

    I mean, you aren't using a dedicated hardware RAID card, like something from LSI?
  4. Do it at the router level with pfSense. Win.
  5. Fuck raid...

    I'd ask for their resignation on the spot if they planned on doing that for any sort of mission-critical data storage. As for @mach250's issue, did you do some sort of software RAID?
  6. Found my next MMORPG

    Does it have the Nintendo Seal of Quality?
  7. Fuck raid...

      Holy shit no way 
  8. In my opinion, Tomb Raider 2016 is the best-looking game on PC.
  9. Free games and shit

    Splinter Cell would be amazing!
  10. Hitman will feature DX12, Async Compute*

    This is going to be an ongoing issue for Maxwell/Kepler (and even Fermi, I think?) era cards as the industry continues to shift to DX12. All of this was so Nvidia could check a box about TDP. It was shortsighted then, and it's shortsighted now. I honestly feel bad for people who own these cards and bought them intending to use them for any length of time; that's simply not possible with DX12.

    I have a strong inclination that this is why TR '16 launched on PC as a DX11 title despite being DX12 on XB1: Nvidia got their hooks into the PC version, gutted DX12, and released it as DX11. Traces of DX12 were left in the launch version for a reason, probably because VXAO wasn't/isn't ready for prime time yet and Nvidia didn't want to get lambasted in benchmarks for a sponsored title. Don't get me wrong, the game was fucking amazing regardless; this is just observation on my part. Could be totally wrong, but man.

    This is a real issue. It's going to have huge implications for PC gaming as a platform if the industry makes a quick transition to DX12.
  11. Hitman will feature DX12, Async Compute*

    You're focusing on the wrong point of the thread. @Mr.Vic20 hit it square on the head.
  12. Hitman will feature DX12, Async Compute*

    It's only exclusive because Nvidia left off the required hardware, so they could market how efficient their GPUs are.

    Again, it was Nvidia's engineers who never gave Maxwell a hope in hell of running DX12 proper. Not DX12 itself. Not AMD. Not me, not you. Nvidia. This is planned obsolescence.
  13. Hitman will feature DX12, Async Compute*

    AMD didn't "invent" DX12 with the intention of crippling Nvidia hardware, then prevent Nvidia from using any of its features. That's the fundamental difference between Gameworks and DX12 in this situation.

    DX12 isn't an AMD technology. DX12 simply leverages hardware at the GPU level; hardware that Nvidia decided to leave off their products because the necessary ACEs would have dramatically increased TDP and thermals... to AMD GCN levels. This was a design choice by Nvidia, not "sabotage" by AMD.
  14. Hitman will feature DX12, Async Compute*

    A fundamental component of DX12 isn't reverse Gameworks.
  15. *Assuming you have a GPU that supports Async Compute in the first place. That means you need GCN.

    Story can be read here: http://www.pureoverclock.com/2016/02/hitman-taking-a-contract-on-asynchronous-shaders/

    From the story: "AMD's advanced Graphics Core Next (GCN) architecture ... is currently the only architecture in the world that supports DX12's asynchronous shading." We can see this confirmed in Nvidia's own developer blog: https://developer.nvidia.com/dx12-dos-and-donts

    "Don’t toggle between compute and graphics on the same command queue more than absolutely necessary. This is still a heavyweight switch to make."

    Further proven by their official GameWorks VR Guide (page 31): https://developer.nvidia.com/sites/default/files/akamai/gameworks/vr/GameWorks_VR_2015_Final_handouts.pdf

    "All our GPUs for the last several years do context switches at draw call boundaries. So when the GPU wants to switch contexts, it has to wait for the current draw call to finish first. So, even with timewarp being on a high-priority context, it’s possible for it to get stuck behind a long-running draw call on a normal context."

    Maybe Pascal? (See the sketch below for what "async compute" actually looks like at the API level.)
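
For anyone following the DX12 back-and-forth above: "async compute" just means an application can create an independent compute command queue next to its usual graphics queue and feed both at once. Below is a minimal, hypothetical C++/D3D12 sketch of that, nothing more than queue creation with error handling trimmed; whether the two queues actually execute concurrently is up to the GPU, which is the whole GCN-vs-Maxwell argument in this thread.

    // Minimal async-compute setup in D3D12 (hypothetical sketch, not taken
    // from Hitman): one DIRECT queue for graphics, one independent COMPUTE
    // queue. On GCN the ACEs can execute the compute queue alongside
    // graphics; other hardware/drivers may serialize the two queues instead.
    #include <d3d12.h>
    #include <wrl/client.h>
    #pragma comment(lib, "d3d12.lib")

    using Microsoft::WRL::ComPtr;

    int main() {
        ComPtr<ID3D12Device> device;
        // Default adapter; feature level 11_0 is enough for command queues.
        if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                     IID_PPV_ARGS(&device))))
            return 1;

        // The usual graphics queue: accepts draw, compute, and copy work.
        D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
        gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
        ComPtr<ID3D12CommandQueue> gfxQueue;
        device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

        // A second, independent queue that only accepts compute/copy work.
        // Submitting compute here, instead of interleaving it on the DIRECT
        // queue, is what lets capable GPUs overlap it with rendering
        // ("asynchronous shading") rather than doing the heavyweight
        // graphics/compute switch Nvidia's dos-and-don'ts guide warns about.
        D3D12_COMMAND_QUEUE_DESC computeDesc = {};
        computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
        ComPtr<ID3D12CommandQueue> computeQueue;
        device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

        return 0;
    }

In a real renderer the two queues would be synchronized with an ID3D12Fence where the graphics work depends on the compute results; that part is omitted here.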
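And for the pfSense/Pi-hole posts (items 2 and 4): both boil down to DNS blackholing. Here is a rough, hypothetical C++ sketch of the idea behind Pi-hole's list-crunching step, assuming a hosts-format blocklist has already been downloaded to ads.txt; the file names and sinkhole address are illustrative, not Pi-hole's actual values. It turns the list into dnsmasq address= rules that answer every ad domain with 0.0.0.0 for the whole network.

    // Hypothetical sketch of DNS blackholing, the mechanism Pi-hole and a
    // pfSense DNS resolver both rely on. Reads a hosts-format blocklist
    // (lines like "0.0.0.0 ads.example.com") and writes dnsmasq rules that
    // answer every listed domain with 0.0.0.0, so clients never reach the
    // ad server. File names here are illustrative.
    #include <fstream>
    #include <iostream>
    #include <set>
    #include <sstream>
    #include <string>

    int main() {
        std::ifstream in("ads.txt");            // downloaded blocklist (assumed)
        std::ofstream out("01-adblock.conf");   // dnsmasq drop-in config
        std::set<std::string> domains;          // std::set dedupes and sorts

        std::string line;
        while (std::getline(in, line)) {
            if (line.empty() || line[0] == '#') continue;  // skip comments
            std::istringstream fields(line);
            std::string ip, domain;
            if (fields >> ip >> domain && domain != "localhost")
                domains.insert(domain);
        }

        // dnsmasq syntax: "address=/example.com/0.0.0.0" blackholes the
        // domain (and its subdomains) for every client that uses this box
        // as its DNS server.
        for (const auto& d : domains)
            out << "address=/" << d << "/0.0.0.0\n";

        std::cout << "Blackholed " << domains.size() << " domains\n";
        return 0;
    }

Point every client's DHCP-advertised DNS at the box running dnsmasq and the ad domains simply never resolve; no per-device ad blocker needed.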