
Xbox Series X | S OT - Power Your Dreams, update: FTC case unredacted documents leaked, including XSX mid-generation refresh, new gyro/haptic-enabled controller, and next-generation plans for 2028


Pikachu


7 minutes ago, Duderino said:

Keep in mind that NVMe drive should punch above its weight with the DirectStorage decompression:


When this tech eventually arrives on PC it will surpass the Series X, but the console will be competitive with today’s SSD speeds.

IMHO, current games don't benefit as much as they should from faster SSDs, because games aren't really designed around them (probably because the consoles' HDDs are so slow).

Given that DirectX is available on PC at the same time as on Xbox, games that utilize DirectStorage should (in theory) see a similar percentage improvement on both platforms (i.e. a PC with a PCIe 4.0 NVMe drive later this year will be blazing fast).
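To put rough numbers on the decompression point above (a back-of-envelope sketch; the 2.4 GB/s raw speed and ~2:1 ratio are Microsoft's quoted Series X figures, the PC drive speed is my assumption):

```python
# Toy model: with hardware decompression, data streams off the drive
# compressed, so effective throughput is roughly raw speed times the
# compression ratio. Illustrative only.

def effective_throughput(raw_gb_s: float, compression_ratio: float) -> float:
    """Effective GB/s delivered to memory after decompression."""
    return raw_gb_s * compression_ratio

series_x = effective_throughput(2.4, 2.0)  # MS quotes ~4.8 GB/s effective
fast_pc = effective_throughput(7.0, 2.0)   # hypothetical PCIe 4.0 drive

print(f"Series X: ~{series_x:.1f} GB/s, fast PC NVMe: ~{fast_pc:.1f} GB/s")
```

The same ratio applied to a faster raw drive is why a DirectStorage-equipped PC should see a similar percentage gain rather than the console pulling ahead.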


8 hours ago, crispy4000 said:

 

 

For the record, these were the most outlandish claims you made.  We now know the first was wrong.

I wasn't saying RT cores exactly like Nvidia's. I was saying AMD's equivalent to RT cores, and the X has that. Also, I guess you missed the part where they said they were using the MOST ADVANCED form of raytracing for the Minecraft demo, using path tracing, where the computational load was maxed. DF also said they weren't expecting much from the X's raytracing, just like you guys, but then reported that they were very surprised by the results they have seen so far.


2 hours ago, TomCat said:

I wasn't saying RT cores exactly like Nvidia's. I was saying AMD's equivalent to RT cores, and the X has that. Also, I guess you missed the part where they said they were using the MOST ADVANCED form of raytracing for the Minecraft demo, using path tracing, where the computational load was maxed. DF also said they weren't expecting much from the X's raytracing, just like you guys, but then reported that they were very surprised by the results they have seen so far.


You literally said the RT cores MS are buying would be affordable.  There are no RT cores.

 

 

It’s best now to wait for DF’s RT speculation video considering what we don’t know.  It is somewhat surprising AMD got this far without their own version of the cores.  But again, performance is a question until we have better metrics.  Including how the 2060 currently stacks up on that build of Minecraft.  Or even how something like Control would compare.


20 hours ago, crispy4000 said:


From the same Digital Foundry/Eurogamer article:

 


It’s as it says.  They’re developing new hardware processes to improve RT performance of standard cores, but it isn’t interchangeable with Nvidia’s method with tensor cores.

 

It's two different approaches to hardware-accelerated RT.  We got clarification of that today.  Performance comparisons will come later.

Just to clarify, the tensor cores don't do RT; they are used pretty much exclusively for DLSS at the moment.

That said, exactly what the Xbox RT hardware looks like, or how it does what it does, is still a mystery. Maybe it will be clarified during the PS5 reveal tomorrow, or maybe AMD is saving that info for their own announcement.


7 minutes ago, crispy4000 said:


You literally said the RT cores MS are buying would be affordable.  There are no RT cores.

 

 

It’s best now to wait for DF’s RT speculation video considering what we don’t know.  It is somewhat surprising AMD got this far without their own version of the cores.  But again, performance is a question until we have better metrics.  Including how the 2060 currently stacks up on that build of Minecraft.  Or even how something like Control would compare.

Dude, I never said AMD will use RT cores. I said they would have their own version of RT cores. I used RT cores as a reference; we don't know what AMD's raytracing hardware is called, so I'm using RT cores as a reference only. I already said that RT cores may not be the best way to do raytracing; you for some reason think that RT cores are the BEST and only way to get the same job done. My point through all of this is maybe AMD and MS came up with a more efficient way to do raytracing. But keep shitting on everything positive about the X; it's going to be a fucking BEAST


2 hours ago, TomCat said:

Dude, I never said AMD will use RT cores. I said they would have their own version of RT cores. I used RT cores as a reference; we don't know what AMD's raytracing hardware is called, so I'm using RT cores as a reference only. I already said that RT cores may not be the best way to do raytracing; you for some reason think that RT cores are the BEST and only way to get the same job done. My point through all of this is maybe AMD and MS came up with a more efficient way to do raytracing. But keep shitting on everything positive about the X; it's going to be a fucking BEAST

 

Yep, you did:

“also whats making the RT cores affordable for MS...”
 

This ‘own version of RT cores’ and ‘it was only for reference’ take is backpedaling.  

 

 

I’ve always been open to another method being more efficient than nVidia’s.  The problem is that it’s still such an unknown.  I don’t know if AMD’s new cards will be a “BEAST” for RT efficiency, I simply don’t have the faith you do.  I want to see the comparisons first, as we all should.

 

Personally, I’d rather not be stuck with a first generation RT product at the start of next gen, ie: something that performs like the first RTX cards did, other than perhaps the TI.  I’d rather wait for hardware that runs most next gen games with RT at faux-k.  Because that would be “BEAST” enough to feel like I wasn’t compromising resolution from how I play now.

 

That’s just my personal take though.  I’m fully expecting to wait to jump in.


1 hour ago, TomCat said:

Dude, I never said AMD will use RT cores. I said they would have their own version of RT cores. I used RT cores as a reference; we don't know what AMD's raytracing hardware is called, so I'm using RT cores as a reference only. I already said that RT cores may not be the best way to do raytracing; you for some reason think that RT cores are the BEST and only way to get the same job done. My point through all of this is maybe AMD and MS came up with a more efficient way to do raytracing. But keep shitting on everything positive about the X; it's going to be a fucking BEAST

So, after reading DF's write-up, your interpretation was that the SeX has something different from regular shader cores running custom software?


 

16 minutes ago, AbsolutSurgen said:

So, after reading DF's write-up, your interpretation was that the SeX has something different from regular shader cores running custom software?

Haven't actually read it yet. I've been saying the whole time that MS and AMD have their own hardware version of doing raytracing that will perform better than RT cores. Is that true? I don't know yet, but it's what I've deduced from the shit I've read


2 hours ago, TomCat said:

My point through all of this is maybe AMD and MS came up with a more efficient way to do Raytracing. 
 

 

24 minutes ago, TomCat said:

I've been saying the whole time that MS and AMD have their own hardware version of doing raytracing that will perform better than RT cores.


You can’t even keep your story straight.  It keeps changing.


16 minutes ago, TomCat said:

Both of them are saying the same thing: efficiency in gaming equals more power


maybe =/= will

 

Do you believe their method is ‘maybe’ more efficient, or that it ‘will’ be more efficient?

 

You couldn’t have meant both from the beginning (or now).  Perhaps your mind changed in the <2 hours between those posts.


Just some quotes from the article:

 

 so, we're interested in how the ray tracing hardware can be used to take techniques like this and then move them out to utilising the DXR cores. 

 

This is hugely exciting and at Digital Foundry, we've been tracking the evolution of this new technology via the DXR and Vulkan-powered games we've seen running on Nvidia's RTX cards and the console implementation of RT is more ambitious than we believed possible.

 

Now this one is very interesting. It says AMD and MS came up with a better way to do what tensor cores are doing for Nvidia.

 

The RDNA 2 architecture used in Series X does not have tensor core equivalents, but Microsoft and AMD have come up with a novel, efficient solution based on the standard shader cores. With over 12 teraflops of FP32 compute, RDNA 2 also allows for double that with FP16 (yes, rapid-packed math is back). However, machine learning workloads often use much lower precision than that, so the RDNA 2 shaders were adapted still further.

 

What I've been saying is MS and AMD came up with a better way of doing raytracing with their HARDWARE ACCELERATED version. I was calling them RT cores, but we now know they are DXR CORES
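For what it's worth, the "adapted still further" line in the quoted passage has concrete numbers elsewhere in the DF piece: roughly 49 TOPS at INT8 and 97 TOPS at INT4. Those line up with a simple model where throughput doubles each time precision halves (the doubling model is my sketch; the figures are DF's):

```python
# Sanity check of RDNA 2 low-precision throughput scaling:
# halving precision roughly doubles ops/sec on the same shader cores.

fp32_tflops = 12.15            # Series X FP32 compute
fp16_tflops = fp32_tflops * 2  # rapid-packed math: 24.3 TFLOPS
int8_tops = fp16_tflops * 2    # 48.6, ~ the quoted 49 TOPS
int4_tops = int8_tops * 2      # 97.2, ~ the quoted 97 TOPS

print(f"FP16 {fp16_tflops:.1f} TFLOPS | INT8 {int8_tops:.1f} TOPS | INT4 {int4_tops:.1f} TOPS")
```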


For the record:

 

1) You believe that AMD has developed a more efficient way to handle RT than Nvidia’s RT cores, correct?

 

2) If so, when would you expect AMD’s PC cards to generally outperform Nvidia’s offerings?

 

3) If they could, should Nvidia pivot to AMD’s approach?


I'm expecting AMD to do to Nvidia the same thing they just did to Intel: give them a kick in the ass. The PC cards will outperform Nvidia cards. From what I'm hearing, AMD is skipping 7nm+ and jumping to 5nm. That's why there was the confusion on what node the Xbox would be manufactured at. The SoC for Xbox will use 7nm enhanced, which is basically a hybrid of 7nm+; they get the full benefits of RDNA 2 but only get a 15% advantage in density.


I hate to interrupt this discussion about interpreting a bunch of PR tech speak, but perhaps we should consider how Minecraft RTX is expected to run on Nvidia cards?

 

Minecraft With Ray Tracing Will Likely Require RTX 2060 or Better for 1080p at 60 FPS

 

If true, that puts the current Series X efforts below the 2060, given the 30-60 FPS Digital Foundry is reporting.

 

That’s still quite a feat for what is likely a less costly GPU. If you have inflated expectations, though, this news might come as a bit of a disappointment.


1 hour ago, Duderino said:

I hate to interrupt this discussion about interpreting a bunch of PR tech speak, but perhaps we should consider how Minecraft RTX is expected to run on Nvidia cards?

 

Minecraft With Ray Tracing Will Likely Require RTX 2060 or Better for 1080p at 60 FPS

 

If true, that puts the current Series X efforts below the 2060, given the 30-60 FPS Digital Foundry is reporting.

 

That’s still quite a feat for what is likely a less costly GPU. If you have inflated expectations, though, this news might come as a bit of a disappointment.

I'm not sad at that fact at all, 'cause we got an unoptimized version of the experience. Kinda like when games first started using raytracing: the frame rates were abysmal but are getting better over time. The same will happen with what MS and AMD are doing with raytracing. What you guys are forgetting is MS has the best of both worlds: engineers and great software makers. But keep underestimating them.

 

LOL, after I said that, here you go. Sounds like my thinking:

 

 

 

 

The more staggering metric was the 380 billion intersections/sec. The 2080 Ti is capable of only 10 billion intersections/sec. It was noted, however, that the XSX intersections/sec figure does not equate to gigarays/sec, unlike the 2080 Ti metric. Not sure why Nvidia can claim 10 billion intersections/sec = 10 gigarays/sec. It seems the XSX metric is different in some way?

One other big fact: the ray tracing demo we viewed was done by ONE person.

The unavoidable truth about all this, regardless of DXR cores or RT cores or anything else, is that this will be a transitional period for ray tracing. The exciting thing about the Minecraft demo is that all light was ray traced. Build a game, put in light sources, automatically get awesome lighting. No need to build a whole new layer of "artificial" light maps and whatever else unless you really want to for whatever reason. That's a very exciting future, and it basically could mean better looking games for a lot less effort. However, it seems we're far from that point.

 

Ray tracing this generation will be something that talented devs do well, and those without the resources or inclination will largely ignore. The quality of lighting in general, and ray tracing specifically, will continue to vary a whole lot from game to game. Beyond that, I think we'll just have to wait and see how well each approach works, and how much effort developers end up putting into it.

 

All of which is to say, these technical details do little to change my mind as to where ray tracing was going to be this gen. Maybe it works out a little better than I expected and we see a bit more of it, but this will be the generation of ray traced games in the same way that this was the generation of 4k games. It started happening, some games did better than others, but for the most part it was incomplete.


1 hour ago, TwinIon said:

The unavoidable truth about all this, regardless of DXR cores or RT cores or anything else, is that this will be a transitional period for ray tracing. The exciting thing about the Minecraft demo is that all light was ray traced. Build a game, put in light sources, automatically get awesome lighting. No need to build a whole new layer of "artificial" light maps and whatever else unless you really want to for whatever reason. That's a very exciting future, and it basically could mean better looking games for a lot less effort. However, it seems we're far from that point.

 

Ray tracing this generation will be something that talented devs do well, and those without the resources or inclination will largely ignore. The quality of lighting in general, and ray tracing specifically, will continue to vary a whole lot from game to game. Beyond that, I think we'll just have to wait and see how well each approach works, and how much effort developers end up putting into it.

 

All of which is to say, these technical details do little to change my mind as to where ray tracing was going to be this gen. Maybe it works out a little better than I expected and we see a bit more of it, but this will be the generation of ray traced games in the same way that this was the generation of 4k games. It started happening, some games did better than others, but for the most part it was incomplete.

 

 

There could be a weird dynamic in which indie games that are otherwise technically unimpressive due to budget restrictions end up having outstanding lighting, because they can computationally afford to go fully (or at least more) ray traced :p 


On 3/17/2020 at 7:00 AM, number305 said:

Well... Go measure:

 it measures 301mm tall and 151 mm in depth and width

So I have 17” wide and around 6” tall. 
 

I would have to lay it on its side, and then there would only be a few inches of clearance on the top for the cooling fan.

Not really a great design for entertainment centers.


3 minutes ago, atom631 said:

So I have 17” wide and around 6” tall. 
 

I would have to lay it on its side, and then there would only be a few inches of clearance on the top for the cooling fan.

Not really a great design for entertainment centers.

If you lay it on its side, the cooling fan is on the left.

 


5 minutes ago, TomCat said:

If you lay it on its side, the cooling fan is on the left.

 

Right. So if I put it all the way to the right, I’ll have around 5-6” of space from the fan to the side of my unit. But then it just hits a wall. Really not a good idea, as that hot air will just sit there. I literally may have to lay it front to back so that maybe the fan is facing out into my living room.

I dunno. I’ll figure it out. Life... finds a way.


@AbsolutSurgen  so you just ignore the part where the 2080 Ti can process 10 billion rays a second and the X can do 380 billion a sec?

 

All told, NVIDIA claims that the fastest Turing parts, based on the TU102 GPU, can handle upwards of 10 billion ray intersections per second (10 GigaRays/second), ten-times what Pascal can do if it follows the same process using its shaders.

 

 In short, in the same way that light 'bounces' in the real world, the hardware acceleration for ray tracing maps traversal and intersection of light at a rate of up to 380 billion intersections per second.


1 hour ago, TomCat said:

@AbsolutSurgen  so you just ignore the part where the 2080 Ti can process 10 billion rays a second and the X can do 380 billion a sec?

 

All told, NVIDIA claims that the fastest Turing parts, based on the TU102 GPU, can handle upwards of 10 billion ray intersections per second (10 GigaRays/second), ten-times what Pascal can do if it follows the same process using its shaders.

 

 In short, in the same way that light 'bounces' in the real world, the hardware acceleration for ray tracing maps traversal and intersection of light at a rate of up to 380 billion intersections per second.

How do "intersections" and "rays" relate to each other in terms of GPU power?  I'll be honest, I don't understand the math/process well enough to translate between the two of them.
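A hedged back-of-envelope on that: a "ray" in Nvidia's GigaRays figure is one full traversal of the scene's bounding-volume hierarchy (BVH), and each traversal performs many individual box/triangle intersection tests; the 380 billion figure appears to count those tests. Assuming a traversal costs on the order of tens of tests per ray (my assumption, not a published figure), the two numbers land in the same ballpark:

```python
# Illustrative conversion from intersection tests/sec to rays/sec.
# The tests-per-ray cost varies with scene and BVH quality, so these
# are ballpark figures, not real benchmarks.

def rays_per_sec(tests_per_sec: float, tests_per_ray: float) -> float:
    return tests_per_sec / tests_per_ray

XSX_TESTS_PER_SEC = 380e9  # Microsoft's quoted peak

for tests_per_ray in (20, 40, 80):
    grays = rays_per_sec(XSX_TESTS_PER_SEC, tests_per_ray) / 1e9
    print(f"{tests_per_ray} tests/ray -> ~{grays:.1f} GRays/s")
```

At roughly 40 tests per ray, 380 billion tests/sec works out to about the 2080 Ti's quoted 10 GRays/s, which is why comparing 380 against 10 directly isn't apples to apples.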


I finally got time to watch some of the XSX videos from the other day, and the backwards compatibility stuff got me pretty excited. I have such a backlog of Xbox One games that I can't wait to be able to revisit on the XSX. With promises of pushing many of those older titles to native 4K and adding some kind of machine-learning HDR to games that didn't have it originally, it all sounds like MS is really looking at every aspect of their ecosystem.

 

A lot of the other stuff I don't truly understand, but it sounds like a very well-thought-out and well-made machine.

 

The proprietary SSD is kind of a bummer, but until we see prices I can't really judge. I really do like the idea of having something easily accessible and easily interchangeable, over having to potentially open the box and swap out internal SSDs. I hope prices are reasonable and we can get SSDs larger than 1TB relatively quickly.


I now want each company to do what the other one just did. I want the lead Xbox architect to give an in-depth talk on the nitty-gritty details of their system and the reasoning behind the decisions. Digital Foundry quickly ran through its hardware 3D audio, data I/O rework, and texture-streaming optimization, but I want more details.

I also want Sony to give us the teardown of the system so we can see how it looks and how they put it together.


I thought the DF article was about as detailed on the XSX as the presentation Sony gave on the PS5.  Sony just added a lot of history and context for people to follow along, particularly with the audio, but MS gave us pretty detailed specs and (a short version as to) why they did it that way.

 

My take-away differences:

 

CPU/GPU performance:

-Microsoft targets consistent clock speeds and built the console and heatsink to handle the worst-case scenario, as the power requirements will fluctuate.

-Sony targets consistent power delivery by offering adjustable clock speeds. So the system (or devs, presumably) can allocate more clockspeed/power to the GPU if the CPU doesn't need it, and vice versa. They should be able to build a smaller form factor since they know what the worst-case power scenario will be, and the PS5 SoC should use less power anyway, allowing them to use a smaller heatsink/fan.
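Those two strategies can be caricatured in a few lines (the shifting logic below is purely illustrative, not how either console actually schedules power; the 1.825 GHz is the Series X's quoted GPU clock):

```python
# Caricature of the two power strategies. Microsoft: clocks are fixed
# and power draw varies with load. Sony: the power budget is fixed and
# budget the CPU doesn't need can be shifted to the GPU.

def xbox_style(load: float, fixed_clock_ghz: float = 1.825) -> float:
    """Clock never moves; only the power drawn varies with load."""
    return fixed_clock_ghz

def ps5_style(cpu_demand: float, total_budget: float = 1.0):
    """Split a fixed budget: whatever the CPU doesn't claim goes to the GPU."""
    cpu_share = min(max(cpu_demand, 0.0), total_budget)
    return cpu_share, total_budget - cpu_share

print(xbox_style(0.2), xbox_style(1.0))  # same clock regardless of load
print(ps5_style(0.3))                    # light CPU load frees GPU budget
```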

 

GPU features:

-Both seem to use a form of mesh shading

-Both seem to have the same Ray Tracing acceleration built into the shaders of the GPU.

-Microsoft mentioned Variable Rate Shading while Sony didn't, but I don't see why the PS5 wouldn't be able to do it. I think this is just an assumed feature, much like 120 Hz and Variable Refresh Rate support, neither of which Cerny talked about.

 

Overall the Xbox will be slightly more powerful than the PS5 at its max potential, but this is really not going to be a difference you'll notice. It's a smaller difference than between the Xbox One S and PS4 Slim, as well as between the PS4 Pro and X1X. My guess is the biggest noticeable difference that could emerge will be the Xbox's ability to dedicate more hardware horsepower to ray-tracing effects.

 

Storage:

The PS5's SSD will be up to twice as fast as the XSX's. While they go about redesigning storage access in slightly different ways, both companies are effectively doing the same thing: hardware-accelerated decompression to keep the CPU from being overwhelmed. In Sony's description they talked more about the custom I/O hardware they created to achieve this, while Microsoft talked more about the custom I/O software they needed to create, but in either case you need both hardware and software working together, so the end goal is the same: both companies want the SSD to act as extended memory in order to free up actual system memory for data that is actively being used. In current-gen games, most of the data sitting in memory isn't actually needed; it's just there as a buffer to keep frametimes from spiking in case the game needs to fetch data from the currently slow-as-shit drives.

 

Memory:

The differences are basically a wash. Microsoft does have overall faster memory, but it also needs it in order to keep its faster CPU and GPU fed properly. The amount is the same for both consoles, but again, each company is trying to free up memory by utilizing the SSDs. Sony's SSD is faster, so it can brute-force this better. But Microsoft did mention one technique to free up even more memory that Sony didn't: Sampler Feedback Streaming. It works concurrently with the SSD by having a less memory-intensive, lower-quality texture loaded into memory and used in the final rendered scene while the system waits for the SSD to load the higher-quality version. Maybe Sony doesn't need this method since its SSD is fast enough to... not need it? The end result is that there will be a bigger generational leap in the effective RAM games can utilize than the paper specs show us, the same way the real performance difference will be bigger than the TFLOP increase shows, because one RDNA 2 Compute Unit is more efficient than a GCN Compute Unit.
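A minimal sketch of the Sampler Feedback Streaming idea described above (class and method names are hypothetical, not the actual DirectX API): render with whatever mip level is already resident, record which finer mips the frame actually wanted, then stream only those in.

```python
# Hypothetical sketch of mip-based texture streaming: keep a cheap
# low-detail mip resident, let sampler feedback record which finer
# mips the last frame actually touched, and stream only those in.

class StreamedTexture:
    def __init__(self, name: str, coarsest_mip: int = 4):
        self.name = name
        self.resident_mip = coarsest_mip  # higher number = lower detail
        self.pending = set()

    def sample(self, desired_mip: int) -> int:
        # Record unmet demand (the "sampler feedback" part)...
        if desired_mip < self.resident_mip:
            self.pending.add(desired_mip)
        # ...but render now with what is already in memory.
        return max(desired_mip, self.resident_mip)

    def stream_step(self):
        # Between frames, load the best requested mip from the SSD.
        if self.pending:
            self.resident_mip = min(self.pending)
            self.pending.clear()

tex = StreamedTexture("rock_albedo")
print(tex.sample(0))  # frame 1 renders with the coarse mip (4)
tex.stream_step()     # SSD delivers the requested finer mip
print(tex.sample(0))  # frame 2 renders at full detail (0)
```

The memory win is that the coarse mip is a tiny fraction of the full texture, so most of a scene's textures can sit on the SSD instead of in RAM.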

 

Audio and VR:

Clear advantage to Sony, as the only mention of audio from Microsoft was that it would run on the generic CPU cores, while Sony has a dedicated chip for processing MOAR sounds than you've ever heard in your life.

 

Not mentioned in the DF article I linked, but I stand corrected: both consoles will have dedicated audio processors. Microsoft didn't provide any details beyond that, so who knows which one will be better.

 

Backwards Compat:

-Microsoft wins here by guaranteeing multiple generations back, but despite some of the memes I'm seeing, Sony will at least support PS4 compatibility. However, they will have to verify games, much like how the Pro couldn't run every game in boost mode when it launched.

