
Xbox Series X | S OT - Power Your Dreams, update: FTC case unredacted documents leaked, including XSX mid-generation refresh, new gyro/haptic-enabled controller, and next-generation plans for 2028



Don't know why y'all are so full of doubt.  AMD has said that their RDNA2 GPU is going to be DISRUPTIVE to the 4K market.  The GPU in the X is based on this tech. MS helped AMD develop their tech. It will perform better than ray tracing on the 2080.  People didn't believe AMD would do what they are doing to Intel at the current moment. They are about to do the same thing to Nvidia. If they can get their drivers in order they will be golden.

 


17 minutes ago, TomCat said:

Don't know why y'all are so full of doubt.  AMD has said that their RDNA2 GPU is going to be DISRUPTIVE to the 4K market.  The GPU in the X is based on this tech. MS helped AMD develop their tech. It will perform better than ray tracing on the 2080.  People didn't believe AMD would do what they are doing to Intel at the current moment. They are about to do the same thing to Nvidia. If they can get their drivers in order they will be golden.

 

Why do people doubt AMD's ability to compete vs. Nvidia? I can't remember the last time they actually had a GPU that even competed for the high end.

[chart: AMD vs. Nvidia long-term GPU market share]


On 2/24/2020 at 9:32 PM, Dodger said:


You’re gonna need a beast of a PC to match this thing when it launches.

 I already have a PC that likely beats this. Once the 3080ti comes around it will crush it.  With that said, this is very exciting for a $500 machine. 


53 minutes ago, TomCat said:

Don't know why y'all are so full of doubt.  AMD has said that their RDNA2 GPU is going to be DISRUPTIVE to the 4K market.  The GPU in the X is based on this tech. MS helped AMD develop their tech. It will perform better than ray tracing on the 2080.  People didn't believe AMD would do what they are doing to Intel at the current moment. They are about to do the same thing to Nvidia. If they can get their drivers in order they will be golden.

 


We’ve seen absolutely none of the pudding to substantiate this claim.  Not only that, but you’re literally the only one claiming it.

 

The only sensible takeaway at the moment is that AMD is batting from behind, but keeping info close to their chest.  For what you say to be true, they must have developed an insanely cost-efficient way of doing RT that would have the bonus effect of potentially usurping Nvidia in the PC GPU market.

 

I don’t have that confidence.  Anything is possible, I suppose.


Also, what's making the RT cores affordable for MS is the 7nm+ manufacturing.

 

@AbsolutSurgen I can pull up a graph just like that for AMD's performance against Intel. It doesn't have anything to do with the current situation.  The same will happen when Big Navi is released.  Having RDNA2 in the console is big. It performs 40% more efficiently than RDNA1.


22 minutes ago, TomCat said:

Also, what's making the RT cores affordable for MS is the 7nm+ manufacturing.

 

@AbsolutSurgen I can pull up a graph just like that for AMD's performance against Intel. It doesn't have anything to do with the current situation.  The same will happen when Big Navi is released.  Having RDNA2 in the console is big. It performs 40% more efficiently than RDNA1.

AMD has continued to make big promises about their GPUs, and consistently failed to deliver.  I would like nothing better than for them to bring something to market that is competitive with Nvidia, as it might actually bring down GPU prices.  I'm not holding my breath.


Going by the varying performance of DXR-enabled titles on PC, there is little reason to believe Microsoft has universally solved the ray tracing optimization problem.  The best examples today are just as much a testament to each dev team's own advancements.

 

In all likelihood that will also be the reality with next-gen consoles (with Vulkan too).  It sure won't stop marketing from doubling down on ray tracing though, even if at the start of the gen it's not commonplace.


12 hours ago, Rbk_3 said:

 I already have a PC that likely beats this. Once the 3080ti comes around it will crush it.  With that said, this is very exciting for a $500 machine. 

Yeah, that $500 number is what wins the argument.  A 3080ti will probably cost about $1200 or more, and it still needs a computer around it.


Quote

Microsoft used a similar feature on the Xbox One to resume games, but the Xbox Series X will resume multiple games from a suspended state whether you’re rebooting the console, switching games, or resuming from standby.
 

“I had to reboot because I had a system update, and then I went back to the game and went right back to it,” reveals Hryb in the podcast. “So it survives a reboot.” That will be useful for any dashboard updates that would usually interrupt any progress in a game, and it sets the stage for encouraging player habits of simply switching off a console and not worrying about save points.

Source


On 2/25/2020 at 9:36 PM, crispy4000 said:

 

“Affordable”  ... we shall see.  We technically don’t even know their (AMD’s) approach to hardware accelerated RT.  Again, MS is beholden to that.

They're not putting in some weak-ass RT cores; they are putting in top of the line. Everything about this machine is top of the line. They have enough overhead to also offer hardware sound ray tracing with spatial audio.  Then they have the luxury of using the bad chips for Lockhart, which has much lower specs.


38 minutes ago, TomCat said:

They're not putting in some weak-ass RT cores; they are putting in top of the line. Everything about this machine is top of the line. They have enough overhead to also offer hardware sound ray tracing with spatial audio.  Then they have the luxury of using the bad chips for Lockhart, which has much lower specs.

 

On 2/25/2020 at 9:43 PM, AbsolutSurgen said:

We don't even know if they have dedicated RT cores.

 


25 minutes ago, TomCat said:

You're just a doubting Nancy.  You doubted the better-than-2080 performance.  Boom, it's even better than I thought.


You’re mistaking a lack of knowledge for doubt.  We have no knowledge of AMD developing their version of RT cores.  We don’t know if they exist.  Or if they do, what makes them different from Nvidia’s.  Meanwhile you’re already here claiming they’re as good or better (“top of the line”).

 

My doubts about whether the next-gen consoles would match a 2080 have always been in reference to ray tracing performance.  I’d love to be wrong.  Consoles and PCs alike would see huge benefit from an AMD resurgence in competitive, capable, forward-thinking and cheap GPUs that could strip team green from pole position.  Which is what 2080 ray tracing performance in a $500 console would signal.

 

But with what we know now, there’s no reason to drink your kool-aid.


45 minutes ago, crispy4000 said:


You’re mistaking a lack of knowledge for doubt.  We have no knowledge of AMD developing their version of RT cores.  We don’t know if they exist.  Or if they do, what makes them different from Nvidia’s.  Meanwhile you’re already here claiming they’re as good or better (“top of the line”).

 

My doubts about whether the next-gen consoles would match a 2080 have always been in reference to ray tracing performance.  I’d love to be wrong.  Consoles and PCs alike would see huge benefit from an AMD resurgence in competitive, capable, forward-thinking and cheap GPUs that could strip team green from pole position.  Which is what 2080 ray tracing performance in a $500 console would signal.

 

But with what we know now, there’s no reason to drink your kool-aid.

I like to speculate with the knowledge I have of the situation.


@TomCat I am going to speculate on the knowledge that I have:

1)  The rumoured SeX APU die size (~405 mm^2) is not big enough to have much in the way of ray tracing cores (and certainly not enough to have an Nvidia-style implementation).

So based on the current Navi/Zen 2 architecture, die size would be (all in mm^2):

5700XT (10 TF): 251
Add 20% shaders to get to 12 TF: 50  [251 * 20%]
Zen 2 chiplet & memory module: 125  (one 8-core chiplet @ 75 mm^2 plus a memory module @ 50 mm^2)
Total die (no RT cores): 426
2080 RT/Tensor @ 7nm (a): 160
Total die w/ RT cores: 586

So based on current Navi/Zen 2 die sizes (with no additional cores for RT), my [very] rough math gets to a die size of ~426 mm^2 -- which is roughly in line with the die size in the current rumours.  My guess is the current RTX 2080 has about HALF of its die devoted to RT/Tensor cores (and this is roughly in line with the difference in transistor counts on a 1080 vs. a 2080).

2)  Microsoft has been very silent on how ray tracing is being handled on SeX.  If they were doing something dramatic with ray tracing, I would have thought we would have seen some specs by now.

(a) I assumed an RTX 2080 (12nm fab/2944 shaders) was essentially a GTX 1080 (16nm fab/2560 shaders) with 15% more shaders and added RT/Tensor cores, all scaled to 7nm.

GTX 1080: 314
GTX 1080 with 15% more shaders: 361  [314 * 1.15]
GTX 1080 +15% @ 7nm: 158  [361 * 7/16]

RTX 2080: 545
RTX 2080 @ 7nm: 318  [545 * 7/12]

Die size of Nvidia RT/Tensor cores = 318 - 158 = 160

I recognize my math is very "high level", but I think it is good enough for illustrative purposes.
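For anyone who wants to poke at the numbers, the back-of-the-envelope arithmetic above can be sketched in a few lines of Python. All the area figures and the linear nm scaling are the post's own rough assumptions, not official numbers (real die area scales closer to the square of feature size):

```python
# Rough die-size estimate for the rumoured Series X APU (all areas in mm^2),
# following the post's assumptions, including its crude linear nm scaling.

def scale_area(area_mm2: float, from_nm: float, to_nm: float) -> float:
    """Scale a die area linearly by process node, as the post does."""
    return area_mm2 * to_nm / from_nm

# Navi/Zen 2 building blocks (poster's figures)
navi_10tf = 251.0                     # 5700XT die
extra_shaders = navi_10tf * 0.20      # +20% shaders for ~12 TF
cpu_and_memory = 125.0                # Zen 2 chiplet (75) + memory module (50)
total_no_rt = navi_10tf + extra_shaders + cpu_and_memory   # ~426

# Estimate Nvidia's RT/Tensor area from 1080 vs. 2080 die sizes
gtx1080 = 314.0
gtx1080_plus15 = gtx1080 * 1.15                      # ~361, shader-count parity
gtx1080_at_7nm = scale_area(gtx1080_plus15, 16, 7)   # ~158
rtx2080 = 545.0
rtx2080_at_7nm = scale_area(rtx2080, 12, 7)          # ~318
rt_tensor_area = rtx2080_at_7nm - gtx1080_at_7nm     # ~160

total_with_rt = total_no_rt + rt_tensor_area         # ~586

print(round(total_no_rt), round(rt_tensor_area), round(total_with_rt))
```

Running it reproduces the 426 / 160 / 586 figures from the post.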


4 hours ago, AbsolutSurgen said:

 

1)  The rumoured SeX APU die size (~405 mm^2) is not big enough to have much in the way of ray tracing cores (and certainly not enough to have an Nvidia-style implementation).

Digital Foundry have speculated the same.

 

Not expecting either console to have dedicated, Nvidia-style tensor cores for BVH and denoising operations, but I do think the hardware will be tuned for it in other ways, at least to a lesser degree.

 

More speculation here, but I suspect ray tracing on consoles will be pared back by way of:

  • Fewer cast rays
  • Lower-quality denoising
  • Lots more temporal smoothing
  • Exclusion of many or all non-static objects
  • Reduced ray tracing distances, missing objects in reflections, lower raytraced LODs
  • Less ornate geometry that would be expensive for ray tracing to accurately represent
  • Less use of "full" ray tracing (shadows, AO, global illumination, and reflections)
  • Continued reliance on screen-space techniques, cube maps, voxel-based methods, and/or light probes/baking to make up the difference

7 hours ago, AbsolutSurgen said:

@TomCat I am going to speculate on the knowledge that I have:

1)  The rumoured SeX APU die size (~405 mm^2) is not big enough to have much in the way of ray tracing cores (and certainly not enough to have an Nvidia-style implementation).

So based on the current Navi/Zen 2 architecture, die size would be (all in mm^2):

5700XT (10 TF): 251
Add 20% shaders to get to 12 TF: 50  [251 * 20%]
Zen 2 chiplet & memory module: 125  (one 8-core chiplet @ 75 mm^2 plus a memory module @ 50 mm^2)
Total die (no RT cores): 426
2080 RT/Tensor @ 7nm (a): 160
Total die w/ RT cores: 586

So based on current Navi/Zen 2 die sizes (with no additional cores for RT), my [very] rough math gets to a die size of ~426 mm^2 -- which is roughly in line with the die size in the current rumours.  My guess is the current RTX 2080 has about HALF of its die devoted to RT/Tensor cores (and this is roughly in line with the difference in transistor counts on a 1080 vs. a 2080).

2)  Microsoft has been very silent on how ray tracing is being handled on SeX.  If they were doing something dramatic with ray tracing, I would have thought we would have seen some specs by now.

(a) I assumed an RTX 2080 (12nm fab/2944 shaders) was essentially a GTX 1080 (16nm fab/2560 shaders) with 15% more shaders and added RT/Tensor cores, all scaled to 7nm.

GTX 1080: 314
GTX 1080 with 15% more shaders: 361  [314 * 1.15]
GTX 1080 +15% @ 7nm: 158  [361 * 7/16]

RTX 2080: 545
RTX 2080 @ 7nm: 318  [545 * 7/12]

Die size of Nvidia RT/Tensor cores = 318 - 158 = 160

I recognize my math is very "high level", but I think it is good enough for illustrative purposes.

Nice write-up, but you might need to redo your math. The X will be manufactured on a 7nm+ node, not the 7nm that you did your lesson on.  That means they will be able to pack another 20% more transistors into the APU with the density allowed by the node drop.  Why do I believe they will be using 7nm+? Because the X will be using RDNA2, which is a 7nm+ feature; that's how they get the 40% bump in efficiency over RDNA1, by shrinking the node.  Now, I don't know what type of ray tracing will be in the X, but what I do know is that MS helped AMD come up with their hardware solution.  MS wrote the API for ray tracing. Who's to say MS didn't like the way Nvidia did ray tracing and helped AMD develop a better way?


1 hour ago, TomCat said:

Nice write-up, but you might need to redo your math. The X will be manufactured on a 7nm+ node, not the 7nm that you did your lesson on.  That means they will be able to pack another 20% more transistors into the APU with the density allowed by the node drop.  Why do I believe they will be using 7nm+? Because the X will be using RDNA2, which is a 7nm+ feature; that's how they get the 40% bump in efficiency over RDNA1, by shrinking the node.  Now, I don't know what type of ray tracing will be in the X, but what I do know is that MS helped AMD come up with their hardware solution.  MS wrote the API for ray tracing. Who's to say MS didn't like the way Nvidia did ray tracing and helped AMD develop a better way?

It's speculation, not a "lesson";  I am not a chip expert, merely someone guessing at what MS is doing.  I used 7nm, because all of the rumours I have read specifically say 7nm (and Zen 2 is mostly 7nm).  If you have a link to a rumour saying it is on the 7nm+ node, I would love to read it.

MS is not an expert on chip/GPU design.  They know software.


15 minutes ago, AbsolutSurgen said:

MS is not an expert on chip/GPU design.  They know software.

All major software vendors like MS employ top tier chip experts. A lot of chip design is driven by customer needs, so having people in house to work with people at the chip makers is just standard business practice.


5 hours ago, crispy4000 said:

 


 Looking like it’s quite plausible for a modern 4TF-ish GPU to keep pace with the XBoneX’s 6TF.  Maybe even double the FPS if the goal of a potential Lockhart is 1080p-1440p. 

 

The question remains, though: would a GPU close to the XBoneX’s output be enough to keep pace with the next 6+ years of advancements in shaders/lighting/FX, GPU-assisted simulation and other compute tasks, and rising asset counts and costs?

 

I suppose that’s why the developers Jason Schreier spoke to had reservations.


20 hours ago, AbsolutSurgen said:

It's speculation, not a "lesson";  I am not a chip expert, merely someone guessing at what MS is doing.  I used 7nm, because all of the rumours I have read specifically say 7nm (and Zen 2 is mostly 7nm).  If you have a link to a rumour saying it is on the 7nm+ node, I would love to read it.

MS is not an expert on chip/GPU design.  They know software.

I came to that decision on my own based off of RDNA2, but I did a quick search for a link and found this:

https://www.pcgamesn.com/amd/xbox-series-x-gpu-rdna-2

They are basically saying the same thing I've been saying, except they are reporting that they don't know the improvements from RDNA1 to RDNA2.  But they are saying that the X will be 7nm+ because of RDNA2.


36 minutes ago, TomCat said:

I came to that decision on my own based off of RDNA2, but I did a quick search for a link and found this:

https://www.pcgamesn.com/amd/xbox-series-x-gpu-rdna-2

They are basically saying the same thing I've been saying, except they are reporting that they don't know the improvements from RDNA1 to RDNA2.  But they are saying that the X will be 7nm+ because of RDNA2.

If I re-run the numbers assuming 7nm+ (using transistor density in lieu of nm), I get 361 mm^2 before RT, and 471 mm^2 with 2080-style RT/Tensor cores.  It doesn't change the overall point.

I can also link to tons of other articles that believe it is on 7nm.  I think that when MS announced the processor was Zen 2 (7nm) in lieu of Zen 3 (7nm+), many people assumed the APU would not move to the newer node.  The truth is, no one knows at this point.
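For what it's worth, the pre-RT figure above can be reproduced with a simple density scaling. The ~18% N7-to-N7+ density gain below is my assumption based on public TSMC claims, not a confirmed figure, and the RT-inclusive number depends on further assumptions about how the RT block scales:

```python
# Re-run the earlier pre-RT die estimate assuming TSMC's "7nm+" (N7+) node.
# The 1.18x density factor is an assumption, not an official spec.

DENSITY_GAIN_N7_PLUS = 1.18

total_no_rt_7nm = 426.0  # earlier Navi/Zen 2 estimate at 7nm, in mm^2
total_no_rt_n7p = total_no_rt_7nm / DENSITY_GAIN_N7_PLUS

print(round(total_no_rt_n7p))  # ~361, matching the figure quoted above
```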

 

I was looking at 5700XT benchmarks vs. the 2070 Super this morning, and was surprised that it was averaging ~10% lower frames at 4K despite having a ~10% TF advantage.  AMD is going to need all the efficiency gains it can muster for RDNA2.
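Taking those rough benchmark figures at face value, the implied perf-per-teraflop gap works out as follows (the two 10% numbers are approximations from the post, not measured data):

```python
# Rough perf-per-TF comparison implied by the benchmark observation:
# the 5700XT has ~10% more TF than a 2070 Super yet averages ~10% fewer
# frames at 4K. Normalize frames by teraflops to see the efficiency gap.

xt_tf, super_tf = 1.10, 1.00       # relative teraflops (~10% gap, assumed)
xt_fps, super_fps = 0.90, 1.00     # relative 4K frame rates (~10% gap, assumed)

xt_perf_per_tf = xt_fps / xt_tf            # ~0.82
super_perf_per_tf = super_fps / super_tf   # 1.00
advantage = super_perf_per_tf / xt_perf_per_tf - 1

print(round(advantage * 100))  # ~22% better performance per TF for Nvidia
```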


On 3/1/2020 at 3:06 PM, TomCat said:

I came to that decision on my own based off of RDNA2, but I did a quick search for a link and found this:

https://www.pcgamesn.com/amd/xbox-series-x-gpu-rdna-2

They are basically saying the same thing I've been saying, except they are reporting that they don't know the improvements from RDNA1 to RDNA2.  But they are saying that the X will be 7nm+ because of RDNA2.

Inconsistent messaging...

[image: AMD Navi 2x roadmap slide]


Microsoft to detail Xbox Series X and xCloud in new live stream next week

Quote

This year’s annual Game Developers Conference (GDC) might have been postponed amid the novel coronavirus outbreak, but that’s not stopping Microsoft from streaming what it had planned for the show. The Xbox maker has revealed it’s planning to live stream a number of developer-focused keynotes, including one specifically about the Xbox Series X and Project xCloud on March 18th.

….

We’ll also hear more about DirectX ray tracing next week, too. A separate talk on “what’s new in DirectX” is scheduled for 1:20PM PT / 4:20PM ET on March 18th. Microsoft will discuss ray tracing, mesh shading techniques, and more.

 

 


Eurogamer/Digital Foundry: Inside Xbox Series X: the full specs

Way too long to quote here.  Some of the Main specs:

1) 360mm^2 APU, built on "enhanced" 7nm (not 7nm+)

2) 8-core Zen 2 CPU @ 3.8GHz (8 threads) or @ 3.6GHz (16 threads)

3) 12.155 teraflops of GPU compute, no tensor/RT cores

4) 16GB of GDDR6 (with a weird split-bandwidth solution)

5) 1TB of NVMe storage, running at 2.4GB/s (a current M.2 Samsung drive runs at ~3.6GB/s; PCIe 4.0 drives will run @ 6.5GB/s)

Spec | Xbox Series X | Xbox One X | Xbox One S
CPU | 8x Zen 2 cores at 3.8GHz (3.6GHz with SMT) | 8x custom Jaguar cores at 2.13GHz | 8x custom Jaguar cores at 1.75GHz
GPU | 12 TFLOPs, 52 CUs at 1.825GHz, custom RDNA 2 | 6 TFLOPs, 40 CUs at 1.172GHz, custom GCN + Polaris features | 1.4 TFLOPs, 12 CUs at 914MHz, custom GCN
Die size | 360.45mm^2 | 366.94mm^2 | 227.1mm^2
Process | TSMC 7nm enhanced | TSMC 16nmFF+ | TSMC 16nmFF
Memory | 16GB GDDR6 | 12GB GDDR5 | 8GB DDR3, 32MB ESRAM
Memory bandwidth | 10GB at 560GB/s, 6GB at 336GB/s | 326GB/s | 68GB/s, ESRAM at 219GB/s
Internal storage | 1TB custom NVMe SSD | 1TB HDD | 1TB HDD
IO throughput | 2.4GB/s (raw), 4.8GB/s (compressed) | 120MB/s | 120MB/s
Expandable storage | 1TB expansion card | - | -
External storage | USB 3.2 HDD support | USB 3.2 HDD support | USB 3.2 HDD support
Optical drive | 4K UHD Blu-ray drive | 4K UHD Blu-ray drive | 4K UHD Blu-ray drive
Performance target | 4K at 60fps, up to 120fps | 4K at 30fps, up to 60fps | 1080p at 30fps, up to 60fps
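The split memory bandwidth numbers check out against 14 Gbps GDDR6 on a 320-bit bus, with the 6GB pool striped across only 192 bits (bus widths per Digital Foundry's breakdown; the per-pin rate is the standard GDDR6 figure they cite):

```python
# Sanity-check the split bandwidth figures: peak bandwidth in GB/s is
# bus width (bits) x per-pin data rate (Gb/s) / 8 bits per byte.

GBPS_PER_PIN = 14  # GDDR6 data rate, gigabits per second per pin

def bandwidth_gb_s(bus_width_bits: int, gbps_per_pin: float = GBPS_PER_PIN) -> float:
    """Peak memory bandwidth in GB/s for a given bus width."""
    return bus_width_bits * gbps_per_pin / 8

fast_pool = bandwidth_gb_s(320)   # 10GB pool, full 320-bit bus -> 560.0 GB/s
slow_pool = bandwidth_gb_s(192)   # 6GB pool, 192-bit slice     -> 336.0 GB/s
print(fast_pool, slow_pool)
```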

 

Lots of "yeah, we don't have the hardware specs" (e.g. Ray Tracing, SSD), but look what we've done in DirectX to offset that.

