Alternative to DLSS for PS5 was the subject of a recent patent



While I can't tell you if or how Sony would do this on PS5, there are some things to bear in mind.

 

1. While specialized hardware like Nvidia's tensor cores is awesome, GPUs were used to accelerate training and "inference" of ML models long before Nvidia made special hardware for it, simply because they're much better than CPUs at matrix multiplies and number crunching (there's a rough sketch of this after point 2). Nvidia ultimately made specialized hardware because the deep learning market for GPUs already existed.

 

2. The reason Nvidia dominates the market for deep learning work is mostly that they have really great APIs and drivers for it, with their recent hardware specialization as an added bonus. That is, they wrote useful software that makes the work easier and does a lot of the nasty low-level optimization for you. But the thing about software is that anyone can write it.
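
To make point 1 concrete, here's a minimal sketch (assuming PyTorch, which isn't mentioned anywhere in the thread) of a toy two-layer network running as plain fp32 matrix multiplies on whatever GPU is available. Nothing here touches tensor cores; it's just the kind of number crunching GPUs have always been good at.

import torch

# Run on the GPU if one is available, otherwise fall back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Toy "model": two fully connected layers with made-up sizes.
x  = torch.randn(1, 512, device=device)      # input feature vector
w1 = torch.randn(512, 1024, device=device)   # layer 1 weights
w2 = torch.randn(1024, 10, device=device)    # layer 2 weights

hidden = torch.relu(x @ w1)   # matrix multiply + activation
logits = hidden @ w2          # another matrix multiply
print(logits.shape)           # torch.Size([1, 10])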


DF made it clear that we shouldn't expect the next-gen consoles to come close to even an RTX 2060's ML capabilities.


 

So while I'm sure a DLSS-like solution could be developed, it might not be the most efficient way forward for consoles. Maybe we'll see a reconstruction technique that also incorporates ML for adaptive sharpening.

I expect some kind of approximation of DLSS 2.0 from the next-gen consoles eventually, without getting into 'better than native 4K' or '540p upscaling > 1080p' territory.   The next mid-cycle systems could be perfectly capable though.
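
To make the adaptive-sharpening idea above a bit more concrete, here's a hedged Python/NumPy sketch: "adaptive sharpening" as a plain unsharp mask whose per-pixel strength would come from some small ML model. The strength map is stubbed out with a constant here, and the predictor itself is entirely hypothetical; nothing in the patent is being reproduced.

import numpy as np

def box_blur(img, radius=1):
    # Cheap separable box blur used as the low-pass for unsharp masking.
    out = img.copy()
    for axis in (0, 1):
        acc = np.zeros_like(out)
        for offset in range(-radius, radius + 1):
            acc += np.roll(out, offset, axis=axis)
        out = acc / (2 * radius + 1)
    return out

def adaptive_sharpen(img, strength_map):
    # Unsharp mask: add back the high-frequency detail, scaled per pixel.
    detail = img - box_blur(img)
    return np.clip(img + strength_map * detail, 0.0, 1.0)

# In a real pipeline the strength map would be predicted per pixel by a tiny
# network looking at local contrast/edges (hypothetical); here it's constant.
frame = np.random.rand(1080, 1920).astype(np.float32)   # placeholder luma frame
strength = np.full_like(frame, 0.5)                      # stand-in for ML output
sharpened = adaptive_sharpen(frame, strength)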


Agreed, I see a DLSS alternative coming only with a mid-gen refresh. That won't stop them from putting it on the box if they can even get it to work at all! :p

1 hour ago, Rick2101 said:

Could this be for the mid-gen refresh? PS5.5?

 

Hmm... it's possible. A PS5 Pro could, theoretically, deliver high-framerate 4K, or possibly 8K at normal framerates, through whatever DLSS-like method this may be.


Not sure why this would have to wait for a mid-cycle hardware refresh.

 

If this intelligent upscaling (or whatever you want to call it) can produce better results at a lower computational cost than either running natively at high resolution or existing techniques like checkerboarding, it could be implemented on existing hardware.

 

As @legend points out, GPUs are already pretty good at AI/ML workloads, so nothing I can see suggests that special hardware is required for this kind of task.
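
To put the "lower computational cost" argument in rough numbers, here's a back-of-the-envelope sketch. Every figure in it is an assumption for illustration (the internal resolution, the relative upscaler cost), not anything from the patent or from DF.

# Illustrative cost comparison; all constants are assumptions, not measurements.
NATIVE_4K_PIXELS   = 3840 * 2160
INTERNAL_PIXELS    = 2560 * 1440   # assumed internal render resolution
RENDER_COST_PER_PX = 1.0           # normalised shading cost per rendered pixel
UPSCALE_OPS_PER_PX = 0.2           # assumed reconstruction cost per output pixel,
                                   # relative to shading one pixel natively

native_cost  = NATIVE_4K_PIXELS * RENDER_COST_PER_PX
upscale_cost = (INTERNAL_PIXELS * RENDER_COST_PER_PX
                + NATIVE_4K_PIXELS * UPSCALE_OPS_PER_PX)

print(f"native 4K:              {native_cost:,.0f} units")
print(f"1440p + reconstruction: {upscale_cost:,.0f} units")
print(f"ratio: {upscale_cost / native_cost:.2f}x of native")

If the assumed per-pixel reconstruction cost stays well below the cost of shading that pixel natively, the approach comes out ahead on existing hardware; if not, it doesn't, which is exactly the trade-off described above.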

