
PC Community Thread


stepee


1 hour ago, atom631 said:

My Samsung Odyssey G8 monitor uses Mini HDMI and Mini DisplayPort connectors. Pretty dumb. It came with a DisplayPort cable. Is there any reason why I would want to get an HDMI cable instead?

In theory HDMI 2.1 provides more bandwidth than DisplayPort 1.4, but because the 4090's DisplayPort output supports DSC, I think either should be fine for your setup.
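
For rough numbers, here's a quick back-of-the-envelope check in Python (just a sketch; I'm assuming the OLED G8's 3440x1440 @ 175 Hz panel at 10-bit, so swap in your model's figures):

# Rough uncompressed bandwidth estimate vs. what each link can carry.
# Panel numbers below are assumptions for the Odyssey OLED G8.
h_active, v_active, refresh_hz = 3440, 1440, 175
bits_per_pixel = 30                 # 10 bits per channel, RGB
blanking_overhead = 1.10            # rough allowance for CVT-RB blanking

required_gbps = (h_active * v_active * refresh_hz
                 * bits_per_pixel * blanking_overhead) / 1e9
dp14_gbps = 25.92                   # DP 1.4 HBR3 payload after 8b/10b coding
hdmi21_gbps = 42.67                 # HDMI 2.1 FRL payload after 16b/18b coding

print(f"needs ~{required_gbps:.1f} Gbps | DP 1.4 ~{dp14_gbps} | HDMI 2.1 ~{hdmi21_gbps}")
# ~28.6 Gbps needed: slightly over DP 1.4 uncompressed, which is exactly
# where DSC comes in, so in practice either cable works.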


Should I return this 4090? Listen to this coil whine. This video is basically from where I sit. It also has a slight undervolt, as I read it's supposed to help (it doesn't). The GPU is on the opposite side of me. It only happens when the GPU is being pushed, and the higher the frame rate, the worse it gets. I tried to set a global frame cap 3 fps lower than the monitor refresh rate and it didn't make much of a difference.

 

I'm not sure if I'm being petty, but this seems pretty bad.

 

 


3 hours ago, atom631 said:

Should I return this 4090? Listen to this coil whine. This video is basically from where I sit. It also has a slight undervolt, as I read it's supposed to help (it doesn't). The GPU is on the opposite side of me. It only happens when the GPU is being pushed, and the higher the frame rate, the worse it gets. I tried to set a global frame cap 3 fps lower than the monitor refresh rate and it didn't make much of a difference.

 

I'm not sure if I'm being petty, but this seems pretty bad.

 

 

 

Couldn't really make it out in the video.

 

That being said, my last two Nvidia cards have had coil whine when run at maximum power, and it got louder as framerates got higher. It's also much more noticeable the quieter you make your system. So the fact that yours is quiet with minimal fans, and the card is basically in open air with the perforation on the side panel, makes the coil whine easier to hear than it would be in another system.

 

I have no idea which models of 4090s have good or bad coil whine, but I would expect a high-end one from Asus, MSI, or Gigabyte to probably be quieter than the Founders Edition.


5 hours ago, atom631 said:

Should I return this 4090? Listen to this coil whine. This video is basically from where I sit. It also has a slight undervolt, as I read it's supposed to help (it doesn't). The GPU is on the opposite side of me. It only happens when the GPU is being pushed, and the higher the frame rate, the worse it gets. I tried to set a global frame cap 3 fps lower than the monitor refresh rate and it didn't make much of a difference.

 

I'm not sure if I'm being petty, but this seems pretty bad.

 

 

 

I think I could hear something in that video, but it's a little hard to say; I'm not sure if the mic is picking it up properly. I definitely believe you that it's more annoying in person though; that would drive me crazy.

 

Also, am I seeing correctly that you're also (still?) rocking some Klipsch ProMedia speakers? I've still got my 4.1s from like 2003. :p 

  • Like 1

6 hours ago, atom631 said:

Should I return this 4090? Listen to this coil whine. This video is basically from where I sit. It also has a slight undervolt, as I read it's supposed to help (it doesn't). The GPU is on the opposite side of me. It only happens when the GPU is being pushed, and the higher the frame rate, the worse it gets. I tried to set a global frame cap 3 fps lower than the monitor refresh rate and it didn't make much of a difference.

 

I'm not sure if I'm being petty, but this seems pretty bad.

 

 


Coil whine is luck of the draw, unfortunately. There’s no real “this model has it less” that’s not completely anecdotal, and that goes for all brands. If it’s really bad you can try to RMA (but I wouldn’t note coil whine as being the only reason).


Just now, AbsolutSurgen said:

 

 

The suspicion is that this is going to land at the official driver level from Nvidia, right? I'll wait for that, but as someone who is over SDR and amazed at how common it still is, this is good stuff. I like Special K and only just discovered its HDR10 level, which makes it easier, but the fewer steps the better.


1 hour ago, stepee said:

 

The suspicion is that this is going to land at the official driver level from Nvidia, right? I'll wait for that, but as someone who is over SDR and amazed at how common it still is, this is good stuff. I like Special K and only just discovered its HDR10 level, which makes it easier, but the fewer steps the better.

It is in the driver, just not turned on. There is a mod that Alex uses that allows you to force it on for any game.

  • Hype 1

I've noticed a lot of devs seem to be opting for FSR1 on console over FSR2 - I think because you get less fizzle. I don't blame them; I think I'm done even trying FSR2 if there is any other option. The artifacts are just ridiculous with it. Once you know what FSR2 fizzle looks like, it's hard not to spot it a mile away. They need to go back to the drawing board on that one. FSR3 frame gen seems much better in that, by itself, it doesn't destroy the image, but it needs to be paired with a different upscaler or else you get like double fizzle.


On 2/16/2024 at 2:00 AM, Nokra said:

 

Also, am I seeing correctly that you're also (still?) rocking some Klipsch ProMedia speakers? I've still got my 4.1s from like 2003. :p 

 

You're damn right! These speakers still sound absolutely amazing. I just have the 2.1, and I will run these speakers until they give up the ghost.

 

I'm surprised no one hears the whine. It sounds like my PC is getting a haircut, lol. It's not terrible when there's lots of sound, but quiet scenes are obnoxious. I have 50+ days left to return this card. I'm gonna try to get another one and see if it's better.

  • Like 1
  • Hugs 1

WWW.XDA-DEVELOPERS.COM

I was able to create a more compact gaming PC that matched my needs
Quote

If you wish to create a gaming PC with a compact chassis to resemble a games console, an APU is the way to go. Using one such as the 8700G would allow you to use a smaller case and fewer fans to keep everything cool, which would make for a quieter gaming environment. Eight cores are present on the 8700G with the ability to boost up to 5.1 GHz. The AMD Radeon 780M has 12 cores at 2.9 GHz. All of this is on a package with a TDP of just 65W. In my testing, 1440p gaming was possible, but my main rig is connected to a 49-inch ultrawide 5K2K (5120 x 1440) panel.

 


9 hours ago, stepee said:

I've noticed a lot of devs seem to be opting for FSR1 on console over FSR2 - I think because you get less fizzle. I don't blame them; I think I'm done even trying FSR2 if there is any other option. The artifacts are just ridiculous with it. Once you know what FSR2 fizzle looks like, it's hard not to spot it a mile away. They need to go back to the drawing board on that one. FSR3 frame gen seems much better in that, by itself, it doesn't destroy the image, but it needs to be paired with a different upscaler or else you get like double fizzle.


FSR1 is less data intensive so it gets better framerates. At 4K the difference between 1 and 2 isn't as noticeable, and 1 allegedly requires less "tweaking" than 2.


So I've been able to really quiet down the coil whine by capping the global framerate setting in the Nvidia control panel to 3 FPS below my monitor refresh rate. I also enabled Vsync globally. I'm not sure though if I should set it to just "On" or "Fast". I also have G-Sync enabled. Does having both G-Sync and Vsync enabled gimp performance in any way? Also, is there any reason why I wouldn't want to cap my framerate? If the refresh rate on my monitor is 175, why would I even want a framerate higher than it's capable of? Won't that create screen tearing? Or does G-Sync solve that issue?

 

I also have an undervolt set up in MSI Afterburner, using 2750 MHz / 0.950 V. Power limit capped at 80% and memory clock set to +1300. Honestly, no idea if this is a proper undervolt setup or not. Any ideas? Any way to improve it?

 

On a side note, I also set up an undervolt on my CPU with all cores at PBO -30, and it has lowered the temps on the CPU by 4-5 degrees and my Time Spy score went up over 300 points. Pretty easy and quick performance boost.


4 minutes ago, atom631 said:

So I've been able to really quiet down the coil whine by capping the global framerate setting in the Nvidia control panel to 3 FPS below my monitor refresh rate. I also enabled Vsync globally. I'm not sure though if I should set it to just "On" or "Fast". I also have G-Sync enabled. Does having both G-Sync and Vsync enabled gimp performance in any way? Also, is there any reason why I wouldn't want to cap my framerate? If the refresh rate on my monitor is 175, why would I even want a framerate higher than it's capable of? Won't that create screen tearing? Or does G-Sync solve that issue?

 

I also have an undervolt set up in MSI Afterburner, using 2750 MHz / 0.950 V. Power limit capped at 80% and memory clock set to +1300. Honestly, no idea if this is a proper undervolt setup or not. Any ideas? Any way to improve it?

 

On a side note, I also set up an undervolt on my CPU with all cores at PBO -30, and it has lowered the temps on the CPU by 4-5 degrees and my Time Spy score went up over 300 points. Pretty easy and quick performance boost.

 

 

Framerates above your max refresh rate only matter if you do competitive online games, and even then the benefit is just some milliseconds of lower input lag. 
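
To put numbers on "some milliseconds", here's a quick sanity check:

# Frame time at various framerates; the gap between these is roughly the
# best-case render latency you give up by capping at the refresh rate.
for fps in (175, 240, 300, 500):
    print(f"{fps:>3} fps -> {1000 / fps:.2f} ms per frame")
# 175 fps -> 5.71 ms, 300 fps -> 3.33 ms: running uncapped might shave
# ~2-3 ms off the render portion of input lag, which only really matters
# in competitive shooters.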

 

G-Sync should turn off if your framerate does happen to exceed your max refresh rate. Setting a cap or forcing Vsync should keep this from happening. What works best can vary from game to game though. There's only a performance hit to having both Gsync and Vsync on at the same time if Vsync is also acting like a framerate cap.

 

The difference between Vsync "On" and "Fast" is how it's buffered. "On" should be typical triple buffering, which means that if the game isn't demanding enough you should get lower GPU usage and power consumption, since it's rendering on an as-needed basis with what the display can handle. "Fast" keeps the GPU working at full speed and renders as many frames as it can, then only sends the latest frame to the display when the display is ready. "Fast" should give lower input lag at the expense of power consumption.

 

I don't know what the 4090 can handle as an undervolt, but an 80% power cap can be a huge performance hit.
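
If you want to see what that 80% actually works out to in watts, you can read it back from the driver. Here's a small sketch using the nvidia-ml-py (pynvml) package; treat it as illustrative, not gospel:

import pynvml  # pip install nvidia-ml-py

# Read the default and currently enforced power limits plus live draw.
pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)            # first GPU
name = pynvml.nvmlDeviceGetName(handle)
if isinstance(name, bytes):                              # older pynvml returns bytes
    name = name.decode()
default_w = pynvml.nvmlDeviceGetPowerManagementDefaultLimit(handle) / 1000
enforced_w = pynvml.nvmlDeviceGetEnforcedPowerLimit(handle) / 1000
draw_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000
print(f"{name}: default {default_w:.0f} W, enforced {enforced_w:.0f} W, drawing {draw_w:.0f} W")
pynvml.nvmlShutdown()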

 

With a PBO offset of -30, you may notice instability when your system is idle or doing less demanding tasks. A weaker core will crash its threads as it boosts up and down its frequency range. You also run the risk of corrupting your Windows installation, since Windows always has lightly-threaded work going in the background. This is a decent program to use to see if you are truly stable:

 

GITHUB.COM

Stability test script for PBO & Curve Optimizer stability testing on AMD Ryzen processors - sp00n/corecycler

 

  • Like 1

48 minutes ago, cusideabelincoln said:

 

 

Framerates above your max refresh rate only matter if you do competitive online games, and even then the benefit is just some milliseconds of lower input lag. 

 

G-Sync should turn off if your framerate does happen to exceed your max refresh rate. Setting a cap or forcing Vsync should keep this from happening. What works best can vary from game to game though. There's only a performance hit to having both Gsync and Vsync on at the same time if Vsync is also acting like a framerate cap.

 

The difference between Vsync "On" and "Fast" is how it's buffered. "On" should be typical triple buffering, which means that if the game isn't demanding enough you should get lower GPU usage and power consumption, since it's rendering on an as-needed basis with what the display can handle. "Fast" keeps the GPU working at full speed and renders as many frames as it can, then only sends the latest frame to the display when the display is ready. "Fast" should give lower input lag at the expense of power consumption.

 

I don't know what the 4090 can handle as an undervolt, but an 80% power cap can be a huge performance hit.

 

With a PBO offset of -30, you may notice instability when your system is idle or doing less demanding tasks. A weaker core will crash its threads as it boosts up and down its frequency range. You also run the risk of corrupting your Windows installation, since Windows always has lightly-threaded work going in the background. This is a decent program to use to see if you are truly stable:

 

GITHUB.COM

Stability test script for PBO & Curve Optimizer stability testing on AMD Ryzen processors - sp00n/corecycler

 

 

 

Thanks!! I'm going to change the power cap back to 100% on the 4090 and see if it makes a difference with the coil whine. I really think the frame cap and Vsync made the most difference. It's much quieter now.

 

As for CoreCycler - so I launch "Run CoreCycler.bat", and according to the FAQ I should technically run this for 96 hrs (12 hrs x 8 cores)? Can I use the system while this is running? Or just let it go for 4 days?


13 hours ago, atom631 said:

 

 

Thanks!! I'm going to change the power cap back to 100% on the 4090 and see if it makes a difference with the coil whine. I really think the frame cap and Vsync made the most difference. It's much quieter now.

 

As for CoreCycler - so I launch "Run CoreCycler.bat", and according to the FAQ I should technically run this for 96 hrs (12 hrs x 8 cores)? Can I use the system while this is running? Or just let it go for 4 days?

 

You can use it while it runs, but if you're gaming or doing other high-CPU loads then let it run longer than 96 hrs.


2 hours ago, atom631 said:

@cusideabelincoln so far it's been running for around 17 hrs. As far as I can tell, it hasn't thrown an error code. It's currently on iteration 706 as I type this. I'm assuming that if it throws a code, it halts the script?

 

I think the default setting is that it will keep running the test on each core that doesn't fail. At the start/end of each cycle it will tell you which cores it's testing and which have thrown an error.

 

You can change the behavior through its config file (which does have helpful comments to tell you what the settings do). Did it default to AVX? If so and you end up passing the AVX test, I would recommend testing using SSE. When I was tweaking PBO, SSE would throw more errors than AVX.


1 hour ago, cusideabelincoln said:

 

I think the default setting is that it will keep running the test on each core that doesn't fail. At the start/end of each cycle it will tell you which cores it's testing and which have thrown an error.

 

You can change the behavior through its config file (which does have helpful comments to tell you what the settings do). Did it default to AVX? If so and you end up passing the AVX test, I would recommend testing using SSE. When I was tweaking PBO, SSE would throw more errors than AVX.

 

 

So looking through the config.ini file, I see these 3 settings:

 

First one: 

# The program to perform the actual stress test
# The following programs are available:
# - PRIME95
# - AIDA64
# - YCRUNCHER
# You can change the test mode for each program in the relavant [sections] below.
# Note: For AIDA64, you need to manually download and extract the portable ENGINEER version and put it
#       in the /test_programs/aida64/ folder
# Note: AIDA64 is somewhat sketchy as well
# Default: PRIME95
stressTestProgram = PRIME95

 

 

Second One:

# Stop the whole testing process if an error occurred
# If set to 0 (default), the stress test programm will be restarted when an error
# occurs and the core that caused the error will be skipped in the next iteration
# Default: 0
stopOnError = 0

 

 

Third one:

# The test modes for Prime95
# SSE:    lightest load on the processor, lowest temperatures, highest boost clock
# AVX:    medium load on the processor, medium temperatures, medium boost clock
# AVX2:   heavy load on the processor, highest temperatures, lowest boost clock
# AVX512: only available for certain CPUs (Ryzen 7000, some Intel Alder Lake, etc)
# CUSTOM: you can define your own settings for Prime. See the "customs" section further below
# Default: SSE
mode = SSE
 

 

If I'm reading this correctly, it's running Prime95 in SSE mode, and if a core throws an error, it will restart the testing. Since I'm now up to iteration 877, I'm guessing so far I am error-free.

 

The only thing that seems weird is that the core test seems very short. I believe it's only set to 10 seconds by default before it moves to the next core. Is that enough time? If not, and I edit config.ini... I have to restart the test, right?


14 minutes ago, atom631 said:

 

 

So looking through the config.ini file, I see these 3 settings:

 

First one: 

# The program to perform the actual stress test
# The following programs are available:
# - PRIME95
# - AIDA64
# - YCRUNCHER
# You can change the test mode for each program in the relavant [sections] below.
# Note: For AIDA64, you need to manually download and extract the portable ENGINEER version and put it
#       in the /test_programs/aida64/ folder
# Note: AIDA64 is somewhat sketchy as well
# Default: PRIME95
stressTestProgram = PRIME95

 

 

Second One:

# Stop the whole testing process if an error occurred
# If set to 0 (default), the stress test programm will be restarted when an error
# occurs and the core that caused the error will be skipped in the next iteration
# Default: 0
stopOnError = 0

 

 

Third one:

# The test modes for Prime95
# SSE:    lightest load on the processor, lowest temperatures, highest boost clock
# AVX:    medium load on the processor, medium temperatures, medium boost clock
# AVX2:   heavy load on the processor, highest temperatures, lowest boost clock
# AVX512: only available for certain CPUs (Ryzen 7000, some Intel Alder Lake, etc)
# CUSTOM: you can define your own settings for Prime. See the "customs" section further below
# Default: SSE
mode = SSE
 

 

If I'm reading this correctly, it's running Prime95 in SSE mode, and if a core throws an error, it will restart the testing. Since I'm now up to iteration 877, I'm guessing so far I am error-free.

 

The only thing that seems weird is that the core test seems very short. I believe it's only set to 10 seconds by default before it moves to the next core. Is that enough time?

 

You'll have to watch the command prompt after it completes an iteration to see if any core throws an error. When an iteration ends (which is after it tests each core once) it will have a line that says if any core has thrown an error.  There's also a log file you can check that has everything displayed in the command prompt.

 

The default time is fine. The goal of the program is to force the highest clocks on the PBO curve, and the highest clocks only happen when the core is at lower temperatures and power consumption; a longer test produces more heat, which forces the core to clock lower. It is good to test stability at higher temperatures and power consumption too, but that's not the goal of this particular tool.

 

This is what you'll see if there's an error:

 

Spoiler

18:09:51 - Completed the test on Core 11 (CPU 22)
The following cores have thrown an error: 3
                 + ----------------------------------
                 + Iteration complete
                 + ----------------------------------

18:09:51 - Iteration 2
----------------------------------
                 + Alternating test order selected, building the test array...
                 + The final test order:  0, 6, 1, 7, 2, 8, 3, 9, 4, 10, 5, 11
                 + Still available cores: 0, 6, 1, 7, 2, 8, 3, 9, 4, 10, 5, 11
                 + The selected core to test: 0
18:09:51 - Set to Core 0 (CPU 0)
                 + Setting the affinity to 1
                 + Successfully set the affinity to 1
           Running for 10 seconds...
                 + 
                 + 18:09:51 - Tick 1 of max 1
                 +            Remaining max runtime: 10s
                 +            The remaining run time (8) is less than the tick interval (10), this will be the last interval
                 + 18:09:59 - Suspending the stress test process for 1000 milliseconds
                 +            Suspended: True
                 +            Resuming the stress test process
                 +            Resumed: True
                 + 18:10:01 - Getting new log file entries
                 +            Getting new log entries starting at position 316 / Line 9
                 +            The new log file entries:
                 +            - [Line 10] Self-test 11520K passed!
                 +            New file position: 342 / Line 10
                 + 18:10:01 - Checking CPU usage: 4.1%
                 + One last error check before finishing this core
                 + 18:10:02 - Checking CPU usage: 4.1%
18:10:03 - Completed the test on Core 0 (CPU 0)
                 + Still available cores: 6, 1, 7, 2, 8, 3, 9, 4, 10, 5, 11
                 + The selected core to test: 6
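
If you don't want to babysit the window, you can also just scan the log files for that line. A rough sketch; the log folder path below is hypothetical (point it at wherever your CoreCycler install writes its logs), and the match string is based on the output above:

import glob

# Flag any "have thrown an error" lines in CoreCycler's log files.
log_dir = r"C:\corecycler\logs"        # hypothetical path, adjust to your install
for path in glob.glob(log_dir + r"\*.log"):
    with open(path, errors="ignore") as f:
        for line in f:
            if "thrown an error" in line.lower():
                print(f"{path}: {line.strip()}")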

 


12 minutes ago, cusideabelincoln said:

Nvidia is finally moving past Y2K software

 

 

No more forced login

No more absurdly slow UI

 

I'd rather they stick with the current NCP UI. It's simple, it works, and it's not some bloated POS like AMD's and Intel's (and GFE for that matter, but at least that's optional).


15 hours ago, atom631 said:

@Spork3245 so I wound up scoring a 77" G3 for the price I wanted to pay, lol. Back goes the S89C.

 

Now I need to decide what to do with this whiny 4090. It's really driving me crazy.

 

Geeze, how much did you pay and where did you get it from? MLA is the real deal from what I've seen. Really fixes the crushed grays I constantly see on most OLEDs.

 

With the coil whine, the only fix will be to RMA/exchange it - it's completely random, and no brand or model is better than the others.


On 2/17/2024 at 4:32 PM, stepee said:

 

The suspicion is that this is going to land at the official driver level from Nvidia, right? I'll wait for that, but as someone who is over SDR and amazed at how common it still is, this is good stuff. I like Special K and only just discovered its HDR10 level, which makes it easier, but the fewer steps the better.

It's in the new Nvidia app beta.

  • Shocked 1

10 minutes ago, Keyser_Soze said:

I've never had an issue with logging into the program, but a lot of that does sound nice. Hopefully it clarifies a lot of the settings (such as G-Sync and V-Sync actually needing to both be on).

I've never understood the reluctance to log into GeForce.

I've played with the app a little and it seems to be a big improvement -- super snappy, and has all the settings in one place.

  • Halal 1
