Synapt

Performance Curiosity

Recommended Posts

Curious to get some first-hand answers from those of you who have played GTA IV on an older box and then upgraded. I'm finding GTA IV actually played notably smoother on my old system than it does on this new box. Has anyone else upgraded pretty heavily and found the game runs kind of chunky afterward, even on lower settings than the old box used?

And since I'm sure someone will ask just how big a difference in my case:

Old box:

WinXP 32-bit

E8400 (Clocked normally at 3.6GHz most of the 4 years I had it)

4GB RAM

9800GTX+ 512MB (Originally), 460 GTX 1GB (For close to the past year)

New box (Built roughly a few weeks ago):

Win7 64-bit

i5 2500K (Stock speed of 3.3GHz at load)

16GB RAM

460 GTX (Same from previous box)

Now, you'd think the game would run pretty smoothly even at near-max settings (factoring in the VRAM limiting the draw distance to around 75 when everything is on 'High'), which it did even on my old E8400 box. Yet on this one I oddly get pretty noticeable drag, though it really only seems to be this game; other (higher-end) games tested perform incredibly well.

Anyone else ever run into an experience like this with GTA IV on a higher-end build?


My old box could never have run IV: a Pentium III, 512MB RAM maxed out, crappy onboard graphics with no option to upgrade as it had no PCIe slots, and it was running XP Pro.


Ah yeah, lol, that would be a little under GTA's specs.


However... my new(er) PC had integrated ATI graphics. Assuming it would run better than my laptop, it was actually worse. I threw in a 300-watt PSU, then an Nvidia card, and it works great! You can see the difference between my older GTA vids and my stuff after my EVOC video.


I'd suggest being massively careful with the 300W PSU. Most current nVidia cards spec that you should have at -minimum- a 450-550W unit or higher, depending on whether the card is lower or higher end (and that's factoring in the average power draw of the rest of the system). 300W could be cutting it pretty rough and end up under-delivering to one piece of hardware or another.
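To put rough numbers on that headroom point, here's a quick sketch in Python. The wattage figures below are illustrative guesses for a build of that era, not from any card's actual spec sheet, and the 80% rule of thumb is just a common conservative margin:

```python
def psu_headroom(psu_watts, component_draws, margin=0.8):
    """Estimate spare capacity on a PSU.

    Only `margin` (default 80%) of the rated wattage is treated as
    safely usable for sustained loads, a common rule of thumb.
    Returns positive watts of headroom, or a negative overdraw.
    """
    usable = psu_watts * margin
    return usable - sum(component_draws.values())

# Illustrative draws in watts (assumed, not measured):
draws = {"cpu": 95, "gpu": 160, "board_ram_drives_fans": 60}

print(psu_headroom(300, draws))  # 240 usable vs 315 drawn: overdrawn
print(psu_headroom(450, draws))  # 360 usable vs 315 drawn: headroom left
```

With the 300W unit the sketch comes out overdrawn, which is exactly the starved-component scenario described above; the 450W unit leaves comfortable slack.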


I created a topic similar to this (located here ) a while back to gauge my system (see signature) to what others experienced.

I love my high performance computer as well as car mods so I am willing to live with 30 minutes of game play on LCPDFR. That being said, I think a lot of it is that our machines are too advanced for the poor PC port of GTA-IV. The mods (high poly cars, LCPDFR, and others) do have something to do with it as well, meaning that it will inevitably crash because it's running things it wasn't designed to do.

Edited by Ofc.Chad


I've only put one modified car in mine so far, and it's a NOOSE model replacement at that, so it wasn't even being 'used' outside of my own vehicle at the time.

Edit:

Asuna: Yeah, 300W minimum; the 200-series cards were mostly rebranded/upgraded 8- and 9-series parts, but 300W is still pushing it a bit depending on what else you've got.

Edited by Synapt


Try upgrading to a GTX 570 or an ATI 7770. I have 6 gigs of RAM and an AMD Athlon, and I run on high settings smoothly.


Dude, if my 9800GTX+ on an E8400 ran things fine, and the same card did too on the same old box, I'm pretty sure it's not my card, lol.

The 400 series was the first major stepping release from nVidia in a while, using the Fermi core; in the grand scheme of things, the 500 series isn't exactly a dramatic difference from my 460, lol.

"AMD Athlon" doesn't really tell a lot either, as there are multiple Athlon release models (Athlon XP, 64, X2, II).

The main thing I'm noticing is that GTA IV seems to tax all the cores of my CPU to their max, which is odd, as it never really did that even on my E8400. I'm wondering if it's not quite quad-core optimized. I might try tweaking the core affinity some and reducing it to just 2; perhaps it's so unoptimized (which we all know GTA IV kind of is in the long run) that it's actually creating its own bottleneck in trying to use more than 2 cores...
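For what it's worth, the "all cores pegged" pattern is easy to check for. A tiny sketch; the 90% threshold is my own arbitrary pick, and the per-core numbers would come from Task Manager or whatever monitoring tool you use:

```python
def looks_core_bound(per_core_loads, threshold=90.0):
    """True when every core is pegged near its max: the situation where
    a game plus background tasks leaves the CPU no headroom at all."""
    return all(load >= threshold for load in per_core_loads)

# Per-core utilization percentages, e.g. read off Task Manager:
print(looks_core_bound([98, 97, 99, 96]))  # all four pegged
print(looks_core_bound([95, 40, 35, 30]))  # plenty of idle capacity left
```

If the first case is what you see with the game alone, limiting its affinity at least leaves a core or two free for everything else.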


New box (Built roughly a few weeks ago):

Win7 64-bit

i5 2500K (Stock speed of 3.3GHz at load)

16GB RAM

460 GTX (Same from previous box)

Anyone else ever run into an experience like this with GTA IV on a higher-end build?

Your current CPU is the bottleneck in your entire rig, next to the GPU. GTA IV was a poor console port, and so it does not fully utilize hardware capabilities like additional GPUs (SLI/Crossfire) or additional VRAM (only DRAM). Thus, the performance of the game depends primarily on your CPU, and then your GPU. If you want to see a big increase in performance quickly without buying a new GPU, I suggest you OC your CPU from the stock 3.30GHz to at least 4.00GHz, if not 4.50GHz. The K-series chips are meant for OCing, so they'll take some upping of frequencies and voltages without a hitch. You should see some major improvements once your CPU is at least up to speed. That is the misconception most people have with this game: that a better GPU leads to better performance. That's wrong, and in most cases it can be addressed by improving the performance of the CPU instead.

Lastly, performance also depends on the version of GTA IV you have installed. 1.0.4.0 will give you great performance with great graphics, while 1.0.7.0 will give you crappy performance with "ilovepoop" (quoted from another user) graphics. This is what I've come to find after doing a myriad of performance tests with games on several of my different rigs. I get 80 FPS while playing on 1.0.4.0; with iCEnhancer, I get only 40 FPS. It decreases performance, but I think it's worth it. Also worth noting: 1.0.7.0 only does 40 FPS at best, while 1.0.4.0 does 80 FPS all the time, and that alone tells you why 1.0.4.0 is better. However, fancy eye candy is not why I use 1.0.4.0; I can do without iCEnhancer. 1.0.4.0 is simply the best patch in terms of performance and stability. 1.0.7.0 crashes all the time and is not recommended; I only installed it to compare 1.0.4.0 against 1.0.7.0, and I never want to go back to 1.0.7.0 again.

For your reference, here is what I have in my current rig:

Core i5 2500K @ 4.5GHz, 1.336V

EVGA GTX 580 SC

16GB Corsair Vengeance DDR3 RAM 9-9-9-24 1.5v

ASUS P8P67 Pro

Edited by Raggio


Explain exactly how a processor with a far better core design (Sandy Bridge) is even remotely a bottleneck compared to a Core 2 Duo on the Wolfdale core?

I never said 'a better GPU means better performance' at all, at any point. What I did say is that this exact same card performed without any FPS loss on what was by all means a far inferior system (4GB RAM tops, in a 32-bit environment; even with GTA's LAA, factoring in the memory overhead of my GPU, I should have been bottlenecked far more severely on the old box).

The entire Sandy Bridge line in general will still outperform my old E8400, even considering I had that overclocked. I should not have to OC/turbo this chip at all to even remotely match it, as it natively surpasses it.


Explain exactly how a processor with a far better core design (Sandy Bridge) is even remotely a bottleneck compared to a Core 2 Duo on the Wolfdale core?

It's not the technological aspects of the two CPUs that are bottlenecking the game per se; rather, it is the poorly written state of the game that is causing the bottlenecks. Generally, the better the technology, the more room for OCing and the lower the occurrence of problems, which is why gamers recommend OCing the i5 2500K to at least 4.00GHz. Even though the Core i5 Sandy Bridge processor is a quad core and doesn't compare directly to the Core 2 Duo Wolfdale, there will still be bottlenecks if the game was ported poorly. I'm not an electrical engineer and haven't compared their designs, but as a computer and OC guru, I can say that OCing the Core i5 2500K to the same speed as your Core 2 Duo should bring their performance up to par with each other. Additional frequency increases up to and past 4.00GHz should bring major improvements. Since every computer and its hardware are different, I cannot say for certain by how much it will increase, but I can definitely assure you that you'll see an increase in performance. As many OCers will tell you, high-end GPUs are always bottlenecked by CPUs that are not clocked to 4.00GHz or higher, and they will recommend 4.20GHz for the i5 2500K. I have seen an increase in the performance of my GPU simply by OCing my CPU, so I can attest to that. However, I am unable to explain the principles behind why the design matters, as I've already said. Hopefully, someone else can explain that.

I never said 'a better GPU means better performance' at all, at any point. What I did say is that this exact same card performed without any FPS loss on what was by all means a far inferior system (4GB RAM tops, in a 32-bit environment; even with GTA's LAA, factoring in the memory overhead of my GPU, I should have been bottlenecked far more severely on the old box).

And I never said you said that either. I'm stating that that's the common misconception most people appear to have, so that others reading this thread have an idea of what's actually causing bottlenecks on their systems and know what to look for. You also have the misunderstanding that more VRAM and a 32-bit OS are bottlenecking GTA IV. This is still not the case. As I've already stated, GTA IV is a poor console port and therefore does not fully utilize the hardware. In addition, most PC games don't use more than 2GB of RAM anyway. What GTA IV does use instead is VRAM, of which my GTX 580 only has 1.5GB, and the game currently uses 1.2GB of that at max. I've tried allowing all 1.5GB to be used by tweaking the game's settings, but no dice. It appears I'll have to get a GPU with more VRAM for the game to have more actual memory to use, since it uses little, if any, of the other 16GB of DRAM. Furthermore, no game that I know of at the moment is written entirely for a 64-bit OS; I only have programs like Photoshop that ship specific 64-bit versions and actually utilize over 3.5GB of the available DRAM. Programs written for a 32-bit OS will not use anywhere near 4GB of DRAM, simply because they cannot. Therefore, what's left to blame is most likely the CPU and any software/BIOS settings related to it. That would explain why your GPU faces more of a performance drop on your new rig, which supposedly has less room for bottlenecks, than it did on your former rig.

The entire Sandy Bridge line in general will still outperform my old E8400, even considering I had that overclocked. I should not have to OC/turbo this chip at all to even remotely match it, as it natively surpasses it.

In theory, it should surpass it, and it may also do so for some of your other software. However, not every piece of software is written in a way that fully utilizes new hardware as it comes out. In some instances I've seen, programs may run better on a dated OS and hardware than they do on newer systems, depending on what they were written for and when. Yes, the hardware is better, but don't assume it can outperform everything the former rig ran. That's where the trouble with deciding whether or not to buy new parts comes in, and where OCing becomes fun.

Hope you find all this information helpful. Building computers, troubleshooting and fixing problems with computers, and OCing are my hobbies, in addition to making mods for GTA.

Edited by Raggio


I'm well aware of what a lot of 'overclockers' will claim, but much like you, I've been dealing with comps for a long time, and my experience tends to diverge heavily from what most people seem to believe (that you need extra CPU clock to 'handle' a higher-end GPU).

And I didn't say the 32-bit OS + VRAM was bottlenecking; I said it -should- bottleneck far more than this system would (while PAE can give access to the full 4GB of RAM through virtual routing by the CPU, it can itself add slight overhead in cases like this).

And while it's -rare- for GTA IV to hit over 2GB of system RAM, it is far from impossible (right this very moment, the game is using exactly 1.6GB of system RAM and 718MB of the GPU's available RAM). GTA IV is still, however, a 32-bit application build, and even with LAA it will max out at 3GB of actual memory space, though I'm not really concerned about hitting that.
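For reference, a sketch of the 32-bit address-space limits in play, as I understand the usual Windows figures (2GB by default; 3GB needs LAA plus the /3GB boot switch on 32-bit Windows; on a 64-bit OS an LAA process can actually address the full 4GB; double-check against your exact setup):

```python
def user_address_space_gb(laa, os_64bit, boot_3gb_switch=False):
    """User-mode virtual address space (GB) for a 32-bit Windows
    process, per the commonly documented limits."""
    if not laa:
        return 2          # default 2GB user / 2GB kernel split
    if os_64bit:
        return 4          # on a 64-bit OS an LAA process gets the full 4GB
    return 3 if boot_3gb_switch else 2   # /3GB boot switch needed on 32-bit

print(user_address_space_gb(laa=True, os_64bit=False, boot_3gb_switch=True))  # old XP-style box
print(user_address_space_gb(laa=True, os_64bit=True))                         # Win7 x64 box
```

So the ceiling depends on the OS as much as on the LAA flag in the executable.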

You really seem to just be quoting me on stuff I've already pointed out, dunno why; not sure if you're ignoring half of what I'm saying or breezing through it without understanding, lol.

Long story short, my theory was correct after all: adjusting the affinity to use only two cores solved everything. Performance is working pretty much as it should be, and I'm no longer having odd rendering bugs either (e.g., alt-tabbing out and back in causing the entire world to de-render).
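For anyone wanting to apply the same two-core trick without clicking through Task Manager every launch, a sketch of the idea; the Win32 call is the standard SetProcessAffinityMask, shown in comments since it only runs on Windows:

```python
def affinity_mask(cores):
    """Build the bitmask SetProcessAffinityMask expects: bit N set
    means the process may run on logical core N."""
    mask = 0
    for core in cores:
        mask |= 1 << core
    return mask

print(affinity_mask([0, 1]))  # cores 0 and 1 -> binary 0b11

# Applying it on Windows looks roughly like:
#   import ctypes
#   kernel32 = ctypes.windll.kernel32
#   kernel32.SetProcessAffinityMask(kernel32.GetCurrentProcess(),
#                                   affinity_mask([0, 1]))
# On Linux, the stdlib takes a set of core IDs directly instead:
#   os.sched_setaffinity(pid, {0, 1})
```

Same effect as the Task Manager checkboxes, just scriptable.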

Edit: Though I now seem to be Flintstone-ing it, as my feet are protruding through the bottom of the vehicle most of the time, lol.

Edited by Synapt


You really seem to just be quoting me on stuff I've already pointed out, dunno why; not sure if you're ignoring half of what I'm saying or breezing through it without understanding, lol.

I'm quoting the specific parts you're talking about and then giving examples as to why they do or do not bottleneck. I know you said 'should', and indeed I replied with either a confirmation or a disagreement, as well as the reasons why. I'm not ignoring anything; I've quoted you part by part and tried to answer with examples. I just don't have an answer for why your Sandy Bridge processor gives you less performance than your Wolfdale processor, other than GTA IV being a poorly coded game that needs its settings changed, which I've read you've done. As long as you have it running the way you like it, great! I'm just also letting you and other interested people here know what OCing can do, since not everyone is an OCer or an experienced computer guru like us. ;)

Long story short, my theory was correct after all: adjusting the affinity to use only two cores solved everything. Performance is working pretty much as it should be, and I'm no longer having odd rendering bugs either (e.g., alt-tabbing out and back in causing the entire world to de-render).

Interesting. I've read a few articles, including this one here, where using two cores instead of four on a quad core actually decreases performance, but those didn't involve setting core affinities to force the program onto certain cores only. The ones I've read mainly dealt with hyperthreading, so there you go. As long as it's working for ya! :)

Edit: Though I now seem to be Flintstone-ing it, as my feet are protruding through the bottom of the vehicle most of the time, lol.

LOL! Are you using any vehicle and/or ped mods by any chance? I've only seen this happen with the dog ped mod I've installed (his lower body and tail protrude out from under the car, LOL).

Edited by Raggio


I think this discussion is quite helpful, especially to those who are thinking of hardware upgrades and the like in order to run the latest and greatest games. Hopefully more users will come across this thread, like I have, and find it useful as well. Nice to see some constructive discussion going on here, along with some possible problems and solutions!

You really seem to just be quoting me on stuff I've already pointed out, dunno why; not sure if you're ignoring half of what I'm saying or breezing through it without understanding, lol.

I kind of see it a different way. You are saying that, in theory, your graphics card should not drop in performance moving from your "old box" to the "new box", since the old one has more room for bottlenecks than the new one, and then Raggio explained why it would still cause bottlenecks in the instances you mentioned. So in my opinion, he didn't ignore or breeze through what you said and asked, but instead demonstrated, with reasoning, the things you were pointing out and why changing them would or wouldn't work. I'm also a computer guru just like you guys, and I understand that GTA IV is a mess of a port as far as that's concerned.

Interesting. I've read a few articles, including this one here, where using two cores instead of four on a quad core actually decreases performance, but those didn't involve setting core affinities to force the program onto certain cores only. The ones I've read mainly dealt with hyperthreading, so there you go. As long as it's working for ya! :)

I'm quite curious about this as well. So Synapt, changing the affinity brought performance back up to par with your old processor? If you found a way to improve performance like that, not to mention solve graphical issues, then I think it's worth noting, since not many people will know about that as a way to solve bottlenecks, let alone know about setting processor affinities!

Edited by Guru Nathan


Neg, just a NOOSE override model; for some reason the bottom parts of my feet/heels don't collide with the interior properly. Admittedly, I can't recall if this has happened with this model in the past; this is the first time in nearly 4-5 months that I've even played.

And the one thing worth remembering is that hyperthreading != cores. i5s generally have no HT, and while i7s have HT, HT can sometimes give better performance than a true extra core, since it's just an extra thread on the core itself, whereas actually trying to utilize multiple physical cores can be an issue.

The reason I think restricting the affinity to 2 cores is working is that, like I originally noted, when using all 4 cores GTA IV was effectively maxing out the loads, and I mean literally pushing 100% total for -it alone-. My guess is that the system was trying to 'fight' to offload some work to other applications in the background (such as the firewall and antivirus), and that's where the performance drag was. I could probably even set it to three cores and be okay, really, since the things I've got running in the background could definitely utilize a single core all on their own.


I'm quite curious about this as well. So Synapt, changing the affinity brought performance back up to par with your old processor? Did you also mess around with hyperthreading as well? If you found a way to improve performance like that, not to mention solve graphical issues, then I think it's worth noting, since not many people will know about that as a way to solve bottlenecks, let alone know about setting processor affinities!

As noted in my other post, i5's have no HT, so neg.

And to be honest, the whole "set core affinity" fix has been around for many games since the early 2000s.

Speaking as one of the guys from the America's Army Game project: for many years before our 2.7.0 release, we had to tell people with more than 2 cores to set their affinity to only two (though back when the game was still pre-2.7, a lot of people had HT on Pentiums, which was horrible compared to the HT in the Core i* chips), and people on the Pentium HT models we had to tell to set a single core (because, as noted, HT in the P4 was horrid), else the game would perform horribly. Mostly it was due to issues in the Unreal Engine itself, but still, a lot of games simply weren't "optimized" for more than two cores, and some still aren't. Even as predominant as 64-bit is becoming, many game developers still release purely 32-bit builds; granted they have LAA and sometimes DX11 support, you still don't see many shipping separate 64-bit binaries...

Edit: It is effectively going on 7 AM here, so I'm gonna finally crash for a bit, lol, be back a bit later.

Edited by Synapt


As noted in my other post, i5's have no HT, so neg.

And to be honest, the whole "set core affinity" fix has been around for many games since the early 2000s.

Speaking as one of the guys from the America's Army Game project: for many years before our 2.7.0 release, we had to tell people with more than 2 cores to set their affinity to only two (though back when the game was still pre-2.7, a lot of people had HT on Pentiums, which was horrible compared to the HT in the Core i* chips), and people on the Pentium HT models we had to tell to set a single core (because, as noted, HT in the P4 was horrid), else the game would perform horribly. Mostly it was due to issues in the Unreal Engine itself, but still, a lot of games simply weren't "optimized" for more than two cores, and some still aren't. Even as predominant as 64-bit is becoming, many game developers still release purely 32-bit builds; granted they have LAA and sometimes DX11 support, you still don't see many shipping separate 64-bit binaries...

Edit: It is effectively going on 7 AM here, so I'm gonna finally crash for a bit, lol, be back a bit later.

XD Oops, accidentally forgot you were talking about the i5 there, lol. For a moment, I was thinking about the i7 while reading the post in the link that Raggio posted, which talked about i7 processors, hence the hyperthreading.

That discussion about the America's Army project sounds very interesting. I can see how frustrating it must have been with all those hardware limitations and such. Still, I'm hoping that in the future we'll see developers shift over to the 64-bit architecture and start to utilize all this high-end hardware that hardcore gamers are buying.

Edited by Guru Nathan


I created a topic similar to this (located here ) a while back to gauge my system (see signature) to what others experienced.

I love my high performance computer as well as car mods so I am willing to live with 30 minutes of game play on LCPDFR. That being said, I think a lot of it is that our machines are too advanced for the poor PC port of GTA-IV. The mods (high poly cars, LCPDFR, and others) do have something to do with it as well, meaning that it will inevitably crash because it's running things it wasn't designed to do.

Get an SSD and put your game on that, so when it crashes, it crashes fast and you're back up and playing in under a minute. (I love my SSD!)

Wasn't GTA IV originally designed to run on 8 cores? I remember reading it was some crazy amount.

My current setup is amazing: 650 bucks for everything but a case, and I love it! An i5-2400 with a 6870, 8GB of RAM, and an SSD. I will never go back to a conventional hard drive for my OS/games.


The 2600K is the i7. I was -gonna- get it, but the only major difference in the long run was the HT, which, while nice, wasn't worth the $100 at the time, I felt :P

And yes, even more so when we'd have to continuously tell people they basically had to gimp the game because the comp was too 'modern' for it, lol. Some games have started shipping 64-bit versions too, just not many.


@stormoffires

I have an Intel SSD on my Newegg watch list, just holding my cards for the right time to strike :smile:

@Synapt

I love my 2600K i7; the only things I still need are better cooling (and the SSD). I don't feel comfortable OCing it yet without better cooling, but honestly, I haven't needed to OC it for the games I play, which aren't very many.


Chad: Get a Zalman; pretty much -any- Zalman cooler with the heatpipe+fin design will dissipate heat like crazy.

My Zalman 120mm on my E8400 had it running between 3.6GHz and 4GHz, usually never going over 50C.

On this box I have a Zalman 132mm cooler on my 2500K, and it peaks at about 40-45C per core at max so far (though in the summer that will probably go up a bit, as we get really humid summers here).


Perhaps the architecture of the (formerly) newest i5 lineup isn't being taken advantage of. Components get different priorities as time goes on, depending on the software of the day. I remember playing a game called Deus Ex (the original, not Human Revolution), and for multiplayer servers, in later years we found out that the game calls for older CPU instructions like MMX, 3DNow!, etc., and actually performs worse with a Core 2 Duo or Quad than with a single-core Pentium IV, or sometimes even a Pentium III. I'm sure other things like drivers factor in too, but when people first complained about GTA IV's performance on PC, the developers said that the highest graphics settings were intended for "future hardware", which is more or less a cover for doing a horrible job porting over a less-than-Crysis-like game engine and having it perform worse than said game on medium settings. Quite frankly, I don't know if GTA IV will ever work "properly" on any hardware config. The processor and GPU can definitely bottleneck each other, although I don't know for certain if that's what's happening here.

The bottom line here is that your hardware is good, but GTA IV hasn't reached the bottom of its barrel yet while scraping around for more PC hardware to gobble up.

Edited by unr3al

