Discussion in 'Article Discussion' started by CardJoe, 16 Mar 2009.
Some cheats I really don't get. There are a couple of games where cutscenes are pre-recorded. It destroys the resolution, you get artifacting, and it makes the game look really cheap and tacky. Look at Gears of War 2: a really nice looking game when you play, but the cutscenes (like the "war speech") look atrocious.
it's most likely because the ingame geometry isn't rigged for facial animation. they have to render from the original models, which means prerendering them. it also allows for a quick cut to a cutscene, which is nice - you don't sit around waiting for it to load any extra assets needed for only the cutscene.
of course that doesn't mean they should look awful - they're probably rendering them at a terribly low res or something. bit stupid, really. although i guess with DVDs it can get a bit cramped for space so it was probably one of the first things to get gimped.
Less design cheating could be possible if people would get out of the computer ice age and move up in the world. Devs are restricted by us (gamers) and can't do amazing things unless we all bring our rigs up to modern gaming standards. Also, did you look into what engine they are using?
When I did a computer graphics course at university we were constantly told to find cheats to render and model scenes faster/easier with it still looking like you did it properly.
that's a bit of a luddite way of looking at things. 'cheating' isn't cheating per se. occlusion culling (as the superset of backface culling) is a technique that is always applicable. who the hell wants to draw polys you'll never see? even on the most cutting edge hardware, this makes no sense. the fewer polys you draw offscreen, the more you can draw that players will actually _see_.
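To make the backface-culling part concrete, here's a minimal sketch in Python. It assumes counter-clockwise triangle winding and a fixed camera looking down -Z; real engines do this per-triangle on the GPU, but the test is the same dot product.

```python
# Backface culling sketch: skip any triangle whose normal faces away
# from the viewer, since it can never be visible on a closed mesh.
# Assumes counter-clockwise winding and a camera looking down -Z.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def is_backfacing(tri, view_dir=(0, 0, -1)):
    """True if the triangle faces away from the camera and can be culled."""
    v0, v1, v2 = tri
    normal = cross(sub(v1, v0), sub(v2, v0))
    return dot(normal, view_dir) >= 0

# Triangle wound to face the camera: its normal points toward +Z.
front = ((0, 0, 0), (1, 0, 0), (0, 1, 0))
print(is_backfacing(front))  # False: this one gets drawn
```

Reversing the winding order of the same three vertices flips the normal and the triangle gets culled; on a typical closed mesh this throws away roughly half the polygons before any pixel work happens.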
yeah jamie has it on the nose. the quicker you can do something whilst retaining 'it looks right' quality, the more of that stuff you can do, and the shinier it gets.
Of course you would, maximise CPU/GPU availability
I understand the efficiency side of it. I just don't get why you'd half-ass (cover up) something visually instead of working to fix the issue. Yes, it costs money to develop games, but if you're a veteran company these issues could be resolved instead of covered up.
I'm just saying, if you could build a more powerful engine (which requires beefy hardware) and it gets rid of the most common flaws in design, then why not do it? If it requires people to build supercomputers, then that's a good thing (advancement in gaming) / bad thing (cost of hardware).
Even taking this to game level design, for example a cs:source map.
Optimisation is the hardest part: you can easily design a good-looking game level with lots of brushes and textures, but as soon as you go to play it you get 10 fps.
That's usually due to poor or no optimisation: the whole level has to be drawn in-game, which kills frame rates.
There are many cheats:
3D skyboxes, the most common, making the map appear much larger than it is,
env_fog to limit detail in distant objects,
default models that eliminate the use of world brushes,
the nodraw texture on areas not visible to the player.
cs_assault is a good example: it uses a shadow texture in the warehouse rather than having the engine calculate a dynamic light source. This doesn't detract from the look, but it lessens the pressure on the game engine and your PC.
But the biggest cheat is the design of the level in the first place. Unfortunately, no level ever ends up the way you originally envisaged it, as you inherently design the level with the game engine's limitations in mind.
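The env_fog trick above boils down to simple distance-based fading. Here's a rough sketch of linear fog in Python; the parameter names are illustrative, not the Source engine's actual internals, but the idea is the same: past the fog end distance nothing is visible, so distant geometry can be low-detail or skipped entirely.

```python
# Linear distance fog sketch (illustrative, not Source's real code):
# objects fade out between fog_start and fog_end, so anything past
# fog_end never needs to be drawn in detail.

def fog_factor(distance, fog_start, fog_end):
    """Visibility of an object: 1.0 = fully visible, 0.0 = fully fogged."""
    if distance <= fog_start:
        return 1.0
    if distance >= fog_end:
        return 0.0  # completely hidden: safe to cull or swap for low detail
    return (fog_end - distance) / (fog_end - fog_start)

# An object 1500 units away, with fog running from 1000 to 2000 units,
# is half faded out:
print(fog_factor(1500, 1000, 2000))  # 0.5
```

The payoff is the cull: once `fog_factor` hits zero for a chunk of the map, the renderer never has to touch those brushes at all, which is exactly why a fogged map runs faster than an open one.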
You don't become a veteran company by forcing your users to spend thousands on new hardware. Look at Valve and Blizzard: their games have consistently low requirements. Ignoring the fact that there is no way any major engine released from now on will exclude the current console generation, people will always go for what looks better on the surface. Why have a game engine that looks 3 years behind something else if you can easily make imperceptible workarounds?
The gaming industry is going to remain stale if people don't upgrade and companies keep selling recycled ideas. If there is no advancement in gaming, then what's the point of having quad-cores, i7s, and 6GB of RAM if a game doesn't use the power? The console is another evil that is crippling the industry.
Valve created a great series which DID make people buy hardware to work with their innovations (HL -> HL2 / The Lost Coast / The Orange Box). But the keyword there is INNOVATIONS, something most companies leave out of their games. It might not have been a whole new rig, but it caused hardware changes. Yes, they have low requirements, but you cut out visuals, resolution, AA, AF, etc.
The devs build around us, and if we're lazy and never upgrade then they'll just build the same **** with a different colored box. Some companies are getting sick of this and have even gone so far as to require you to have Vista and a DX10 card.
That's the way Crytek thought, and look at them now, wanting to develop for consoles because PC gamers are retarded. I remember when I bought Crysis I had a terrible PC; I used to run BF2 on MID/HIGH settings. I put in Crysis and of course I had to play it on LOW, but it was amazing anyway. The technology that drives it is not only about how good it looked (and it looked better than BF2 while running at the same speed); the engine is incredibly optimized for whatever your PC can handle.
But of course, everyone wants to play it on "Very High", so they don't sell a copy, and where does your so-called "evolution" go? Down the drain. People don't get that it's not about the setting you play at, but how it looks on your PC compared to other games (Crysis at medium quality looks better than COD4 maxed out, and yet people preferred COD4 "cuz they could max it"). GTA IV, on the other hand, is a HOG, a nasty HOG.
erm, there is no half-assedness here. lazy programming would be to do everything 'properly' as you seem to be implying. the fact of the matter is that even the most cutting edge hardware will fall flat on its face if forced to model everything accurately in even 2 generation-old games. we have a word for this method of doing things in computer science: "naive".
developers do program for the top tier of hardware - you see more shinies on higher settings. the fact is they still have to program for a wide gamut of hardware - not everybody is made of money and can afford to buy a £2k rig every 2 years.
your point about HL2 is also completely false. the Source engine was designed from scratch to scale extremely well with hardware that at launch would be considered past it. and you know what? it cheats as much as every other game out there. the buildings collapsing in episode 1 and 2? all precalculated. lighting? precalculated. it does occlusion culling. hell, it does every trick in the book (of which there are many - look up "GPU gems". my PhD supervisor has contributed a bunch of chapters in those books). these things are the REASON it was so good, not an excuse.
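A toy illustration of why the precalculated lighting mentioned above is such a win: the expensive per-texel light computation happens once, offline, and at runtime the engine just looks the result up. The falloff formula here is a stand-in for illustration, not what Source's actual lighting compiler computes.

```python
# Precomputed lighting sketch: bake brightness per texel once, then
# sampling at runtime is a plain array lookup with no light math.
# The inverse-square-ish falloff is illustrative, not Source's model.

def bake_lightmap(width, height, light_pos, intensity):
    """Run once, offline: store a brightness value per texel."""
    lightmap = []
    for y in range(height):
        row = []
        for x in range(width):
            d2 = (x - light_pos[0])**2 + (y - light_pos[1])**2
            row.append(intensity / (1.0 + d2))
        lightmap.append(row)
    return lightmap

def sample(lightmap, x, y):
    """Run per frame: just an index, however complex the baked lighting was."""
    return lightmap[y][x]

lm = bake_lightmap(8, 8, light_pos=(4, 4), intensity=10.0)
print(sample(lm, 4, 4))  # 10.0 directly under the light
```

The bake step could take minutes or hours (real map compiles often do) without costing the player a single frame, which is the whole point: static lights never change, so computing them every frame would be pure waste.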
(whoever chose the title of this blog post [blost?] is an unwise individual! i wag my finger at you!)
+What Fod said - everything is a cheat. Even full-on pre-rendered raytracing a la DreamWorks movies is a cheat - they don't render true photorealism, because to do that you'd need to simulate hundreds of interactions for each of trillions of photons. Computer rendering eats every ounce of processing power you can throw at it, so what it comes down to is working out how much processing power you can afford to spend on rendering a scene, then squeezing as much image quality out of it as you can. That means eliminating unnecessary overheads wherever possible by aggressively culling occluded objects, pre-calculating shadow masks for static objects, using bump maps etc. to increase apparent model complexity without unmanageable polygon counts, compressing textures to minimise memory bandwidth, and loads more. Raytracers mostly model using rays projected from the "camera", then add a few from the light source to achieve some extra realism; they limit the number of times a ray will bounce or split; they simplify geometry and optical physics. The list is endless.
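The "limit the number of times a ray will bounce" cheat is easy to show in miniature. This Python sketch uses a made-up one-number "scene" (a uniform 50%-reflective world) purely to keep it self-contained; the point is the cutoff, not the shading.

```python
# Bounce-limit sketch: a recursive "tracer" that simply stops after
# max_depth reflections instead of following light until it dies out.
# The uniform-reflectivity scene is a stand-in for real geometry.

def trace(ray_energy, reflectivity, depth, max_depth=3):
    """Total light gathered along a ray, cutting recursion at max_depth."""
    surface_light = ray_energy * (1.0 - reflectivity)
    if depth >= max_depth:
        return surface_light  # the cheat: pretend the ray dies here
    bounced = trace(ray_energy * reflectivity, reflectivity,
                    depth + 1, max_depth)
    return surface_light + bounced

# With 50% reflective surfaces, three bounces already capture ~94% of
# the light; following the ray forever would only converge to 1.0.
print(trace(1.0, 0.5, 0, max_depth=3))  # 0.9375
```

Each extra bounce here adds at most half the remaining energy, so the error shrinks geometrically: cutting off at a small depth trades an imperceptible amount of brightness for a massive reduction in rays traced, which is exactly the budget trade-off described above.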
Thx for the info Fod & Mclean.
Sorry, but I think you're mistaken again. Games DO use all the power of the hardware; that's why there are workarounds ("design cheats"), so you can spend hardware power in other areas. Take games like DOOM 3 (people had to get a new PC to play that game), then FEAR, then CoD 3, then you have Flight Sim, Gears of War, Crysis, World in Conflict, etc.
All these games are beautiful and hardware-demanding, so there's no laziness I think, rather great ideas on how to make a huge, heavy game run on most PCs, even the powerful ones.
AND, have you seen any of the reviews of those games from their release years, with benchmarks on super powerful single and SLI video cards with all the options maxed out? You can clearly see how those systems cry trying to get the game above 20-25 fps.
I love the cheats Company of Heroes uses for explosions. They look AWESOME and are really light for the eye candy you get!
I wish Empire: Total War used that sort of explosion for their artillery. It would fit in PERFECTLY!!
Has anybody else seen the nice explosions in CoH? Do you think E:TW's explosions are lacking that sense of destruction??
ps: this forum uses excel's palette.. i feel at home!