Polygon Count
Polygon (Quad) vs Triangle Count
When modelling an object, artists deal with polygons. Usually the polygons are quads (four-edged polygons), which makes modelling and problem-solving easier. Game engines, however, support triangles instead. Many engines can automatically convert quads into triangles, but it is best to do the conversion earlier, as the last step in the modelling software, for two reasons:
- some triangles should be converted manually for a better animation outcome
- the artist can then see the polygon count the object will have when transferred to a game (and decide whether or not to reduce it further)
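The automatic quad-to-triangle conversion that engines perform can be sketched as a simple diagonal split. This is an illustrative sketch (the function name is hypothetical, and real tools also offer alternative split methods):

```python
def triangulate_quads(quads):
    """Split each four-vertex quad into two triangles.

    Each quad is a tuple of four vertex indices (a, b, c, d);
    splitting along the a-c diagonal yields (a, b, c) and (a, c, d).
    """
    triangles = []
    for a, b, c, d in quads:
        triangles.append((a, b, c))
        triangles.append((a, c, d))
    return triangles

# One quad becomes two triangles, so a quad mesh's triangle
# count is roughly twice its quad count.
print(triangulate_quads([(0, 1, 2, 3)]))  # [(0, 1, 2), (0, 2, 3)]
```

Doing the split manually lets the artist choose the other diagonal (b-d) where it deforms better during animation.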
Triangles vs Vertices
Vertex count is the most important factor affecting game performance and memory use. The vertex count is affected by UVs, shading and smoothing, and it does not equal the triangle count. However, the vertex count is often overlooked, as the majority rely on the triangle count alone as the measurement of performance.
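A cube makes the gap between the two counts concrete. The figures below are standard cube geometry, not taken from any particular engine:

```python
# A cube illustrates why vertex count != triangle count.
corner_positions = 8      # geometric corners of the cube
triangles = 12            # 6 quad faces * 2 triangles each
faces = 6
verts_per_face = 4

# With hard (unsmoothed) edges and a separate UV island per face,
# no vertex can be shared between faces: each face corner needs its
# own normal and UV coordinates, so the GPU stores them separately.
gpu_vertices = faces * verts_per_face
print(gpu_vertices)  # 24 -- three times the 8 corner positions
```

The same 12-triangle cube can therefore cost anywhere from 8 to 24 vertices depending on its UVs and smoothing, which is why triangle count alone is a misleading performance measure.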
File Size
Rendering Time
The term rendering refers to the calculations performed by a 3D software package’s render engine to translate the scene from a mathematical approximation to a finalized 2D image. During the process, the entire scene’s spatial, textural, and lighting information are combined to determine the color value of each pixel in the flattened image.
Rendering Techniques:
There are three major computational techniques used for most rendering. Each has its own set of advantages and disadvantages, making all three viable options in certain situations.
- Scanline (or rasterization): Scanline rendering is used when speed is a necessity, which makes it the technique of choice for real-time rendering and interactive graphics. Instead of rendering an image pixel-by-pixel, scanline renderers compute on a polygon-by-polygon basis.
- Raytracing: In raytracing, for every pixel in the final image, one (or more) rays of light are traced from the camera to the nearest 3D object. The light ray is then passed through a set number of "bounces", which can include reflection or refraction depending on the materials in the 3D scene.
- Radiosity: Unlike raytracing, radiosity is calculated independent of the camera, and is surface oriented rather than pixel-by-pixel. The primary function of radiosity is to more accurately simulate surface color by accounting for indirect illumination (bounced diffuse light).
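The polygon-by-polygon idea behind rasterization can be sketched with a single 2D triangle: rather than tracing a ray for each pixel, the renderer walks over the pixels a polygon covers. This is a minimal illustrative sketch (function name hypothetical, no real scanline optimisations):

```python
def rasterize_triangle(v0, v1, v2, width, height):
    """Return the pixels covered by a counter-clockwise 2D triangle."""
    def edge(a, b, p):
        # Signed area test: non-negative when p lies to the left of edge a->b.
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

    covered = []
    for y in range(height):
        for x in range(width):
            p = (x + 0.5, y + 0.5)   # sample at pixel centers
            if edge(v0, v1, p) >= 0 and edge(v1, v2, p) >= 0 and edge(v2, v0, p) >= 0:
                covered.append((x, y))
    return covered

# A right triangle spanning a 4x4 pixel grid:
print(len(rasterize_triangle((0, 0), (4, 0), (0, 4), 4, 4)))  # 10
```

Because each polygon touches only the pixels it covers, the cost scales with on-screen geometry rather than with per-pixel light transport, which is why this approach dominates real-time graphics.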
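The first step of tracing a ray from the camera to the nearest object is a ray-geometry intersection test. A minimal sketch for a sphere, solving the standard quadratic (function name hypothetical; a real raytracer would then shade the hit point and spawn bounce rays):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the nearest sphere hit, or None.

    Solves |origin + t*direction - center|^2 = radius^2 for t.
    """
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                      # ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None          # ignore hits behind the camera

# Camera at the origin looking down -z, unit sphere centered at z = -5:
print(ray_sphere_hit((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0))  # 4.0
```

Running this test once (or more) per pixel, then again for every bounce, is what makes raytracing so much more expensive than rasterization.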
http://3d.about.com/od/3d-101-The-Basics/a/Rendering-Finalizing-The-3d-Image.htm
Real-Time
Real-time rendering is used in interactive media, where the user affects the flow of gameplay, so the image has to be constantly rendered and updated. Usually the frame is rendered 30-60 times per second (the frame rate, measured in FPS), but the number can be greater or lesser. In order to keep the image acceptable and fairly pleasant (depending on the graphics), all the in-game elements must occupy as little memory as possible, otherwise the image 'lags'. Apart from that, the user's computer must also be able to hold and display the game image.
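The frame rate translates directly into a per-frame time budget that every model, texture and effect must fit inside:

```python
# A frame rate of N FPS leaves 1000/N milliseconds to render each frame;
# exceeding that budget is what the player perceives as 'lag'.
for fps in (30, 60):
    budget_ms = 1000.0 / fps
    print(f"{fps} FPS -> {budget_ms:.2f} ms per frame")
# 30 FPS -> 33.33 ms per frame
# 60 FPS -> 16.67 ms per frame
```

Doubling the frame rate halves the budget, which is why 60 FPS games demand far leaner assets than 30 FPS ones.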
Non-Real Time
This type of rendering is used in videos, films and any sort of animation. Without strict time restrictions it allows developers to create much more realistic and detailed imagery. After all, the user is only going to watch exactly what has been rendered and does not make choices affecting the flow of the action. Depending on the animation's complexity, a single frame can take from a few seconds to several days(!) to render. After all the scenes are rendered, they are displayed at the chosen frame rate.
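A quick back-of-the-envelope calculation shows how per-frame render times add up. The figures below are hypothetical, chosen only for illustration:

```python
# Hypothetical figures: a 90-second animation played back at 24 FPS,
# where each frame takes 10 minutes to render on one machine.
frames = 90 * 24                          # 2160 frames in total
minutes_per_frame = 10
total_hours = frames * minutes_per_frame / 60
print(total_hours)  # 360.0 hours, i.e. 15 days on a single machine
```

This is why studios spread non-real-time rendering across large render farms: the per-frame cost is fixed, but the frames can be rendered in parallel.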