Doom 3 Technology
article by Zaldron

There's something about iD Software's games that truly shines... the technology behind them. Game after game, iD Software has managed to marvel us with amazing graphics and superior engines. I'll try to bring some light to those who know little to nothing about engines, and discuss what improvements the DooM 3 engine will bring to the gaming scene.

I'll try to make the text as clear as possible. I'll gladly answer any questions or doubts...


First of all, remember this simple fact: this is a fan-made article. I don't have inside contacts at iD Software, and I never went to the MacWorld Expo. This article is half based on facts, provided by the tech trailer and Carmack's comments, while the other half is based on my speculation. Such speculation comes from a mix of rumors, reading between the lines of the few interviews iD Software has given, and plain old fantasy.
Enough said, let's continue.

Since the early days of engine creation, there are 2 main features programmers focus on. While it's important that every engine possess compatibility, modularity and ease of use, the key aspects are looks and power.
Looks are probably the most exploited feature in today's engines. Thanks to the gaming scene, video card and processor manufacturers have biased their products' features and goals toward tools that let programmers and artists expand the visual appeal of a game.
Power, instead, is more related to the CPU. Fancy graphics aren't everything for an engine, contrary to what many people believe. The flexibility with which the engine manipulates objects and events is even more important than the general looks. A door can be textured in super-high quality, illuminated with colored lights, etc... but if it can't be opened, it's worthless...

You might ask... what was the point of writing such an introduction? As most experienced gamers out there know (by experienced understand "older"), this new engine means a new breakthrough for the gaming scene. But that's material for the next chapter...

iD and technology through history

Back when Wolfenstein was released, people couldn't imagine a more accurate representation of reality. It had it all, from perspective to continuity, something that many old first-person games lacked (for example, RPGs like Ishar).

After this winning title, iD Software released DooM (among other things which aren't as important as this one). DooM, while not using any polygons at all, created a world so flexible and graphically stunning that people complained of "motion sickness". The engine was fast, powerful and especially flexible. The whole meaning of "engine" was practically forged with the release of this game. It's actually impossible to count how many user-made DooM modifications (from simple maps up to full-fledged TCs) have been made, and it has by far the easiest set of tools.
The game won many awards for its superior gameplay and marvelous tech, but what few people noticed is that iD left a giant footprint in the PC's history. People ACTUALLY moved from their 286s and 386s to a shiny new 486SX/DX just to play the game. It's obvious that since then iD has pushed the hardware's direction their own way.

Quake did the same. Although some people refused to move to this new franchise and kept playing DooM ][, it's easy to notice that Quake made such an impact on the way games are made that today's most famous games among hardcore gamers, the FPSs, follow the same basic rules as this title.
Quake was the first step into real 3D worlds. Instead of the texture-skewing methods used in DooM, this game was based on textured polygons. But the key feature wasn't the slopes, or the multi-leveled floors, or the dynamic objects...

It was the lighting. Shadows were everywhere in Quake's macabre worlds. To trick the eye into believing that a flat surface is actually a virtual environment, lighting must be added in order to give the brain an illusion of perspective. But all of these marvels came at a price. Those 486s that ran DooM like a dream suddenly became incapable of holding nice framerates in Quake environments. People upgraded to Pentium-class chips (I know I did) for this game, and it turned out to be a wise move, since from that precise moment until now developers have been struggling to make each game's visuals as cutting-edge as possible.

Although nothing's perfect. There were 2 critical drawbacks with this engine. One was the color palette. Most people complained about Quake's dull brown, gray, blue and beige colors, without knowing that this was the only solution for achieving lighting over surfaces. Since each material needs a wide variety of reserved colors for displaying darkened/brightened portions of the surface, the game was limited to just a few colors and all their respective shades. This was when 8-bit displays (just 256 colors) ruled the Earth, and higher color depths were impossible for mainstream video equipment.
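To see why 256 colors force this tradeoff, here's a rough sketch (my own illustration, not iD's actual data format): every base color needs its own ramp of pre-darkened shades, and all the ramps must fit in the one 256-entry palette.

```python
# Toy model of the 8-bit palette tradeoff: shade ramps eat palette entries.
PALETTE_SIZE = 256
SHADES_PER_COLOR = 32          # shade steps reserved per material color

base_colors = PALETTE_SIZE // SHADES_PER_COLOR   # only 8 distinct hues fit!

def build_ramp(r, g, b, steps=SHADES_PER_COLOR):
    """Pre-compute the darkened versions of one base color."""
    return [(r * i // (steps - 1), g * i // (steps - 1), b * i // (steps - 1))
            for i in range(steps)]

brown_ramp = build_ramp(139, 90, 43)
print(base_colors)          # 8
print(brown_ramp[0])        # fully dark: (0, 0, 0)
print(brown_ramp[-1])       # full brightness: (139, 90, 43)
```

With 32 shade levels per material, the whole game gets only 8 base hues, which is exactly the brown/gray/blue/beige situation described above.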

The other problem is even more profound, and its solution has only recently been discovered: static environments. Once the compiling tools of your favorite map editor finish calculating light/shadow placement, along with the visibility calcs (which prevent the game from drawing useless polys no one's going to see at certain moments), the results are stored in the classic .BSP format, unable to be modified.

Think about it: the compiling tools often take hours to finish the process, and this is done assuming the level keeps the same shape as the one present in the map editor. Imagine the consequences of altering the world in real time. Move a wall, and those visibility calcs might not be useful anymore. Move a light, or place something hiding it, and all the environment's shadows would have to be modified. Since each "state" of the world is as complex as the first, primal one, you'd be forced to suffer all those hours of compiling again, every time you modify something. Not precisely my definition of "fun".

But you'll wonder: then how do those doors, lifts, destructible walls and rotating gears work? The solution was simple, but painful. The things capable of altering the level's shape are ignored in the "compiling stage". The cost of this fix? Objects that move/rotate/change would have either:

1) Shadows that almost never match the realistic solution.
2) No shadows at all.

It's been like that for years now. All the games using the Quake, Unreal and LithTech engines suffer from the same "staticness". No major changes to the world's shape, or it will run too slow. No changes in the light conditions, except flickering and other "intensity" effects.

One of the problems is solved. After Quake, iD embraced OpenGL, a practically unknown API back then, and now (together with DirectX) the mainstream choice for graphics programming. Quake 2 was greatly enhanced when activating the OpenGL renderer. The brown was replaced by a myriad of oranges, blues, reds and greens. Quake ]I[ expanded the engine's looks by adding shaders, curved surfaces and high-res textures.

Each upgrade of the engine has been big. There's a heavy difference in average polycounts, texture sizes and included features, not to mention optimization and bug fixes. But for the first time in many years, iD is willing to make the same jump they did when making DooM, and repeated with the original Quake.

The New Engine

I think the trailer showing GeForce3 performance, released at the last MacWorld Expo, gives a pretty solid clue of what we'll be seeing in a couple of years. A detailed analysis of the video (released in AVI, ASF and MPG versions, all with the same content) can be found a couple of pages below. I'll break this chapter down into the best features this engine will come with.

Dynamic Lighting :
You might be confused by the "dynamic lighting" of games like Quake and Unreal. That was a completely different thing, not to say inferior to the new technique. Quake and similar engines had special lights that followed luminous objects, but these were just highlights over the surface of the environment; no new shadows were cast, no changes were made to the lightmaps.

A lightmap is just a little picture made of gray shades, which is combined with the textures on each polygon (via a technique called "multiply" multi-texturing). These lightmaps represent the brighter and darker areas of the world, and they're the result of the painful and slow level compile.
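The "multiply" combine can be sketched in a couple of lines (a minimal illustration with names of my own, not engine code): each lightmap texel is a gray value that scales the texture's RGB.

```python
# "Multiply" multi-texturing: lightmap gray value scales the texture color.
def apply_lightmap(texel_rgb, light):
    """light = 0.0 (black shadow) .. 1.0 (fully lit)."""
    return tuple(int(c * light) for c in texel_rgb)

wall = (200, 160, 120)
print(apply_lightmap(wall, 1.0))   # fully lit: unchanged, (200, 160, 120)
print(apply_lightmap(wall, 0.5))   # half-lit: (100, 80, 60)
print(apply_lightmap(wall, 0.0))   # in shadow: (0, 0, 0)
```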

The new engine completely discards the use of lightmaps. The new renderer, which needs a very powerful GPU with specific features supported, is able to calculate the light value of every pixel on the screen at each frame. This means the lighting info is never stored. Level file sizes are much smaller now, there's no compiling stage to achieve complete lighting, and the way lights are rendered depends on the world's current condition.

The DooM 3 engine will pull off wonders that are otherwise only experienced in reality and a few games (which use completely different techniques and don't match this quality). The moody lighting actually enhances the gameplay, giving developers a lot of tools to scare the player with tricky shadows. You can see the new renderer's power by looking at the trailer, and I can give lots of examples. Imagine the shadow of your character wandering around you as the light sources change position, stretching across floor, walls, ceiling and objects. Imagine your enemies casting shadows that reveal their positions even though you can't see them. You'll catch strange movements in the corner of your eye, only to realize it was just your own shadow. Chairs, tables and computers will cast their respective shadows, which may look like dangerous things (your mind fooled into seeing what isn't there). Stand next to the only lamp in the room, and your shadow will engulf the whole room. Approach a torch, and you'll see the shadows tremble and flicker.
In conclusion, the immersiveness of the game is boosted greatly. It will mean a whole new level of terror, something that fits perfectly with the DooM franchise.

There are some problems with the new system, though nothing that really affects the playability of a game like DooM. While the RAD compilers of Quake 2 and 3 included radiosity in their calculations, this engine will not have that kind of lighting, mostly because of a tech limitation. In real life, a light source casts an unlimited number of light rays in every possible direction. Where those rays collide with the surrounding matter, part of the light is absorbed and the rest reflected. That way, a lamp can uniformly light up a room, without strong, pitch-black shadows. Since a light ray bounces many times off most materials, you can imagine the tremendous boost that this "indirect lighting", as it's called, provides to the light level of your surroundings. This time, the engine won't be able to pull off bounced light, which means that wherever no light ray reaches, the engine will draw complete blackness. This doesn't really reflect badly on a game that's supposed to be scary.

One would think the GeForce 3 is some kind of demi-god, allowing us to do things developers could only dream about a few years ago. However, there's still a problem with the new tech, an issue that graphics programmers like John Carmack have been attacking for years: 32-bit color precision.

One would ask... why do we need more depth? Isn't 32 bits all the human eye can see?
That's not entirely true. First of all, while today's graphics APIs use 32 as the maximum depth for gaming, the actual color count is still 16,777,216. If you're not familiar with the number, let me clarify: it's 2 to the 24th power, commonly referred to as "24 bits". Why then do we spend extra processor and bus power to handle these unseen 8 bits, and why does Carmack NEED more?

The answer is mainly a programming/performance issue. A video card pulls off frame after frame of the game by reading the structural data of the world, calculating the perspective, filling the triangles with the right textures and pasting the result (the framebuffer) onto the screen. You'll notice that each frame involves a couple of steps, and that was just a simplified example; normal games do what we call 8 or more "passes" before anything appears on screen.

Each card has its own set of advantages and disadvantages. Some are able to combine 3 textures in a row over a triangle before rendering; others aren't. Imagine you're rendering a triangle with 3 textures: the main one, the lightmap and a "detail texture" (explained later).
A card with 3 textures per cycle would run this game nicely, but one with, for example, 2 textures per cycle will be greatly affected. Since it hasn't finished the scene by the time it has to render the image to screen, it must carry the results over into the next cycle, just to apply that remaining texture pass and finish the rendering. Leaving aside stuff like AI, math, sound and the rest, you can imagine the tremendous framedrop:

1 rendering per cycle = 60 fps (hypothetical, for example's sake)
0.5 renderings per cycle = 1 rendering in 2 cycles = 30 fps
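The arithmetic above generalizes neatly (hypothetical numbers only, as in the example; the function name is my own):

```python
import math

# fps drops in proportion to how many cycles the card needs to apply
# all the texture layers a surface requires.
def effective_fps(base_fps, layers_needed, layers_per_cycle):
    cycles = math.ceil(layers_needed / layers_per_cycle)
    return base_fps / cycles

print(effective_fps(60, 3, 3))  # 3-unit card, one cycle: 60.0 fps
print(effective_fps(60, 3, 2))  # 2-unit card needs 2 cycles: 30.0 fps
```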

Going back to the 32-bit thing. Those 8 extra bits are used to store info known as the "alpha channel". This channel, unlike the RGB ones, is not visible; it specifies how opaque each pixel is, a great tool for creating things like decals, special effects, etc. Since this data already comes linked to the image, you don't need to spend a whole new pass retrieving it from RAM.
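To make the alpha idea concrete, here's a tiny sketch (names and numbers my own) of the classic "over" blend that decals rely on, with the alpha value packed right next to the RGB data:

```python
# "Over" blending: composite a semi-transparent source pixel onto a background.
def blend_over(src_rgba, dst_rgb):
    r, g, b, a = src_rgba          # a in [0, 1]: 0 = transparent, 1 = opaque
    return tuple(int(s * a + d * (1 - a)) for s, d in zip((r, g, b), dst_rgb))

bullet_mark = (0, 0, 0, 0.75)      # mostly-opaque black decal
wall_pixel = (200, 180, 160)
print(blend_over(bullet_mark, wall_pixel))   # (50, 45, 40)
```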
A game like DooM3 is using more than 20 texture passes right now, which means the game will be almost unplayable on older GeForces and feature-reduced on 3dfx and/or ATi cards.

This explanation of passes was needed in order to talk about precision. Imagine you're making 20 passes over a pixel. Why? It doesn't matter right now, but Carmack's doing it. Now, the only thing you can do to a pixel is change its color. You have the input color, which is based on the texture you're using for that polygon. This data is commonly referred to as RGB values, often expressed as 3 numbers ranging from 0 to 255 (the Red, Green and Blue components). In OpenGL, however, these 256 variations of each tone are expressed as a real number between 0 and 1 (this was done for greater speed in calculations).

The math involved in pixel lighting is too complex to serve as an example, so to show the precision problem I'll use a simple equation.
Say that for some reason I want to halve the brightness of a pixel several times, and each halving is a pass. We start with a value of 1, and this is the result after 5 passes:

1 : 1 / 2 = 0.5
2 : 0.5 / 2 = 0.25
3 : 0.25 / 2 = 0.125
4 : 0.125 / 2 = 0.0625
5 : 0.0625 / 2 = 0.03125

0.03125 would be the true result of the calculation. However, these RGB values don't have that much precision. Imagine the system only keeps 2 digits after the decimal point:

1 : 1 / 2 = 0.5
2 : 0.5 / 2 = 0.25
3 : 0.25 / 2 = 0.12
4 : 0.12 / 2 = 0.06
5 : 0.06 / 2 = 0.03

0.03 is quite far from 0.03125, and that error will be visible on screen. Since many different starting values end up with the same result of 0.03, many pixels that should form a gradient of shades will share a single color. This is called "banding", because the light intensity on a surface changes every couple of pixels, creating concentric "rings" of different intensity. This won't be too noticeable in DooM3; Carmack's doing everything he can to diminish the effect, although it would be awfully noticeable when several lights are placed at almost the same coords (for more info check Carmack's .plan file).
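The two tables above can be reproduced in code (rounding to 2 decimal places here is an exaggerated stand-in for limited 8-bit channel precision):

```python
# Repeatedly halve a brightness value, optionally rounding after each pass
# to simulate limited per-channel precision.
def halve(value, passes, digits=None):
    for _ in range(passes):
        value = value / 2
        if digits is not None:
            value = round(value, digits)
    return value

print(halve(1.0, 5))            # full precision: 0.03125
print(halve(1.0, 5, digits=2))  # truncated precision: 0.03 -- banding error
```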
In conclusion, what good would 64-bit precision do? With the extra bits, developers can store the rest of the digits involved in the operation, the fractional part of the number known as the "mantissa". This little inconvenience may sound problematic, but the whole new world that dynamic lighting opens up for developers, modders and gamers will be so dramatic that the problem won't be a priority for several years.

Bumpmaps :
If there's something fairly obvious in today's 3D games, it's that we're far away from having realistic-looking models. While artists do amazing stuff with the tools they have, a model just can't compare to the colossal detail of real-world objects. Humans are quite an example; think how far we are from modelling individual hairs, veins and cracks in the skin. And that's just the beginning; molecules are pure nightmare for now.

Looking at the tech trailer, you can see there was a huge improvement in polycounts, partly because of the raw power of the upcoming tech, partly because of the better use it implies for dynamic lighting.
And while I can say the polycounts are bigger, they aren't as big as one would initially think, and that's because of bumpmaps.

A bumpmap is just a pass over a texture. Of the many kinds of functions one can perform with multi-texturing, bumpmaps are among the hardest. They require fairly complex calculations, and need a couple of passes before output.
Bumpmaps are sometimes mistaken for "detail textures". A detail texture is a gray-scale image that tiles fairly well and is rendered in a special way over other textures. The purpose of these textures is to add detail to the world. Even the most beautiful and large textures look awful when you're glued to the wall, because only a couple of the original pixels are seen. This is fixed by mixing the image with these detail textures, which are scaled down to look less stretched up close. For example, one can enhance a wall's look by adding an image that gives roughness to it.
The way this picture is combined is fairly easy; it's the same as lightmaps. Darker pixels mean darker output pixels, while brighter ones remain unchanged.
However, this is not realistic, since it adds a fixed shade over the polygon (simulating detail) when the light positions could be demanding a different kind of shade. These pictures remain unchanged no matter where the lights are, which makes a wall look "drawn over with a pencil" instead of deformed.
A bumpmap takes a whole new approach: instead of just mixing the pictures, the map is used as a tool to assist an internal lighting calculation. A texture gives no clue about which parts are rough, soft, embossed or cracked; it's in the brain where the illusion is created. A bumpmap gives the hardware the data needed to calculate the optimal result.

The bumpmap is just a gray-scale image, like the detail texture. But in this case, the key is NOT to give any idea of the lighting conditions, but instead to make a heightmap of the texture it will be applied to. Brighter pixels mean "embossed out" pixels, darker pixels mean "embossed in" pixels, while mid-gray pixels (RGB values set to 127) mean "unchanged". With every pixel having an imaginary new coord, the lighting process mentioned above now bases the result of its operations on this data. In this case the example images are far more informative :

As you can see, the realistic-looking textures are breathtaking, and since this is calculated in real time, a dynamic light produces a new result every frame, as shown in this animated .GIF.

Lighting without bumpmaps

Lighting with bumpmaps
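The heightmap idea can be sketched in a few lines (a toy model of my own, not the engine's actual math): derive a surface normal from neighboring height samples, then do a simple Lambert (N·L) shade. Real hardware does this per pixel, per frame.

```python
# Toy bumpmap lighting: heightmap slope -> normal -> N.L diffuse term.
def normalize(v):
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def bump_shade(heightmap, x, y, light_dir):
    """heightmap: 2D list of gray values, 0 (embossed in) .. 255 (embossed out)."""
    dx = heightmap[y][x + 1] - heightmap[y][x - 1]   # slope along x
    dy = heightmap[y + 1][x] - heightmap[y - 1][x]   # slope along y
    normal = normalize((-dx, -dy, 255.0))
    l = normalize(light_dir)
    return max(0.0, sum(n * c for n, c in zip(normal, l)))  # N.L, clamped

flat = [[127] * 3 for _ in range(3)]       # all mid-gray = flat surface
print(bump_shade(flat, 1, 1, (0, 0, 1)))   # light head-on, flat pixel: 1.0
```

Move the light direction and the same heightmap yields a different shade, which is exactly why a fixed detail texture can't fake this.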
Bumpmaps will imply a whole new way of drawing textures. While today skinners and texture designers put all the shading of the little details in the picture itself, now it must be done in a separate one. Using today's textures in this engine would look flat and boring. This will make textures harder to draw, but not by that much. Since most textures are shaded using this same technique in drawing and modelling programs, the data can be easily extracted to a new file (for artists, think of bumpmaps in 3D packages and special layers in 2D drawing programs).

If you want examples of this feature in a game like DooM3, I recommend you look at the trailer. Although it's blurry, you can clearly see the bumpmaps in action. Anyway, here are a couple of examples: tiled floors and brick walls that look like they're made of many little objects, realistic keys on the computer keyboards, the rough skin of a demon, the serial number embossed on the weapon you're carrying, realistic bullet marks on walls, etc.

These are the 2 main features in the visual department. There's something interesting to note here. Since the rendering is pixel-based, the game is heavily influenced by the resolution you're playing at.
As a rough example, imagine you're running 640x480 and the game's spitting out 80 fps. At 640x480 we have 307,200 pixels being calculated before drawing the frame. At 1024x768 we have 786,432 pixels, which is more than double, resulting in a framerate of 31.25.
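That fill-rate arithmetic, as a one-liner (same hypothetical numbers as above; the function name is my own):

```python
# With a purely pixel-bound renderer, fps scales inversely with pixel count.
def scaled_fps(base_fps, base_res, new_res):
    base_pixels = base_res[0] * base_res[1]
    new_pixels = new_res[0] * new_res[1]
    return base_fps * base_pixels / new_pixels

print(scaled_fps(80, (640, 480), (1024, 768)))   # 31.25
```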

Looks like we'll be cutting back on resolutions just the way we did when Quake came out, at least for now.

I'd like to touch on the minor new features that will be present in this game, or at least the ones I have proof of. One of them is highlights. Unlike in reality, surfaces in games tend to be diffuse, while all around you in real life you can see shiny materials. This has been addressed with reflections cast over the surface, and while that feature will be present in the game, highlights are more important. Highlights are basically used to display shiny surfaces, giving them zones known as "hotspots" where you can see a bright reflection of the surrounding lights. By modifying the intensity and size of these hotspots, id is able to give every surface in the world lifelike behavior under lighting. Plastics, glass, metals, clays, stones, marbles, you name it.
This effect has only one drawback: it's vertex-based. This means the effect would look ugly on flat, low-polycount surfaces. Since it's being used on curved surfaces, this shouldn't raise much of a problem.
From my last statement, you can see that stuff like Quake3's curved surfaces is still here. These objects, usually called NURBS (strictly speaking, Quake3's curves are Bézier patches), are mathematical approximations of theoretical perfect curves. Since there's no fixed data in this engine, you'll be able to modify the amount of detail in these curves to match your needs (by adjusting the polycount of each curve).
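Tunable curve detail can be sketched like this (my own illustration): a quadratic Bézier curve, the kind of primitive behind Quake 3-style curved surfaces, evaluated at a user-chosen number of segments. More segments means more polys and a smoother curve.

```python
# Evaluate a quadratic Bezier curve at a chosen tessellation level.
def quadratic_bezier(p0, p1, p2, segments):
    pts = []
    for i in range(segments + 1):
        t = i / segments
        pts.append(tuple((1 - t) ** 2 * a + 2 * (1 - t) * t * b + t ** 2 * c
                         for a, b, c in zip(p0, p1, p2)))
    return pts

coarse = quadratic_bezier((0, 0), (1, 2), (2, 0), segments=2)
fine = quadratic_bezier((0, 0), (1, 2), (2, 0), segments=8)
print(len(coarse), len(fine))   # 3 9
print(coarse[1])                # top of the arc at t=0.5: (1.0, 1.0)
```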

New engine flexibility features :
Not much is known about the engine's inner workings. The only official info so far is that the key to the engine will be easy editing. id has linked the new incarnation of their editor with the engine itself, allowing them to make fast real-time previews of the map while creating it. Since there's no compiling stage, every change you make to the world is reflected on the preview screens in real time. I touch on a few possible features in the frame-by-frame, located precisely where each feature is most visible.
Most things here remain secret until the game SDK (which, according to id, should be included with the game) hits the masses.
One could ask a performance question: how strong is the GeForce3? The video is a blurry little ASF which only displays 30 fps. The output at the MacWorld Expo came from a projector, which makes it harder to guess. My only clue comes from the desktop screenshot from Jim Dosé, where you can see Visual C++ running the compiled game in a window. The game's startup screen is 800x600, so we can assume this was the resolution used at the show.
Another thing to mention is the sound engine. A game with such scary visuals demands killer audio. Graeme Devine is handling the sound system, and if it's EAX-based, we can count on features like occlusion, reverb, echoes and positioning.
The game world is dynamic, but that doesn't mean it will be fully destructible like Red Faction. It means there are no limits on how, where, or how many brushes behave dynamically. Map designers will need to learn a couple of new tools based on the Portal visibility system in order to keep the framerate up in certain areas.

The portal tech is the replacement for the VIS calculations. Instead of storing a big database of which triangles are visible from which positions, the Portal system breaks the playable volume of the game into convex areas, and then does some simple raycasting to find which convex areas are visible. Those that are have their faces drawn. Things that do NOT shape the world, like rocks, chairs or tables, should be marked as "detail" to stop the engine from slicing the map into even more volumes, which would be slower.
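A heavily simplified sketch of the idea (my own toy model, not id's code): convex areas form a graph whose edges are portals, and the engine floods outward from the viewer's area, but only through portals whose visibility test passed.

```python
# Flood through "open" portals to collect the convex areas worth drawing.
def visible_areas(start, portals, portal_open):
    """portals: {area: [(neighbor_area, portal_id), ...]}"""
    seen = {start}
    stack = [start]
    while stack:
        area = stack.pop()
        for neighbor, pid in portals.get(area, []):
            if portal_open(pid) and neighbor not in seen:
                seen.add(neighbor)
                stack.append(neighbor)
    return seen

graph = {"hall": [("room_a", 1), ("room_b", 2)], "room_b": [("closet", 3)]}
open_now = {1}   # only portal 1 passed the raycast/visibility test
print(sorted(visible_areas("hall", graph, open_now.__contains__)))
# ['hall', 'room_a'] -- room_b and the closet behind it are never considered
```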

One last thing. After seeing the game's polycounts and realizing the sheer complexity behind the new renderer, I'll say that frag feasts like the ones you find in the older DooMs will not be present. The game will probably rely on cunning AI, since now there's CPU power to spare for this feature. Most people think DooM is about being in the middle of hundreds of monsters, while the real intention of the id guys was to create a scary, fast-paced game.

Frame-by-frame analysis

While this trailer was released in several formats, all the versions contain almost the same detail. Each scene shown is explained in its own paragraph, with emphasis on the features it best demonstrates. This frame-by-frame is a revised version of the one found on the Doomworld message boards, in the DooM3 forum. Nothing more to say, here we go :

Nothing much to say here; the lighting looks great, but then again, not much to say. My only thought is the presence of shaders again in DooM3. A shader is a piece of code that combines several images (standard pictures or camera outputs) using any multi-texturing process handled by OpenGL. Besides mixing pictures, shader code is able to transform the different layers in some way (warp, move, rotate, scale) in order to achieve special effects. It's worth mentioning that shaders can also alter geometry, as seen on the animated water surfaces of Q3A.
As everyone can see here, the intense blue id logo remains at max brightness even though the thing is rotating under a distorted light. I think there's a reflection map there too, so I have enough clues to presume that some sort of alpha version of the shader code is in. The special effects for the lights are probably handled by a shader, since, for example, building a grid in front of a light just to achieve that cool "stripes of light" effect is plain ridiculous.
I'll say projector maps are the feature used here. Just imagine the possibilities of projecting pictures/shaders with a light (the same way a projector displays images). For a familiar example, take Duke3D. Remember E1M1? The cinema? The projector that "cast" the movie onto the screen can now be done in a realistic way. Put an enemy in front of the rays, and his surface will be colored by the AVI; put yourself in front of the projector's eye, and the screen will go blank. You can even put your hand in front and say hi. Think about the possibilities...
The first thing I want to note is the floor. Look at those tiles, how well they behave under those light conditions. I can tell you, that floor is as flat as Quake1/2/3 floors. That's how the miracle of bumpmaps works...
As Carmack points out, this is the first time he can do realistic lighting across every surface of the world without using fake methods.
In the Quake days we used to have :

# For architecture lighting : Lightmaps compiled inside the BSP
# For model lighting : Per-vertex Gouraud shading
# For model shadows : Circle decals or scaled polys.

Now everything is handled the same way. Look how the shadows blend perfectly across the beast's surface.
One nice detail to mention is how the monitors remain unaffected by the creature's shadow. While the corners of the monitors are darkened by the creature's head, the screens themselves remain unchanged. Another use of shaders.

This looks pretty much like Seneca Menard's "iD heart" model from the Team Arena intro movie. There are 2 important matters to discuss. First, check the new "average polycount". There were days when 12 polys shaped a cubic room, while now there are 10,000+ polys JUST FOR A BRIDGE!!! Most of the kickass detail here is performed by bumpmaps instead of actual geometry; if the model impressed you, now think again about what can be done with such a marvelous feature.
Here's where Graeme Devine's comments about dynamic worlds come true.
In the Quake engine, dynamic brushes are handled in an effective but strict system. You group some convex brushes and tie them to an entity. This entity can move or rotate the brushes under certain triggers. If you need a special axis to rotate the stuff around, you create a "ghost" brush using the -Origin- texture. The center of this brush (which is not rendered in the game) becomes the center of rotation.
Look at what we have now: a brush that moves down, rotating with acceleration, and suddenly a PART of the group rotates around a completely different axis.
My assumption is: there are no more brush entities as we know them.
And what are they using now? There are many ways to code this feature, but I suppose they're going for controllers. A controller is the behavior of a certain group of objects. These behaviors are rotate, move and (probably) scale. After setting up the triggering logic, you make the animation. How, you'll ask?
Imagine a time segment where you pick individual frames and change the parameters.
For example :
I have a simple object, let's say a ceiling-mounted turret. Or a flying camera, whatever. I want the animation to happen in 2 seconds -> 60 frames (not videogame fps, but cinematic time).

1) I jump to frame 20 and set rotateZ 3600°
2) Then I jump to frame 30 and set moveZ -200 and moveX 0
3) After that I go to frame 60 and set rotateZ -90° and moveX 40

The final blend works like this :
The thing goes down 200 units over 1 second at constant speed, while it quickly rotates 10 times. But the object has another rotation order to obey, so the rotation goes slower and slower until it reaches 0° per second, then rotates 90° the other way over 1 second, slowly picking up speed. The object must also move 40 units to the right (X), but at frame 30 there's moveX = 0, so the object only starts moving between frames 30 and 60, gaining speed.

This is how 3DSMAX and LightWave work with object animation. Each group can have sub-groups with their own controllers (like the bridge part that moves separately from the whole object in the movie). The controllers can be very different from each other: Linear controllers don't possess acceleration, Bezier ones do, and Noise ones make the animation all jumpy, like broken mechanics.
OK, that was hard to understand, and harder to explain in words. But when all of you get your hands on the editor, you'll be blessing iD...
More proof that the polycount went nuts. A complex array of normal brushes and bezier patches (hence we have curved surfaces working) under dramatic "stripe" lighting. This is how the game normally looks: all that detail, and all that atmosphere...
At last, this is something game companies have wanted to do since the days of Quake. I saw attempts in Unreal, Quake2 and Quake3, but at last it's working. I'm talking about the "fan casting shadows" experience. Does anyone here remember that Starcraft cinematic where a group of Marines must detonate a Zerg-infested research vessel? Remember when they're all staring at a big fan that cast shadows across their faces, hearing Hydralisk groans? That piece of experience can now be fully translated to a game.
Those tech floors look really good with the bumpmaps on, and if we continue the movie we´ll see a nice alarm-lamp effect. Try to spot the extinguisher as the camera rolls on.
Look at the light casted over the floor and the door. It´s distorted (thanks to a projector map) just like a flashlight spot. If you turn on a flashlight in your home, you won´t  see a perfect round spot, but a irregular one due to the reflection cone and the bulb shape. If DooM3 has a flashlight (and I hope so), you´ll see this effect fully translated. Besides, the flashlight will cast a cone, no more of that crappy circle like in Half-Life or Blood 2. You´ll create new long and dark shadows too.
A comment: what are those things under the fan? They look to me like the new DooMGuy armors (which can be seen later)...
Note that as the camera keeps moving, you'll notice different types of light. Quake 1 had a simple light system built on omni lights: imaginary points that cast light in all directions. Those are back, together with the far creepier projector lights (like flashlights) and directional lights, used for light sources so big (like the sun) that the light rays are almost parallel.
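To make the difference concrete, here's a toy Python sketch of the other two light types. All names and formulas are my own illustration of the general idea, not id's code.

```python
# Omni light: a point radiating in all directions, fading with the
# square of the distance (the classic inverse-square falloff).
def omni_intensity(light_pos, brightness, point):
    d2 = sum((l - p) ** 2 for l, p in zip(light_pos, point))
    return brightness / d2 if d2 > 0 else brightness

# Directional light: the source is so far away the rays are effectively
# parallel, so only the surface orientation matters, never its position.
def directional_intensity(sun_dir, normal):
    s = [-c for c in sun_dir]                       # vector toward the sun
    facing = sum(a * b for a, b in zip(s, normal))  # dot product
    return max(facing, 0.0)                         # back faces get nothing
```

Note the asymmetry: move a wall and the omni light on it changes, but the sunlight doesn't; tilt the wall and it's the other way around.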
There's something more here: highlights. If you look at a freshly waxed floor, you'll see, besides the neat reflection, a big white spot caused by the light. You'll see this effect many times in the movie. Besides, iD has included strength/diffuse parameters for each texture, so you can make realistic plastics, clays, metals, wood, etc.
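That white spot is a specular highlight, and a per-texture strength parameter is what separates dull clay from shiny metal. Here's a toy Phong-style sketch in Python; the names and numbers are assumptions for illustration only, not whatever iD actually uses.

```python
# All vectors are unit-length tuples. 'light' points from the surface
# toward the light, 'view' from the surface toward the eye.

def reflect(light, normal):
    """Mirror the light direction around the surface normal."""
    d = sum(l * n for l, n in zip(light, normal))
    return tuple(2 * d * n - l for l, n in zip(light, normal))

def highlight(light, normal, view, shininess):
    """Phong specular term: bright where the reflected ray lines up with
    the eye. Higher shininess = smaller, sharper white spot (waxed floor);
    lower shininess = a broad dull sheen (clay)."""
    r = reflect(light, normal)
    d = max(sum(a * b for a, b in zip(r, view)), 0.0)
    return d ** shininess
```

Look slightly away from the reflection and a shiny surface goes dark almost instantly, while a dull one fades off gently; that one exponent is doing most of the material's work.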
My first comment: iD, I'm really proud. They discarded the porn-star-like armor of DooM 1&2 and went for a Quake2/Starship Troopers style. Very good. The armor has some gear on the front and back that looks like life-support systems. Check the detail on the faces; those guys have more character than anything I've ever seen. Their eyes glow in the presence of light. The bumpmaps and the high polycount bring out every hole and bump in the skin. Even the shaved head has a bumpmap to simulate the roughness of the short hair. The best part is the foot-level lighting, which makes the faces really creepy...
Look at the talking guy; that marine is probably the main character. I must say he's a really nice alter ego. Watch him speak and move his eyes. That's probably Fred Nilsson's work, the new animator at iD, who comes from a background in CGI film production.
And then, the Zombie, perhaps one of the best ways to remember DooM. This is probably the weakest enemy in the game; imagine what the Bosses and tougher enemies will look like.
Moving lights, highlights, bumpmaps, enemies, NURBS... everything's here and working. The ability to move lights and sounds (as Devine said at QuakeCon2K) is present here. So imagine how cool a rocket flying down that corridor in slow motion will look...
This is clearly a sign that iD's really trying to pull off cool scripting. I don't know what the script system will be like; you'd have to ask a Quake modding programmer. You can even see how the zombie's jaw swings loose as he stares at you. If you look at his groin, you'll see the skin bears what looks like a bite, or a shotgun blast.
Check the yellow glowing eyes, definitely a shader on a model (something there was no proof of before this shot). I'm certainly hoping to play sequences against that enemy in the dark, shooting at his menacing eyes.
Here we have Jim Dosé's new animation system in action: blended skeletal animation, good. Skeletal animation is a really good way to save RAM and to increase the number of movements per character. When you build a skeletal system, you can do it in two ways: normal or blended. Normal is when you make each limb a separate model and group them together to look like a character. This looks crappy, because the joints between the models look terrible. Jim Dosé made the F.A.K.K. 2 animation system, so he has some experience. He built a blending system, which lets a bone system deform a single mesh on the assumption that "the farther a vertex is from the bone's envelope (a surrounding zone that should match the limb dominated by the bone), the less it moves". It's a fairly old technique, it was in Vampire TM for example, but it's the key to achieving smooth, believable animation in professional 3D packages.
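The blending rule above can be sketched in a few lines. This is my own simplified illustration, with bones reduced to 2D translations and every name invented; real skinning uses full bone matrices, but the weighted-average idea is the same.

```python
# Each vertex takes a weighted mix of its bones' movements, with weight
# falling off with distance from the bone's envelope centre.

def envelope_weight(bone_pos, vertex, radius):
    """Closer to the bone's envelope centre = more influence; zero
    influence beyond the envelope radius."""
    d = ((bone_pos[0] - vertex[0]) ** 2 + (bone_pos[1] - vertex[1]) ** 2) ** 0.5
    return max(1.0 - d / radius, 0.0)

def skin_vertex(vertex, bones):
    """bones: list of (bone_pos, radius, offset). The vertex moves by the
    normalized weighted average of the bone offsets, so a vertex near a
    joint gets pulled smoothly by both bones instead of snapping."""
    weights = [envelope_weight(p, vertex, r) for p, r, _ in bones]
    total = sum(weights)
    if total == 0.0:
        return vertex  # outside every envelope: the vertex stays put
    dx = sum(w * o[0] for w, (_, _, o) in zip(weights, bones)) / total
    dy = sum(w * o[1] for w, (_, _, o) in zip(weights, bones)) / total
    return (vertex[0] + dx, vertex[1] + dy)
```

A vertex halfway between a moving bone and a still one travels exactly half the distance, which is the smooth elbow crease you see instead of the old separate-limb seams.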
Back to the id logo; sadly the fun is over, and this will probably be the last in-game shot we'll see for a couple of months. Notice the bumpmaps giving life to the tech below the logo, and to the logo itself (you can see nice details on the bevels).

That's it. I hope I've made a comprehensible article, at least for most of you. Well, only if someone actually reads it, of course.

- Zaldron

Forward them to me, or meet me at ICQ UIN: 18477495
I can also be reached on the forums, which is the best way.