Talk:Normal mapping

Revisiting "Plain Language" explanation

About two years ago, when I first started editing Wikipedia, I added a "lay person's explanation" of normal maps, based on what I saw in HL2. Since then, it has been rephrased, made more technical, had references to what a normal user would experience removed, and then was removed entirely for being redundant. The article has continued to grow and evolve, and that's something I like. However, when I look at it now, just as I did several years ago, I get the feeling that the question "What's that thar file with all the funky colors called "normalmap" there for?" isn't really answered. Thus, I propose re-adding this (slightly changed) paragraph to the article:

In most normal maps, when viewed as color textures, the red channel represents the relief of the material as lit from the right, the green channel the relief as lit from below, and the blue channel the relief as lit from the front (practically, full except on the "slopes").
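
For what it's worth, here is a minimal sketch of the packing convention that produces those colors, assuming the usual tangent-space encoding of a unit normal into 8-bit RGB (the function name and the library choice are mine, not from the article):

    import numpy as np

    def encode_normal(n):
        # Pack a unit normal (x, y, z), each component in [-1, 1],
        # into an 8-bit RGB texel by remapping to [0, 255].
        n = np.asarray(n, dtype=float)
        n = n / np.linalg.norm(n)                 # ensure unit length
        return np.round((n * 0.5 + 0.5) * 255).astype(int)

    print(encode_normal([0, 0, 1]))   # flat surface -> [128 128 255], the usual pale blue
    print(encode_normal([1, 0, 0]))   # facing right -> [255 128 128], reddish

Under this convention the flat areas map to (128, 128, 255), which is why red and green only show up on the "slopes".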

TwelveBaud (what?what'd I do?) 20:37, 20 October 2008 (UTC)

Normal mapping on PS2

Normal mapping was actually achieved on the PS2 by employing some clever math and the texture blending hardware (after Sony had said it couldn't be done). This was pioneered by Morten Mikkelsen, then working at IO Interactive. — Preceding unsigned comment added by 87.52.33.139 (talk) 19:34, 24 August 2011 (UTC)

Misunderstandable calculation in "How it works"?

In the example of calculating the normals, it is said that

({0.3, 0.4, −0.866}/2 + {0.5, 0.5, 0.5}) × 255 = {0.15 + 0.5, 0.2 + 0.5, 0.433 + 0.5} × 255

which is mathematically incorrect: the z coordinate has already been flipped on the right-hand side, which breaks the equality at this point. I would suggest flipping it only at the end.
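
To illustrate the suggestion, a small sketch of the same packing with the z flip applied only as an explicit last step (the vector comes from the example above; everything else, including the code itself, is assumed):

    # Pack the example normal {0.3, 0.4, -0.866} into RGB first,
    # then flip the z (blue) channel as a separate final step,
    # so no equality is silently broken mid-calculation.
    n = [0.3, 0.4, -0.866]
    rgb = [round((c / 2 + 0.5) * 255) for c in n]  # -> [166, 178, 17]
    rgb[2] = 255 - rgb[2]                          # explicit z flip at the end
    print(rgb)                                     # -> [166, 178, 238]

Whether the flip is written as negating z before packing or as 255 − blue afterwards is a storage convention; this only shows one way of keeping the arithmetic honest.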

The whole section is very poor in my opinion. It gives a very specific, yet confusing, explanation of how a normal map might be used in an application. I suggest replacing the section with something more general, which depends on neither surface shading nor normal map storage specifics. Lord Crc (talk) 10:42, 18 November 2011 (UTC)

Texture

Isn't it just a texture? What is the difference between normal mapping and a texture? — Preceding unsigned comment added by 2A01:119F:21D:7900:D59F:E8DC:29EE:E5D (talk) 19:28, 4 May 2017 (UTC)

Interpreting tangent space normal maps

This whole section was full of errors: tangent space normal maps do not tend to anything other than (0.5, 0.5, 1) (the original author was probably thinking of object space normal maps, but even then it depends on the precise UV mapping used), and tangent space normal maps are not vectors to be interpreted in either camera space or UV ("texture") space but, naturally, in tangent space. Tangent space normal maps may point away from the surface normal (again, not the same thing as pointing away from the camera; the camera has nothing to do with any kind of normal map vectors), although that is unlikely. After deleting every statement that was false, there was nothing left, so I deleted the whole heading. 73.83.176.33 (talk) 23:55, 20 June 2020 (UTC)
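
To make the "interpreted in tangent space" point concrete, a minimal sketch of the standard decode-and-rotate step (the TBN construction is the textbook one; the names and sample values are mine):

    import numpy as np

    def tangent_to_world(rgb, tangent, bitangent, normal):
        # Decode an 8-bit tangent-space texel back to [-1, 1] and
        # rotate it into world space with the per-vertex TBN basis;
        # note that the camera appears nowhere in this.
        n_ts = np.array(rgb, dtype=float) / 255.0 * 2.0 - 1.0
        tbn = np.column_stack([tangent, bitangent, normal])
        n_ws = tbn @ n_ts
        return n_ws / np.linalg.norm(n_ws)

    # A "flat" texel (128, 128, 255) reproduces the surface normal
    # (up to 8-bit rounding), which is why whole maps tend toward it:
    print(tangent_to_world((128, 128, 255), (1, 0, 0), (0, 1, 0), (0, 0, 1)))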

Normal Mapping on PS2 Misconception

The section here claims normal mapping is simulated on the PS2 using its VUs. However, that isn't how it's achieved. Technically there are many ways, and the VUs could be used for the effect, much as they were once used for ray tracing in a demonstration (google it). The truth, though, is that they blend vertex colors, or the RGB colors of a lightmap, over the colors of the normal map, then convert to grayscale so that only luminosity values (the lighting) remain; this is then multiplied by the traditional lighting, including the core lighting color, etc.

In fact, in any art program that has an exclusion blend mode, you can throw an RGB lightmap, like the one used in the paper on doing the effect on the PS2, over a normal map image and move it around to see the effect working in real time. Then convert to grayscale and you'll see exactly what I mean. It is per-vertex, per-screen and per-light/poly, versus per-pixel, per-pixel and per-light on screen, because per-screen palette blending blends the colors of every non-overlapping light over the colors of the normal map all at once.
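
For anyone without an art program handy, a minimal sketch of the exclusion-then-grayscale experiment described above (the exclusion formula is the standard per-channel one; the sample colors are made up):

    # Exclusion blend of a light color over a normal-map texel,
    # then conversion to grayscale, all on values in [0, 1].
    def exclusion(a, b):
        # standard exclusion blend: a + b - 2ab per channel
        return [x + y - 2 * x * y for x, y in zip(a, b)]

    def grayscale(rgb):
        r, g, b = rgb
        return 0.299 * r + 0.587 * g + 0.114 * b   # standard luma weights

    texel = (0.5, 0.5, 1.0)    # a "flat" tangent-space normal texel
    light = (1.0, 0.5, 0.5)    # a made-up reddish light color
    print(grayscale(exclusion(light, texel)))      # -> 0.5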

Instead, the PS2 has a color table system, which can be used to make palette FX during full-screen passes, when everything is texels (since the tables work on texels), allowing custom mock palette blending of multiple passes' colors (this is how multiply blending is achieved in SWAT: GST for the PS2). It is similar to the palette effects or palette blending of old DOS 3D games, which used color look-up tables for mock alpha and the like, only this time each color component is indexed and given its own palette, one each for R, G and B, or even A as well, to achieve said results.
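
A minimal sketch of the per-channel indexing described above (the tables themselves are arbitrary; only the structure, one look-up table per component, is the point):

    # Per-channel color look-up: each 8-bit component indexes its own
    # 256-entry table, so a "blend" costs one table fetch per channel.
    lut_r = [round(i * 0.9) for i in range(256)]   # arbitrary example tables
    lut_g = [round(i * 0.8) for i in range(256)]
    lut_b = [round(i * 0.7) for i in range(256)]

    def apply_palettes(rgb):
        r, g, b = rgb
        return (lut_r[r], lut_g[g], lut_b[b])

    print(apply_palettes((200, 200, 200)))         # -> (180, 160, 140)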

It is basically a higher quality RGB version of the same thing, and thus like a normal blend mode, only faster thanks to the indexing, since most of the work is baked into a little table, much as VQ texture decompression outpaces most other compression techniques' decompression by using a look-up table for the decompression process. The idea is similar, and it allows quick custom mock blend modes, which is what makes effects like thermal vision possible, close to how they work with a pixel shader, in Splinter Cell Chaos Theory for the PS2.
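
And, for comparison, a minimal sketch of the VQ decompression look-up mentioned above (the codebook and indices are made up; real codebooks are built offline from the texture):

    # Vector-quantized texture decompression: each stored index fetches
    # a whole precomputed block of texels from a small codebook.
    codebook = [
        [(255, 0, 0)] * 4,    # entry 0: a 2x2 block of red texels
        [(0, 0, 255)] * 4,    # entry 1: a 2x2 block of blue texels
    ]
    indices = [0, 1, 1, 0]    # the compressed image: one index per block

    blocks = [codebook[i] for i in indices]   # decompression is pure look-up
    print(blocks[0][0])                       # -> (255, 0, 0)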

So, the main idea is that pixel shading was an upgrade to palette effects, going from per-screen to per-pixel, and to achieve much of the basics you have to rely on palette blending and/or image buffers. And yes, it is documented that Sony intended all along that SFX be created by blending two or more passes, using VRAM pixel color writing operations, etc. This is where the PS2 becomes like an old school platform, and it is why it was equipped with drawing rates closer to a powerful SGI workstation's, having up to 48 GB/s of pixel color writing bandwidth.

Lastly, Path of Neo itself uses the two-pass variety, with vertex colors, and when all enemy NPCs are gone, even with tons of non-enemy NPCs left over, it always ran at 30 fps, even in big areas. Gameplay is a separate team's work, so gameplay complexity doesn't hinder your graphical potential, and vice versa. The issue was with the Havok engine and how it was adapted to the PS2, according to some people. But PoN shows the efficiency possible on the PS2. Done right, the effect could end up costing the PS2 about as much as the per-pixel version costs the Xbox.

--SilenceoftheHills (talk) 13:23, 12 April 2022 (UTC)