Old 21 August 2015, 04:38   #32
Code Kitten

Join Date: Aug 2015
Location: Montreal/Canadia
Age: 46
Posts: 1,033
Originally Posted by Mrs Beanbag
of course the other way to do it is to pre-calculate all the normals in the object's co-ordinate system, and instead of rotating the normals, transform the camera position by the inverse object transform, and then use that for dot products. That's only 3 multiplies. You can use that also to do light source shading. Rotate the light source instead of the normals, which is one rotation per object and a dot product per face.
In my opinion that is still too much work to do in real time. I would preprocess the objects to produce visibility lists (one bit per face) indexed by viewing angle. This way you only have to compute three dot products per object to determine which list to select, and you immediately know which faces to draw. The memory cost is minimal (nb-faces bits * nb-view-angles / 8 bytes) and the drawing order can probably be inferred similarly (4 bits per face if restricted to 16 faces).
The visibility list for a 16-face object can be fetched in a single access.

If an object has complex geometry, just split it up until the parts can be reasoned about in the same way.

Remember the golden rule: "the fastest computation is the one you don't perform".

Originally Posted by Mrs Beanbag
As for Z-sorting for the draw order of the objects, it might be better to sort by distance (or distance squared) and then do some smooth sort such as insertion sort, which has to do less work the fewer objects change order from one frame to the next (best case O(n)). That way, just spinning the camera round won't lead to any change in the ordering, and as long as object and camera motion is fairly sensible, the draw order won't radically change from one frame to the next so the sorting should stay near-linear.
The same principle applies here: encode the scene semantically and use that higher-level knowledge to avoid computations.
Clustering objects by proximity reduces the range over which sorting must be applied: sort the groups by Z squared, then sort the objects within each group.
The best case stays O(n), but the worst and average cases improve considerably.
You can also avoid sorting entirely if you have higher-level knowledge about object trajectories: if two objects move at constant velocities, the moment their Z values become equal can be computed in advance. It's a very specific and unlikely example, but you get the idea: step back to see the bigger picture and you can remove a lot of work.
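For reference, here is the kind of insertion sort being talked about, sketched in C on an assumed record layout (the `Obj` struct and field names are made up for illustration). The same routine works at both levels: once on the group records, then on each group's slice of objects. On nearly sorted input, i.e. consecutive frames, it runs close to O(n):

```c
/* Hypothetical object record: id plus cached squared distance to camera.
   Squared distance avoids a square root and sorts identically. */
typedef struct {
    int  id;
    long z2;   /* distance squared, fixed point or integer */
} Obj;

/* Plain insertion sort, ascending by z2. Each frame the array is already
   almost sorted, so the inner while loop rarely shifts anything. */
void insertion_sort_z2(Obj *a, int n)
{
    for (int i = 1; i < n; i++) {
        Obj key = a[i];
        int j = i - 1;
        while (j >= 0 && a[j].z2 > key.z2) {  /* shift larger entries up */
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = key;
    }
}
```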

A bit more on-topic: lately I have been thinking about reducing blitter bandwidth when blitting polygons, and Mrs Beanbag's suggestion to reorder the palette struck a chord.
It seems possible to reduce the number of bitplanes to draw by subdividing the screen into horizontal sections, each with a roughly constant number of colors drawn. Sections using only a few colors can have their palette reordered by the copper so that polygons are blitted into the minimum number of bitplanes.
This is not Amiga-specific per se, but the copper makes the overhead extremely low.
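To make the bookkeeping concrete, here is a small C sketch of the two helpers such a scheme would need: counting the distinct pens used in a strip, and turning that count into a minimum bitplane count (ceil(log2) of the color count). Both function names and the 32-pen limit are my assumptions for illustration:

```c
#include <stdint.h>

/* Count distinct pens (0..31) used in one horizontal strip,
   given the pen index of every polygon span drawn there. */
int colors_in_strip(const uint8_t *pens, int n)
{
    uint32_t used = 0;
    for (int i = 0; i < n; i++)
        used |= 1u << (pens[i] & 31);   /* one bit per pen */
    int count = 0;
    while (used) {                      /* population count */
        count += used & 1;
        used >>= 1;
    }
    return count;
}

/* Minimum bitplanes needed to encode nb_colors distinct colors:
   smallest p with 2^p >= nb_colors. */
int planes_needed(int nb_colors)
{
    int p = 0;
    while ((1 << p) < nb_colors) p++;
    return p;
}
```

A strip using, say, 3 colors needs only 2 bitplanes worth of blits once the copper remaps those colors to pens 0-3 for that strip, instead of the full screen depth.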

I haven't gotten around to detailing the required computations, but I would say this is at least worth considering.
ReadOnlyCat is offline  