14 October 2018, 19:42  #1 
Registered User
Join Date: Dec 2010
Location: Athens/Greece
Age: 47
Posts: 448

Coding a 3d world - Advice needed
I am trying to figure out how to do 3d transformations and I would really like some help from you guys that have played with these things before.
Ok, after researching (google) this a bit, I think the order is ModelTransformation, ViewTransformation, ProjectionTransformation. So, for each of your objects (models) you do Scale, Rotation, Translation. This is ModelTransformation. Optimisation: since sin/cos are involved in this, and all the points of the same object have the same rotationx,rotationy and rotationz, you grab 6 sin/cos before the objectpointsloop and instead of N*6 sin/cos you use 6 sin/cos per object. I have working C code that does that and then a basic projection from x,y,z to x.y and draws the objects and appears to work fine. (wireframe) Since I don't do ViewTransformation, I think that implies that the camera is at 0,0,0 "straight" looking down the z. (right?) My first problem is how to introduce the camera in the code. (Think Elitespacegamelike camera, so the camera is at your ship's x,y,z and has the orientation of the ship) Do you guys use matrices for all that? 3x3 or 4x4? My second problem is visibility. As it is now my code draws all linesfaces. So, I think I need normals for that. Is it cheaper to have precalc normal for each face, which you rotate in modeltransformation and then in viewtransformation? Or calculate normal after all the transformations from 3 points in the face? And then what? Dotproduct of normal with vector camera>somepointofface and check sign? In order to parse and hold the 3dmodels I found, I came up with the following data structures. Code:
struct vec { float x; float y; float z; int screen_x; int screen_y; }; // Each object has a vertices list struct vertlist { int size; struct vec *points; }; // Each object has many polygons struct poly { int psize; //num of polygons, say N int *size; //size[N], keeps number of vertices for each polygon int **index; // vertice index index[polygonN][1..size[N]] }; struct obj { struct vec pos; struct vec speed; struct vec rot; struct vertlist *vlist; struct poly *polys; }; * db.c source code * db amiga executable (compiled like: m68kamigaosgcc noixemul O3 o db db.c lm) * COBRAMK3.X and CORIOLIS.X, 3d models from original elite the thing runs for 200 screen updates, takes about 8.16 seconds in emulated A1200 I plan to migrate to fixedpoint and lookup tables for sin/cos at the end. Btw, if you happen to know an existing tutorial on all this please share the url. 
14 October 2018, 23:04  #2  
Registered User
Join Date: Jul 2018
Location: Londonish / UK
Posts: 109

Quote:
1: precalculate a unit normal for each face, which gets rotated and translated with the model, then look at the sign of the Z component to determine visibility.

2: just calculate the Z part of the normal of the faces once they're transformed. The calculation of this turns out to be the same as calculating the area under the 2D projection of the face, so you sometimes see it described as that in conversations about winding order.

I'm currently using option 2 as it's obviously quicker, but option 1 can give you opportunities to use that unit normal for other things, such as shading.

EDIT: Probably not told you anything you didn't already know, but I felt like sharing.

Last edited by deimos; 14 October 2018 at 23:14.
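Option 2 can be sketched as a single cross-product Z term on the projected 2D vertices (names are mine; this assumes counter-clockwise winding for front faces -- if your screen Y axis points down, the sign flips):

```c
typedef struct { float x, y; } vec2;

/* Twice the signed area of the projected triangle (a, b, c).
   Positive means the face winds counter-clockwise on screen,
   i.e. it faces the viewer under a CCW-front convention. */
static int face_is_visible(vec2 a, vec2 b, vec2 c)
{
    float nz = (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
    return nz > 0.0f;
}
```

For polygons with more than three vertices, any three non-collinear vertices of a planar convex face give the same sign.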

14 October 2018, 23:18  #3 
Registered User
Join Date: Dec 2010
Location: Athens/Greece
Age: 47
Posts: 448


15 October 2018, 10:33  #4 
Registered User
Join Date: Sep 2011
Location: Paris/France
Posts: 186

hello
You should learn how to use matrices, as that is the way to combine "transformations" like the Model Transformation with the View Transformation, by multiplying matrices. Then this ModelView Transformation matrix is used to transform the vertices.

About the camera: you need to generate a matrix for the view transformation. In OpenGL there is a very simple function called gluLookAt for doing that, so just search for the source of this function in several packages like StormMesa, MiniGL, Kazmath.

Code:
/*=================================================================*/
inline void MultM(register float *M1, register float *M2, float *M3)
{
    float M[16];
    M[0 ] = M1[0]*M2[0]  + M1[1]*M2[4]  + M1[2]*M2[8]   + M1[3]*M2[12];
    M[1 ] = M1[0]*M2[1]  + M1[1]*M2[5]  + M1[2]*M2[9]   + M1[3]*M2[13];
    M[2 ] = M1[0]*M2[2]  + M1[1]*M2[6]  + M1[2]*M2[10]  + M1[3]*M2[14];
    M[3 ] = M1[0]*M2[3]  + M1[1]*M2[7]  + M1[2]*M2[11]  + M1[3]*M2[15];
    M[4 ] = M1[4]*M2[0]  + M1[5]*M2[4]  + M1[6]*M2[8]   + M1[7]*M2[12];
    M[5 ] = M1[4]*M2[1]  + M1[5]*M2[5]  + M1[6]*M2[9]   + M1[7]*M2[13];
    M[6 ] = M1[4]*M2[2]  + M1[5]*M2[6]  + M1[6]*M2[10]  + M1[7]*M2[14];
    M[7 ] = M1[4]*M2[3]  + M1[5]*M2[7]  + M1[6]*M2[11]  + M1[7]*M2[15];
    M[8 ] = M1[8]*M2[0]  + M1[9]*M2[4]  + M1[10]*M2[8]  + M1[11]*M2[12];
    M[9 ] = M1[8]*M2[1]  + M1[9]*M2[5]  + M1[10]*M2[9]  + M1[11]*M2[13];
    M[10] = M1[8]*M2[2]  + M1[9]*M2[6]  + M1[10]*M2[10] + M1[11]*M2[14];
    M[11] = M1[8]*M2[3]  + M1[9]*M2[7]  + M1[10]*M2[11] + M1[11]*M2[15];
    M[12] = M1[12]*M2[0] + M1[13]*M2[4] + M1[14]*M2[8]  + M1[15]*M2[12];
    M[13] = M1[12]*M2[1] + M1[13]*M2[5] + M1[14]*M2[9]  + M1[15]*M2[13];
    M[14] = M1[12]*M2[2] + M1[13]*M2[6] + M1[14]*M2[10] + M1[15]*M2[14];
    M[15] = M1[12]*M2[3] + M1[13]*M2[7] + M1[14]*M2[11] + M1[15]*M2[15];
    CopyM(M3, M);
}
/*=================================================================*/
void CopyTransformVfast(register float *M, Vertex3D *V, Vertex3D *V2, LONG Vnb)
{
    /* copy & transform points with a given matrix */
    register float x;
    register float y;
    register float z;
    while (Vnb) {
        x = V->x; y = V->y; z = V->z;
        V2->x = M[0]*x + M[4]*y + M[8]*z  + M[12];
        V2->y = M[1]*x + M[5]*y + M[9]*z  + M[13];
        V2->z = M[2]*x + M[6]*y + M[10]*z + M[14];
        V++; V2++; Vnb--;
    }
}
15 October 2018, 13:29  #5  
Registered User
Join Date: Jul 2018
Location: Londonish / UK
Posts: 109

Quote:
I personally don't use a matrix for the final perspective transform; it seems easier not to. So, assuming your models are to the same scale as your world, you just need to rotate-then-translate them for your model-to-world transform, and then translate-then-rotate them for your world-to-camera transform.

There are different ways to calculate rotation matrices. I like to think in terms of roll, pitch and yaw. In terms of left-handed coordinates, rotation around X is pitch, Y is yaw, Z is roll. There's a specific order in which you can combine them so that they work sensibly for little spaceships.

For your camera you may prefer the gluLookAt style mentioned by thellier, or even have several options so that you can switch views or have special viewpoints for different situations: often games will pan around your craft as it explodes when the game ends, for instance.
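The two orders described above can be sketched as plain point operations (the names are mine, not the poster's; the rotation matrix is assumed to be orthonormal so its inverse is its transpose):

```c
typedef struct { float x, y, z; } vec3;
typedef struct { float m[3][3]; } mat3;

static vec3 mul_mat3(const mat3 *r, vec3 p)
{
    vec3 o;
    o.x = r->m[0][0]*p.x + r->m[0][1]*p.y + r->m[0][2]*p.z;
    o.y = r->m[1][0]*p.x + r->m[1][1]*p.y + r->m[1][2]*p.z;
    o.z = r->m[2][0]*p.x + r->m[2][1]*p.y + r->m[2][2]*p.z;
    return o;
}

/* model-to-world: rotate first, then translate to the object's position */
static vec3 model_to_world(const mat3 *obj_rot, vec3 obj_pos, vec3 p)
{
    vec3 r = mul_mat3(obj_rot, p);
    r.x += obj_pos.x; r.y += obj_pos.y; r.z += obj_pos.z;
    return r;
}

/* world-to-camera: translate first (subtract the camera position), then
   rotate by the inverse of the camera orientation -- for a pure rotation
   the inverse is the transpose, applied here by swapping the indices. */
static vec3 world_to_camera(const mat3 *cam_rot, vec3 cam_pos, vec3 p)
{
    vec3 t = { p.x - cam_pos.x, p.y - cam_pos.y, p.z - cam_pos.z };
    vec3 o;
    o.x = cam_rot->m[0][0]*t.x + cam_rot->m[1][0]*t.y + cam_rot->m[2][0]*t.z;
    o.y = cam_rot->m[0][1]*t.x + cam_rot->m[1][1]*t.y + cam_rot->m[2][1]*t.z;
    o.z = cam_rot->m[0][2]*t.x + cam_rot->m[1][2]*t.y + cam_rot->m[2][2]*t.z;
    return o;
}
```

Using the ship's own orientation matrix as cam_rot and its position as cam_pos gives exactly the Elite-style camera asked about in the first post.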

15 October 2018, 13:32  #6  
Registered User
Join Date: Apr 2018
Location: Stockholm / Sweden
Posts: 12

Quote:
You can do backface culling (BFC) either in eye coordinates, or in window coordinates.

BFC in eye coordinates: Let N be the normal vector of the polygon (in eye coordinates), and P be any vertex of the polygon (in eye coordinates). If the dot product N · P > 0 then the back of the polygon faces the camera, and the polygon should not be drawn.

BFC in window coordinates: I'll explain this method in a little more detail, because it might be interesting to understand what is going on. Consider the second image from the top on this page: https://www.scratchapixel.com/lesson...toknowfirst. It shows how the perspective matrix, together with the w divide, transforms vertices in the view frustum to the canonical view volume (normalized device coordinates). Note that when a polygon has been transformed to the canonical view volume it is possible to determine if the polygon is facing front or back by considering only the z coordinate of the normal vector. The normal vector is calculated as the cross product (v2 - v1) × (v3 - v1), where v1, v2, v3 are three vertices of the polygon, with suitable winding. By looking at the equations for the cross product we see that we get the z coordinate as: N.z = (v2.x - v1.x) * (v3.y - v1.y) - (v2.y - v1.y) * (v3.x - v1.x).

Often we don't calculate the normalized device coordinates explicitly, but instead do the transformation directly to window coordinates, where the z coordinate is also dropped. However, if you look at the equation for N.z above, it is invariant to the scaling which is performed to transform between normalized device coordinates and window coordinates, and the equation also references only the x and y coordinates of the vertices.

To summarize the "BFC in window coordinates" method: transform the vertices all the way to window coordinates (including the perspective divide), then calculate N.z = (v2.x - v1.x) * (v3.y - v1.y) - (v2.y - v1.y) * (v3.x - v1.x), and look at the sign of N.z to determine if the back or front of the polygon is facing the camera.

Which method is more efficient depends. If you need to calculate normal vectors for some other reason than BFC (e.g. for lighting calculations) then the eye coordinates method may be more efficient. But more likely the window coordinates method is cheaper.
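The eye-coordinate test above is a single dot product; a minimal sketch (my names, assuming an OpenGL-style camera looking down -Z, with visible vertices at negative z):

```c
/* N . P > 0 means the back of the polygon faces the camera.
   (nx,ny,nz) is the face normal and (px,py,pz) any of its vertices,
   both already transformed to eye coordinates. */
static int backfacing_eye(float nx, float ny, float nz,
                          float px, float py, float pz)
{
    return nx*px + ny*py + nz*pz > 0.0f;
}
```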
