Post #10, 21 May 2016, 01:08
matthey (Registered User; Join Date: Jan 2010; Location: Kansas; Posts: 873)
Originally Posted by idrougge:
</end matthey off-topic rant>
It is good to know, before you begin, who the bad players are (it is not always clear) who will try to sabotage or roadblock your efforts. Yes, I would likely get censored on certain other websites for saying something similar. Hmm, actually I already have.

Originally Posted by Heiroglyph:
My plan is to start by patching in anyway.

I assumed I'd start one entry point at a time.
Write a work-alike (as much as is feasible)
Run a test app that makes some test calls, patches in my replacement, then makes the same test calls again, and compares the results.

That will pretty quickly tell me where there is internal data and I can potentially put off the chipset functions until later.
Sounds reasonable. Some of the internal data accesses are not so hidden: most undocumented LVO functions are known, and installed patches can be watched, even if it is a pain. I don't think the newer AmigaOS libraries will be too much of a problem.
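
For what it's worth, here is a rough sketch of that patch-and-compare loop using exec's SetFunction(). The WritePixel() LVO offset and the vbcc register-parameter syntax are from memory, so treat them as assumptions and verify both against the NDK fd files and autodocs before trusting it:

#include <proto/exec.h>
#include <proto/graphics.h>
#include <graphics/gfxbase.h>
#include <graphics/rastport.h>

/* Assumed LVO offset -- look it up in the NDK fd file before use. */
#define LVO_WritePixel (-324)

extern struct GfxBase *GfxBase;   /* assumed already opened elsewhere */

/* vbcc m68k register-parameter syntax; the replacement must use the
   same registers as the original entry point (see the autodocs:
   WritePixel takes rp in A1, x in D0, y in D1). */
static LONG MyWritePixel(__reg("a1") struct RastPort *rp,
                         __reg("d0") LONG x, __reg("d1") LONG y)
{
    /* ... C work-alike goes here; return -1 on failure like the ROM ... */
    return 0;
}

void CompareOnce(struct RastPort *rp)
{
    LONG rom, mine;
    APTR old;

    rom = WritePixel(rp, 10, 10);          /* original ROM version */

    old = SetFunction((struct Library *)GfxBase,
                      LVO_WritePixel, (APTR)MyWritePixel);

    mine = WritePixel(rp, 10, 10);         /* now goes through the patch */

    SetFunction((struct Library *)GfxBase, LVO_WritePixel, old);

    if (rom != mine) {
        /* mismatch: the work-alike diverges from the original */
    }
}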

Originally Posted by Heiroglyph:
The chipset parts are mostly available in AROS, from what I understand; they're just slow. They might want some of this code if we can improve 68k performance. It is the only platform I'm targeting with this, so it will have to be reasonably fast or else be rewritten.
It remains to be seen how well you can do without assembler. Hand-optimized 68k assembler could probably beat the AmigaOS graphics.library, which is likely a mix of C and 68000-optimized code. Compatibility is the hard part. I haven't done much Amiga custom chip programming, so I can't help you much there.
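
To give a feel for the kind of optimization I mean, here is a trivial C sketch of a span fill that writes longwords where it can instead of bytes. A hand-written 68k version would go further (move.l loops, movem.l bursts), but even in C this is the general shape:

#include <exec/types.h>

/* Illustration only: the width-of-the-bus trick a replacement
   fill routine would use on a byte-aligned bitplane span. */
void FillSpan(UBYTE *p, ULONG bytes, UBYTE pattern)
{
    ULONG pat32 = pattern * 0x01010101UL;

    /* Align to a longword boundary first: misaligned longword
       writes trap on the plain 68000. */
    while (bytes && ((ULONG)p & 3)) { *p++ = pattern; bytes--; }

    /* 32-bit stores: four times fewer writes than a byte loop. */
    while (bytes >= 4) { *(ULONG *)p = pat32; p += 4; bytes -= 4; }

    /* Remaining tail bytes. */
    while (bytes--) *p++ = pattern;
}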

Originally Posted by Heiroglyph:
I want to stick with C because ASM isn't clear enough for most people to contribute or even just read.
68k assembler is easy to read and compact, but it is not as structured or forgiving as C. It took me longer to learn C than 68k assembler, but then there is more to know with C, and some of it is hidden.

Originally Posted by Heiroglyph:
I'm confident that it will be fast enough for the most part with only certain sections needing optimizations in assembly.

Is there a compiler that you feel produces better code and is still well supported?

If not, I like the idea of vbcc because it's better integrated with the platform, still developed, and open source, and the developers are a known quantity.
GCC 2.95.3 usually gives better integer code quality on the 68k. However, it is not compatible with modern GCC, it doesn't fully support C99, and its FPU support is basic. I would use vbcc as well. If you stick to C99, the code should stay compatible with GCC and other compilers. There are likely to be some code-quality improvements in vbcc even if the 68k backend doesn't get much attention (which is what I expect, though I hope I'm wrong), and the active support is great and more important than code quality at this point.

vbcc can give worse code quality, but it is right there with SAS/C and GCC 3.x, depending on the code. It does best with 32-bit integer datatypes and does a poor job with smaller ones, which it tries to promote to 32 bits. Promoting integers is good for most modern processors, and sometimes even for the 68060, but on the 68k it is sometimes slower and generates larger code (the 68020/68030/68060 are slowed by fetching larger instructions). The later ColdFire received the instructions needed to promote everything efficiently, but even then, dropping the .w and .b sizes reduced performance compared to the 68060. The ColdFire v5 is only faster than the 68060 because it is clocked higher.
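
As a concrete example of the promotion issue, consider code like the following. A compiler that keeps 16-bit operations can emit a single add.w (a0)+,d0 in the loop, while one that promotes everything to 32 bits ends up with extra extension/masking and .l operations that are larger and, on the 68020/68030/68060, slower to fetch. This is just an illustration of the point, not a benchmark:

#include <exec/types.h>

/* Summing 16-bit words: a good 68k compiler keeps this in .w sizes. */
UWORD SumWords(const UWORD *src, UWORD n)
{
    UWORD sum = 0;

    while (n--)
        sum += *src++;   /* ideally a single add.w (a0)+,d0 */

    return sum;
}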