Quote:
Originally Posted by matthey
(Post 1154112)
FORTRAN and BLISS languages had optimizing compilers much earlier. C compilers did not do much optimizing early on because C is a low level language and further optimization was thought unnecessary and slowed compile times.
|
Last year I read the book "Expert C Programming: Deep C Secrets" by Peter van der Linden, which goes into great detail about how the language design and the compiler evolved in tandem. The compiler was designed to be very compact, which, for example, explains why 'C' uses so very few reserved keywords and reuses them for different purposes (such as "static"). This is also why the more complex error-checking operations were split off into the separate "lint" command.
In order to make the compiler run fast, it had to load fast, too, and this put pressure on the language design to stay simple and compact. I reckon that the original portable 'C' compiler did not spend much effort on optimizing the code.
Quote:
Programmers were supposed to be able to write efficient low level code for the CPU/hardware (Dennis Ritchie created C to better take advantage of the PDP-11 features, the CPU which heavily influenced the 68k).
|
I read Lions' commentary on Unix 6th edition when it was reissued in the late 1990s: while the code had been edited for clarity and readability, it's hard to deny that it looks like it was hand-optimized, and then some. Very lean code - if only things could have stayed this straightforward ;)
Quote:
At the same time, C portability became more important than any previous language and the code could be optimized for different hardware. With portability came the need for more abstraction from the hardware and more of a need for optimization to give back performance which was lost in the process. This is the 2 headed monster of C with abstraction currently winning and low level coding suffering. I had a recent related e-mail conversation with Frank Wille (with comments from Dr Volker Barthelman). I was complaining about the vbcc 68k code generation for a simple and common C code which Dennis Ritchie himself had used in a book.
Code:
/*
 * Source: Kernighan & Ritchie, The C Programming
 * Language, 2nd Edition, Prentice Hall PTR, 1988,
 * p. 105
 *
 * strcpy: copy t to s; pointer version 2
 */
void strcpy(char *s, char *t)
{
    while ((*s++ = *t++) != '\0')
        ;
}
The vbcc generated 68k code needs 18 instructions and spills 2 registers, while optimal assembler code (obeying the 68k AT&T ABI) is 6 instructions with no register spills. Volker's translated comment was something like, "In the tests I did, this case occurred very rarely (and today most coding guidelines would forbid something like that)." It's almost like we have 2 different C languages, except that the abstracted and highly optimized version of the language seems to have won out, sometimes at a cost. Most old non-optimizing C compilers would do a better job with this code.
|
The old generation of 'C' compilers seems to have eventually died out in the 1990s. On the Amiga, Manx Aztec 'C' and Matt Dillon's DICE seem to have been the last of this branch.
But let's not overgeneralize: some optimizing 'C' compilers still generate cracking good assembly language code. The quality seems to be related to how well the back-end for the respective processor family is maintained, and the MC68000 family is not getting the kind of attention it used to receive some 25+ years ago.
Quote:
Right. It was not possible with what was available. If the Green Hill's compiler was doing a good job of function inlining, a reg arg ABI will not give much performance. A study I read shows about 1% performance gain for a reg arg ABI with heavy function inlining using GCC and about 7% with function inlining turned off. Of course functions in an Amiga shared library can't be inlined so the 7% might give an idea of the gain from getting rid of the stubs. This is not much for a non-CPU intensive library but would be nice to have as the work is supposedly already done and it would save some kickstart space as well.
|
It does save a lot of ROM space if you build intuition.library with SAS/C, using register parameter passing and keeping function inlining to a minimum. More space yet can be saved if you build the library for the 68020 instruction set. Even with the most basic optimizations enabled the code becomes more compact and faster.
It has been a while since I last looked at intuition.library. I recall that Intuition would not benefit greatly from inlining, flow analysis, etc. because it is fundamentally not built around algorithms and data structures which would benefit. Singly-linked lists are the "backbone" of almost all the linked data structures which Intuition uses. Because there are no accessor functions or APIs for stringing them together or processing them, scalability is very limited. The kind of optimizations which a compiler brings to the table will not help you conquer these limitations.
Quote:
Why wasn't the new intuition.library ever released (or was it in the 3.1 ROM but is still poorly optimized)?
|
Because Commodore went out of business before the major changes in Kickstart V42 could be more widely tested or sold.
I checked the code again, and it turns out that the last changes to intuition.library were made by David Junod (not by Peter Cherna) in February 1994. At that time Commodore was only some three months away from bankruptcy.