16 January 2017, 10:01 | #41
Computer Nerd
Join Date: Sep 2007
Location: Rotterdam/Netherlands
Age: 47
Posts: 3,751
16 January 2017, 10:08 | #42
Total Chaos forever!
Join Date: Aug 2007
Location: Waterville, MN, USA
Age: 49
Posts: 2,186

Quote:
As for compilers, they are so complex that they are not a profitable business, so reusable code is readily available in the form of open source.
16 January 2017, 10:14 | #43
son of 68k
Join Date: Nov 2007
Location: Lyon / France
Age: 51
Posts: 5,323

Quote:
This doesn't change the value instructions can have.
29 January 2017, 16:22 | #44
Registered User
Join Date: Mar 2016
Location: Ozherele
Posts: 229

I have tested pi-spigot on real IBM PC 386DX hardware and found that my estimates were slightly off. The 14 MHz 68020 in the Amiga 1200 is still a bit faster than a 12.5 MHz 80386. However, it was mentioned that FS-UAE is not very accurate for the 68020; I am curious about its level of accuracy and have started a new thread - http://eab.abime.net/showthread.php?t=85750.
The project tables have been updated - http://litwr2.atspace.eu/pi/pi-spigot-benchmark.html
29 January 2017, 16:34 | #45
son of 68k
Join Date: Nov 2007
Location: Lyon / France
Age: 51
Posts: 5,323

What would be interesting (at least for me) is trying, for every machine, to get the smallest possible executable performing that computation.
On your page you give the executable sizes, but they are well above the minimum achievable. In this "code density" challenge, old machines would stand a chance against modern ones.
29 January 2017, 17:16 | #46
Registered User
Join Date: Mar 2016
Location: Ozherele
Posts: 229

My priorities are:
1) speed; 2) number of digits; 3) program size. The UI code is larger than the calculation code on CPUs with hardware division.
01 February 2017, 16:12 | #47
Registered User
Join Date: May 2014
Location: inside the emulator
Posts: 377

Quote:
Even now, with little change in operating frequencies, an ASIC isn't likely to limit peak performance in order to optimize something like multiplication.
01 February 2017, 16:34 | #48
Registered User
Join Date: May 2014
Location: inside the emulator
Posts: 377

Quote:
What 68010 bug made a comeback BTW?
01 February 2017, 19:56 | #49
Computer Nerd
Join Date: Sep 2007
Location: Rotterdam/Netherlands
Age: 47
Posts: 3,751
01 February 2017, 20:08 | #50
son of 68k
Join Date: Nov 2007
Location: Lyon / France
Age: 51
Posts: 5,323

Right, if you don't consider the ColdFire a 68k. But this encoding space was used for mvs/mvz, two useful additions I would have liked to have in a future CPU.
Note that it's not exactly "never" either: early 68000 masks had the dcnt instruction (ancestor of the current dbcc) here.
It's a 68000 bug, not a 68010 one: the 68000 move-from-SR bug that was fixed in the 68010. This bug is bad for virtualization. Pretty much unimportant for current uses, but not nice if you plan for some distant future.

Quote:
I'd like to see that, especially from the code density point of view. Perhaps I'm alone... wouldn't be the first time. I've searched the net, but so far I've never found code density comparisons of many CPUs with real source code...
01 February 2017, 20:43 | #51
Banned
Join Date: Jan 2010
Location: Kansas
Posts: 1,284

Quote:
Code:
lea (4,A0,D0),A1 ; 3 additions
lea (4,A2,D1),A3 ; 3 additions
Code:
add.l {4,A0,D0},A1 ; 4 additions
add.l {4,A2,D1},A3 ; 4 additions
add.l {4,A4,D2},A5 ; 4 additions

I would expect that making 32x32=32 multiplication single cycle may limit the maximum clock rate. That is not necessarily a problem if the target is embedded and maximum performance per MHz per core is wanted. It is simpler (saves logic) to have single-cycle instructions.

Quote:
Quote:
SPARC16: A new compression approach for the SPARC architecture
https://www.researchgate.net/profile...chitecture.pdf

There was a more recent "attempt" which included 8-bit CPUs but used only a tiny hand-optimized assembler program with mostly byte-sized data (text). I downloaded the assembler program for the 68k and wrote the author to tell him he would be better off using a compiler for the 68k, even though compilers generate horrible code too. I told him his whole study was majorly flawed and should be taken down as misinformation (it looks like an official study by a college student, but it is complete rubbish). He got defensive, so I don't think he changed anything. His background was as a DOS x86 programmer, as I recall, and the 8086 did finish with the best code density (it would be good for tiny text programs). This attempt is really laughable, as it mostly compared his own cross-platform assembler programming skills, which are poor.

Code Density Concerns for New Architectures
http://web.eece.maine.edu/~vweaver/p...09_density.pdf

It would be nice to see a good code density comparison of many processors, but such comparisons are inherently flawed. This litwr pi program uses OS code, which reduces the executable size. It would be better to write the characters to a buffer in memory and exclude the printing support code from the code density calculation. The timing should also measure only the time needed to write to the buffer, not to print; the result could be null-terminated and printed afterward. There are still several potential flaws, like the over-importance of division instruction performance (as litwr noted) and the tiny program size being non-representative of general CPU performance. Still, it would be interesting, knowing these caveats.

Last edited by matthey; 01 February 2017 at 22:33.
02 February 2017, 10:36 | #52
Computer Nerd
Join Date: Sep 2007
Location: Rotterdam/Netherlands
Age: 47
Posts: 3,751
02 February 2017, 12:36 | #53
Registered User
Join Date: May 2014
Location: inside the emulator
Posts: 377

Quote:
BTW: Official study? Are you really that clueless about how academic studies are done?!? There are no official studies - studies are done and published by individuals and/or groups. If you (or anybody else) don't like one, then you publish something yourself - it is that simple. One does not go around bad-mouthing a study and whining in public about how badly it was done (especially with no proof) and calling the authors' skills into question. Really, I thought better of you. This behavior befits an immature crank!
P.S. The 8086 has very good code density for some tasks; the fact that instructions can be pretty complex while being one byte in size makes a huge difference. Don't like it? Grow up.
P.S.^2 Yes, I'm pissed off.
02 February 2017, 14:54 | #54
son of 68k
Join Date: Nov 2007
Location: Lyon / France
Age: 51
Posts: 5,323

Quote:
The best thing would be to recruit asm programmers for every architecture, so each would write the shortest code for his machine. A speed comparison of 6502/Z80 vs 68000 wouldn't be very meaningful... However, when it comes to short code, they are competitive.
02 February 2017, 15:57 | #55
Computer Nerd
Join Date: Sep 2007
Location: Rotterdam/Netherlands
Age: 47
Posts: 3,751

Quote:
Yes, but why does that matter? Speed is far more useful than code size.
02 February 2017, 16:07 | #56
son of 68k
Join Date: Nov 2007
Location: Lyon / France
Age: 51
Posts: 5,323

Anyway, I just like short code.
02 February 2017, 16:11 | #57
Banned
Join Date: Jan 2010
Location: Kansas
Posts: 1,284

I watched a Bill Nye show on climate change a few months back, and they brought up a graph for a few seconds with the PPM of CO2 over time. I said out loud, "Wait a second, that graph didn't start at zero." My brother used the DVR to rewind and freeze the frame with the graph. I then looked up a graph which started at zero PPM, and it changed the whole picture, as I expected. I then found a chart which went back thousands more years, and all of a sudden what looked like a hysteria-causing chart became unremarkable. This is a good example of misinformation distorting the truth. It made me lose respect for everyone on and involved with that show. I would have demanded they zero that chart if I had anything to do with the show. Many people do not understand the scientific method, statistics, critical thinking and propaganda, and are more easily swayed. I do have a problem with the propaganda propagators, brainwashers and deceivers of the world. Sadly, they have caused great evil and loss even in this modern world, where we benefit so much from science.
02 February 2017, 16:17 | #58
Computer Nerd
Join Date: Sep 2007
Location: Rotterdam/Netherlands
Age: 47
Posts: 3,751

Except 4-bit CPUs (Intel 4004). It would be interesting to see how much faster later CPUs are.
True. I like fast code.
02 February 2017, 18:04 | #59
Banned
Join Date: Jan 2010
Location: Kansas
Posts: 1,284

Code density does matter even to modern processors. If you had read the, IMO, good SPARC16 research, it talks about (instruction) cache miss rates. There is a substantial difference in cache miss rates between a low code density RISC CPU and a high code density CISC or compressed RISC (Thumb, Thumb-2, MIPS16(e), microMIPS, PPC CodePack, SPARC16, etc.) CPU. Reducing the cache misses results in substantially better performance and reduced power consumption. Most of the research has been done on RISC processors with compression. Examples of research study results:

1) Thumb-2 showing 55%-70% compression, with a 30% speed gain on a 16-bit bus and a 10% loss on a 32-bit bus
2) microMIPS provided a 65% compression ratio, giving a 2% speedup vs MIPS32
3) A general RISC compression showed a 35% reduction in code size gave a 10% reduction in power consumption and a 20% performance improvement
4) Compiler heuristics that gave an 85% compression ratio improved performance by 17%
5) Link-time ARM optimizations reducing code size by 16%-18% provided an 8%-17.4% performance gain and a 7.9%-16.2% reduction in power consumption

References can be found in the "Design and evaluation of compact ISA extensions" paper, which is the 2nd article after the "SPARC16: A new compression approach for the SPARC architecture" link. There is a wide range of performance results for RISC compression, which I suspect is due to the decreased functionality and increased number of instructions of many of these RISC compression schemes. The compiler and heuristics improvements were larger, possibly because they did not decrease functionality (introduce limitations) and likely decreased the number of instructions. Adding instructions to an enhanced 68k ISA could increase functionality and decrease the number of instructions as well as improve code density, so I would expect to see performance and power consumption improvements at the high end of these study results.

Perhaps 68k ISA enhancements could help compilers generate better code, which would also improve code density and have synergies with other code density performance improvements. It is sad that an enhanced 68k ISA could naturally have better code density than probably all of these RISC compression schemes, with minimal implementation issues, yet gets practically no attention or research. The old "SPARC16: A new compression approach for the SPARC architecture" paper lists the following architectures as having the least code density (starting with the worst):

1) Alpha (dead)
2) MIPS (on life support)
3) PowerPC (on life support)
4) SPARC (on life support)
5) ARM original (abandoned/replaced by the higher code density Thumb-2)

Is it just a coincidence that these low code density architectures are antiquated? PA-RISC, another very low code density RISC architecture, which the Amiga was going to use, isn't mentioned either. Does size matter for my girl? I think she would "prefer" short code and my long ... post.

Last edited by matthey; 02 February 2017 at 18:29.
03 February 2017, 14:40 | #60
Registered User
Join Date: May 2013
Location: Grimstad / Norway
Posts: 839

I just recently read a paper on RISC-V (https://people.eecs.berkeley.edu/~kr...ECS-2016-1.pdf), which has been designed with compression in mind. They discussed (among others) x86 and x64 code size and found it rather larger (IMO) than conventional wisdom claims. The 68k was not mentioned, but I'm guessing it would begin to look good in that regard.
(My only complaints about RISC-V would be that it has no big<->little endian instructions or loads/stores, and no load/store multiple.)

Last edited by NorthWay; 03 February 2017 at 14:54. Reason: link