Posted by matthey on 17 September 2015, 20:46 (post #184)
Originally Posted by ReadOnlyCat:
Thanks for the detailed explanation. This makes a lot of sense.
I assume that the vbcc backends are mostly code and are not data driven?
The backend deals mostly with data. By the time it reaches the backend, the C code has already been parsed and converted into vbcc's own data structures by the frontend, so a good understanding of those structures is necessary to understand how the backend works. The backend itself is mostly code which walks the data generated by the frontend and emits target code. The vbcc manual gives information on writing a backend.
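To give a rough idea of the shape of the job (this is an illustrative stand-in, not vbcc's real types or its 68k backend; the actual structures are documented in the vbcc manual and source), the frontend hands the backend a linked list of intermediate-code operations, and code generation is essentially a walk over that list:

Code:
/* Illustrative stand-in only: simplified types loosely modelled on the
 * kind of intermediate code a compiler frontend hands its backend.
 * The real vbcc structures and names are in the vbcc source and manual. */
#include <stdio.h>

enum ic_code { IC_ASSIGN, IC_ADD, IC_SUB };   /* tiny hypothetical subset */

struct ic {                  /* one intermediate-code operation           */
    enum ic_code code;       /* what to do                                */
    int q1, q2, z;           /* sources and target (virtual regs here)    */
    struct ic *next;         /* the frontend emits a linked list          */
};

/* A backend is essentially a loop over that list, emitting target code. */
static void gen_code(FILE *out, struct ic *p)
{
    for (; p; p = p->next) {
        switch (p->code) {
        case IC_ASSIGN:
            fprintf(out, "\tmove.l\tr%d,r%d\n", p->q1, p->z);
            break;
        case IC_ADD:         /* two-address style: assumes z == q2 */
            fprintf(out, "\tadd.l\tr%d,r%d\n", p->q1, p->z);
            break;
        case IC_SUB:
            fprintf(out, "\tsub.l\tr%d,r%d\n", p->q1, p->z);
            break;
        }
    }
}

int main(void)
{
    struct ic add = { IC_ADD, 1, 2, 2, NULL };    /* r2 = r2 + r1 */
    struct ic mov = { IC_ASSIGN, 0, 0, 1, &add }; /* r1 = r0      */
    gen_code(stdout, &mov);
    return 0;
}

The real backend is of course far larger and deals with registers, addressing modes and all the 68k details, but the overall structure is the same: code iterating over frontend-generated data.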

The current source code with 68k backend is at the following link.

The code is complex and intricate, but it is well organized and readable (some comments are in German, though). There are few dependencies, and the source compiles with vbcc on my high-end Amiga. Look at, and try to compile, the GCC source code for comparison some time.

Originally Posted by ReadOnlyCat:
You mentioned that the 68k's CC behavior can prevent some peephole optimizations from being applied, but could this be worked around by adding liveness analysis of the code right after the peephole window? If it can be determined that the relevant bits of the CC are trashed a few instructions after the peephole window, then the optimization can be applied (provided the possibility of branching is taken care of).

I have not looked at the source code, so maybe this is already done to some extent, or maybe it is not doable with the current architecture; I guess only Phx could tell us.
Vasm could do multi-instruction optimizations, but that gets resource hungry (CPU and memory), and vasm is already fairly resource hungry, being written in portable, cross-platform C. Frank has told me before that he does not want to add multi-instruction optimizations at this time. Vasm can and does generate multiple instructions from a single instruction in its peephole optimizations, though.

It may make more sense to move multi-instruction optimizations into the compiler backend or possibly a new 68k instruction scheduler. Multi-instruction analysis is already done in a backend or an instruction scheduler, so it fits there better than in the assembler. Volker and Frank may have different opinions, though.
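ReadOnlyCat's liveness idea is easy to sketch. Before applying a replacement that changes how the condition codes are set, scan forward from the end of the peephole window: if the flags are rewritten before anything can read them, the rewrite is safe. A minimal, self-contained sketch of that check (hypothetical code, not taken from vasm or vbcc, and straight-line code only):

Code:
/* Hypothetical sketch of "check CC liveness after the peephole window".
 * Straight-line code only; a real pass would also have to handle labels
 * and follow branch targets. Not taken from vasm or vbcc. */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct insn {
    const char *mnemonic;
    bool reads_cc;           /* e.g. bcc/scc/addx consume the flags    */
    bool writes_cc;          /* most 68k data operations set the flags */
    bool is_branch_or_label; /* control flow: give up conservatively   */
};

/* True if the condition codes are dead right after insns[start], i.e.
 * a later instruction rewrites them before anything reads them. */
static bool cc_dead_after(const struct insn *insns, size_t n, size_t start)
{
    for (size_t i = start + 1; i < n; i++) {
        if (insns[i].is_branch_or_label) return false; /* unknown successor */
        if (insns[i].reads_cc)           return false; /* flags are live    */
        if (insns[i].writes_cc)          return true;  /* killed before use */
    }
    return false; /* fell off the end of the block: stay conservative */
}

int main(void)
{
    /* The move.l at index 1 rewrites the flags before the bne at index 2
     * could read them, so a flag-changing rewrite of index 0 is safe. */
    struct insn code[] = {
        { "sub.l d0,d0",  false, true,  false }, /* candidate window   */
        { "move.l d1,d2", false, true,  false }, /* rewrites the flags */
        { "bne .loop",    true,  false, true  },
    };
    printf("CC dead after candidate: %s\n",
           cc_dead_after(code, 3, 0) ? "yes" : "no");
    return 0;
}

With a check like that, a rewrite such as replacing sub.l d0,d0 with moveq #0,d0 (which leaves the X flag alone instead of clearing it) could be applied whenever the flags are provably dead. That kind of cross-instruction knowledge is easier to keep in the backend or a scheduler than in an assembler looking at one instruction at a time.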

Originally Posted by ReadOnlyCat:
I must say that, aside from memory, network and storage expansions, new hardware leaves me cold (with the exception of console-devkit-like instrumenting/debugging hardware, if it existed). My Amiga is a retro coding/gaming-only platform, and if I want to use other software I use a modern kitten.

I am willing to bet new retro users will think similarly.
The old hardware is very retro cool, but the I/O connections are almost all outdated and the hardware is getting old. I never want to see a floppy again; I don't know if it's the humidity where I live or what, but I have had horrible luck with floppies and floppy drives. We need support for modern media cards, USB and ethernet at least. Support for SATA and HDMI/DVI isn't necessary yet but would be awesome.

The FPGA Arcade and Mist are consumer retro hardware chameleons, but they are not designed as Amiga upgrades. The Natami was designed from the ground up as an Amiga enhancement and shows that new FPGA hardware can be exciting and awesome. I consider it 3rd generation FPGA hardware that was ahead of its time, which made it difficult to bring out. Most of the FPGA hardware out there, like the FPGA Arcade and Mist, I consider to be 2nd generation. If this generation of FPGA hardware is successful, then we will likely see 3rd gen FPGA hardware with larger and faster FPGAs. Majsta's new accelerator will have a more powerful FPGA already.