Old 30 January 2020, 16:36   #18
Thomas Richter
Quote:
Originally Posted by gulliver View Post
Following that train of thought:
Then why do we have NoFastMem?
It is not a very useful program nowadays. Applications that require it would typically not run on 3.1 or beyond anyhow, and users who want to run the remaining critical applications have found better ways of solving this problem, such as WHDLoad. Frankly, the only reason it is still part of the OS is that it was always there, and it is such a small program that it does not hurt much.



Quote:
Originally Posted by gulliver View Post
Then why did we support the doomsnd.library workaround?
Why do we even care about fixing bugs others made?
Customer experience is important, yes, but the effort for the audio.device workaround was minimal, whereas the effort for a correct FPU emulation is at least two orders of magnitude larger in development time (one day versus a hundred days, if not more).


Quote:
Originally Posted by gulliver View Post
Great! Then why not GCC 6.x for 68k, which is currently maintained?
Look, I am not trying to stop anyone from attempting this, but if I look at the ratio between effort and result, and I see all the other problems that require a solution, then I come to the conclusion that there are more than enough problems left for the next few years before this one becomes relevant.




Quote:
Originally Posted by gulliver View Post

"Difficult" says the man that has a degree in math. Come on Thomas!
Well, then let's look into the past to come to a realistic estimate: updating the 3.1.4 math libraries from "round to zero" (with all its implications for precision) to "round to nearest, ties to even" took about two weeks, testing and debugging included.
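To make concrete what such a rounding-mode change means, here is a minimal C sketch using the host's <fenv.h> rather than the Amiga math libraries themselves; it only shows that the same division returns different neighbouring doubles under "toward zero" and "to nearest".

Code:
#include <fenv.h>
#include <stdio.h>

/* Compile without optimisation (or with -frounding-math on gcc) so the
   division is really performed at run time under the selected mode. */
int main(void)
{
    volatile double a = 1.0, b = 10.0;

    fesetround(FE_TOWARDZERO);
    double truncated = a / b;     /* 0.1 rounded toward zero */

    fesetround(FE_TONEAREST);
    double nearest = a / b;       /* 0.1 rounded to nearest, ties to even */

    /* 0.1 is not exactly representable in binary, so the two modes
       return neighbouring doubles that differ in the last bit. */
    printf("toward zero: %.20f\n", truncated);
    printf("to nearest : %.20f\n", nearest);
    printf("difference : %g\n", nearest - truncated);
    return 0;
}

The last bit of every intermediate result depends on the rounding mode, which is why switching a whole math library over is not a one-line change.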


Now, if we do this all again for extended precision, a mathieeeextbas.library would probably require three months or so, and then we would only have one of the multiple rounding modes the 68881/68882 support.
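As a rough illustration of what "extended precision" means, here is plain standard C on a host whose long double happens to map to the 80-bit extended format (this is not the mathieeeextbas.library API, just the same number format):

Code:
#include <float.h>
#include <stdio.h>

int main(void)
{
    /* 53 significand bits for IEEE double; 64 where "long double" is the
       80-bit extended format used by the 68881/68882 and x87 -- on other
       hosts LDBL_MANT_DIG may simply equal 53. */
    printf("double significand bits     : %d\n", DBL_MANT_DIG);
    printf("long double significand bits: %d\n", LDBL_MANT_DIG);

    long double third = 1.0L / 3.0L;
    printf("1/3 as long double: %.21Lg\n", third);
    printf("1/3 as double     : %.21g\n", (double)third);
    return 0;
}

Every entry point of such a library has to carry those extra eleven significand bits correctly through argument reduction and polynomial evaluation, which is a large part of why the estimate above is months rather than days.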


Then, we "only" need to emulate the "inexact" bits of the FPU, the exceptions, the comparison modes, including the "unordered" special cases, packed decimal and probably a couple of things I have forgotten.


Thus, if done from scratch, I believe the development time compared to the silly audio patch is probably closer to three orders of magnitude than two.


If we take an existing "softfloat" library (such as the one quoted above), then it "only" takes an exception handler. That is smoother, and probably takes "only" two months or so. While doable, the performance of the code will certainly be less than ideal, certainly slower than the (already optimized!) math.libraries written in assembly, and certainly completely unsuitable for demo applications.
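For readers wondering what "it only takes an exception handler" means: on a CPU without an FPU, every 68881/68882 instruction traps, and the handler has to decode the faulting opcode and forward the operands to a software routine. The following is a structural sketch only - the enum, the register file and soft_f64_add are made-up placeholders, not the real 68881 encoding and not any actual softfloat library's API:

Code:
#include <stdint.h>
#include <stdio.h>

typedef uint64_t f64;                /* IEEE double carried as raw bits */

/* Placeholder: a real softfloat routine works on the bit patterns
   without ever touching a hardware FPU. */
static f64 soft_f64_add(f64 a, f64 b)
{
    union { f64 u; double d; } x = { a }, y = { b }, r;
    r.d = x.d + y.d;                 /* stand-in for the bit-level add */
    return r.u;
}

/* Hypothetical, already-decoded trap information.  A real Line-F
   handler would extract this from the opcode words of the faulting
   instruction. */
enum fpu_op { FPU_ADD /* , FPU_SUB, FPU_MUL, ... */ };

struct trapped_insn {
    enum fpu_op op;
    unsigned    src, dst;            /* FP register numbers 0..7 */
};

static f64 fp_regs[8];               /* emulated FP0..FP7 */

static void emulate(const struct trapped_insn *i)
{
    switch (i->op) {
    case FPU_ADD:
        fp_regs[i->dst] = soft_f64_add(fp_regs[i->dst], fp_regs[i->src]);
        break;
    /* ...every other instruction, rounding modes, exception bits... */
    }
}

int main(void)
{
    union { double d; f64 u; } a = { 1.5 }, b = { 2.25 };
    fp_regs[0] = a.u;
    fp_regs[1] = b.u;

    struct trapped_insn fadd = { FPU_ADD, 1, 0 };   /* like FADD FP1,FP0 */
    emulate(&fadd);

    union { f64 u; double d; } res = { fp_regs[0] };
    printf("FP0 = %g\n", res.d);                    /* 3.75 */
    return 0;
}

Every trapped instruction pays for the exception entry, the decoding and the software arithmetic, which is why even a good implementation ends up dramatically slower than native FPU code.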


Frankly, the following is a much better solution: if you have a serious application, always select the version that works without an FPU - one is typically available. There is no solution for demos or speed-critical applications anyhow - a soft FPU will be slow, and a lot slower than using the math.libraries directly.


Thus, there are applications that can be made to work without an FPU, and for those we have workable solutions. The second class of applications will not work satisfactorily without an FPU anyhow, regardless of whether we have a soft FPU emulation or not.








Quote:
Originally Posted by gulliver View Post


Of course it will be slower. But it is better to have a slow working program than none at all. That is the point.
Not much of a point, really. Applications for which it is acceptable to run slowly will typically have a non-FPU version anyhow, and for the remaining ones running slowly makes no sense either.




Quote:
Originally Posted by gulliver View Post


Besides, many FPU-less 060s are clocked at 75 MHz, so the speed increase can certainly help ease the pain. Also, we are no longer stuck with just 90s accelerators; current ones are much faster.
We are not talking about a factor of 2 (assuming that some 68060s can even go up to 100 MHz), but more likely about a factor of 20 to 1000, depending on what exactly is emulated and how well the emulation is optimized. Doubling the clock or not does not make much of a difference between "slow" and "slower". For those applications where we do not have non-FPU versions, it is hardly an acceptable slowdown.



Quote:
Originally Posted by gulliver View Post


Increasing the list of working programs on Amigas is a plus for any OS.
I cannot really give any numbers, but I would assume that programs that are acceptable to run slowly, require an FPU, and have no non-FPU version are really in the minority.
 