Thread: 68k details
Old 10 November 2018, 14:32   #744
Registered User
Join Date: Nov 2006
Location: Stockholm, Sweden
Posts: 224
Originally Posted by litwr View Post
Thank you very much again for your so descriptive comments. Indeed 68k is at least 50% better than 8088. It has its advantages, x86 has its own. My point about relocatable code is clear enough: it is only a subset of non-relocatable code. You can't use some 68k addressing modes with it. You have to replace some instructions with slower and sometimes larger equivalents. The 8088, on the contrary, can use all its capabilities with any code limited in size to 64 KB (+192 KB for data) and get free relocatability for it.
Yes - however, on the 8088 you get the full convenience only up to 64 kB of data. As you go above 64 kB (not all data fits into a single 64 kB data segment) it gradually gets more and more complicated. Which segment will you place a particular data structure in? Should this group of data structures be in the same segment? Do I want to copy things using the string operations between data structures in different data segments? Between data structures in the same segment?
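To make the complication concrete, here is a small sketch of 8086/8088 real-mode address arithmetic (the helper names are mine, not from the post). A physical address is segment * 16 + offset, offsets wrap at 64 kB, and the same physical byte has many segment:offset aliases - which is exactly why pointers spanning more than one segment need "huge pointer" normalization in period C compilers.

```python
def physical(segment: int, offset: int) -> int:
    """Real-mode physical address: segment * 16 + offset, 20-bit bus."""
    return ((segment << 4) + (offset & 0xFFFF)) & 0xFFFFF

# Within one segment only 64 kB is reachable; a structure must fit:
assert physical(0x1000, 0x0000) == 0x10000
assert physical(0x1000, 0xFFFF) == 0x1FFFF   # last reachable byte

# The same physical byte has many segment:offset aliases, so naive
# pointer comparison across segments is meaningless:
assert physical(0x1001, 0xFFF0) == physical(0x1002, 0xFFE0)

def normalize(segment: int, offset: int) -> tuple[int, int]:
    """Canonical alias with offset < 16, as 'huge' pointer math does."""
    phys = physical(segment, offset)
    return phys >> 4, phys & 0xF

# After normalization the two aliases compare equal:
assert normalize(0x1001, 0xFFF0) == normalize(0x1002, 0xFFE0)
```

On a flat 68000 address space none of this bookkeeping exists: a 32-bit pointer compares and increments uniformly across the whole memory.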

Originally Posted by litwr View Post
Indeed 68000 is better for large programs than 8086. But such programs were rather rare for 80s, and 80386 provided the way for convenient coding without segmentation.
"such programs were rare for 80s"? Probably yes in the early-to-mid 80s. Toward the very end of the 80s, and in the 1990s, I disagree.

The 68000 provided a platform which enabled hardware creators to build OSes with significantly more flexibility than the 8088-based OSes. It was only with the arrival of Windows 95 that there was a common OS which supported running multiple applications and also allowed a single application to use all the available memory in the machine. For someone living in Sweden, using and developing on PCs during 1990-1995 was a jumbled mess of niche OSes (OS/2 & Linux) and weird DOS extenders (DOS/4GW, PMODE, code-it-yourself) until Windows 95 finally arrived with a widely available and consistent platform just like Amiga OS had provided. Now, why did it take so long? I doubt it was because it 'wasn't needed'. I think part of the reason for the long delay for common 32-bit application support on the PC was that the 8088 programming model did not offer a clean, gradual upgrade path to the IA32 model. Windows 3.x did the 286-protected-mode thing to support running multiple applications, and getting 32-bit support into Windows 95 required a huge overhaul of both the OS and the application programming models.

My estimation here is that the 8088 programming model served the actual computing needs from its release in 1978-or-so up until 1990.

The 68000 programming model, on the other hand, acting like a 32-bit processor and with a broader register set, served the actual computing needs from 1980-or-so up until 2000. The two big features missing from the original 68000 chip were floating-point computation and virtualized addressing - but introducing hardware support for these in later 68k chips did not require significant changes to the programming model itself. Around the year 2000 people would have needed SIMD instructions. Around the year 2005 people would have needed 64-bit address spaces. That shift to 64-bit addressing would have made the most sense by moving the entire processor architecture to 64 bits, and that 32-bit-to-64-bit shift for the 68000 architecture would have been similar in nature and scope to the IA32-to-x86-64 shift.

I think it is impressive for the 68000 architecture to have been relevant for the actual computing needs for 20+ years. I am a bit saddened that the business side of things made x86-based architectures win in the end.

Last edited by Kalms; 10 November 2018 at 14:45.