English Amiga Board


Old 08 November 2021, 18:21   #241
Thomas Richter
Registered User
 
Join Date: Jan 2019
Location: Germany
Posts: 3,216
Quote:
Originally Posted by meynaf View Post
But what codes ? A key press is just a character (usually).
And ANSI codes don't handle keys by position, do they ? How the heck can we then do "WASD" kind of key detection for movement ?
Actually, the codes that ANSI specifies (or the corresponding ECMA standard). These stem from the DEC VT100 series of terminals, such that CSI A (0x9b 0x41) = ESC [ A means "cursor up". These sequences are shared between VT terminals, the Amiga, and (as I read it) ANSI.sys. The Amiga incarnation of ANSI.sys is the "console.device", as it emits and interprets such sequences.


Unfortunately, the console.device only supports a subset of the ANSI sequences, does not support all the equivalences, and some sequences are not quite interpreted according to the standards. (ViNCEd can be switched to support the sequences according to the specs).


On *ix/Linux, it is "termcap" and "curses" that provide an interface towards the implementations of "ANSI sequences".
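To make this concrete, here is a minimal sketch of driving such a console purely through CSI/ESC sequences from standard C. It assumes only that whatever reads stdout interprets them (console.device, ANSI.sys, or any VT100-style terminal):

#include <stdio.h>

/* Clear the screen, position the cursor and print a bold string,
   using nothing but CSI sequences on stdout.  \033 is ESC (0x1b). */
int main(void)
{
    printf("\033[2J");                 /* erase the entire screen  */
    printf("\033[10;20H");             /* cursor to row 10, col 20 */
    printf("\033[1mHello\033[0m\n");   /* bold on, text, reset     */
    fflush(stdout);
    return 0;
}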


Quote:
Originally Posted by meynaf View Post
Strange, i always thought the screen's memory buffer was at a fixed address, something like $b8000. The reason why peecees were limited to 640kb.
It has a fixed window within which the VGA places some of its memory into the CPU address space, but that does not mean that you cannot choose a start address within it, and also move this window within the available VGA memory. VGA memory is traditionally also "segmented", and "paged".


Thus, it is "paged" in the sense that you can switch between 4 pages that either define one of the four bitplanes, or characters/attributes. But you can also select the address within the VGA memory window (text mode does not require a lot of space), but also select which of the (potentially large) VGA memory fits into the (at most) 128K window available for the VGA chipset within the CPU address space.


It's a big pile of mess. Later VGA chipsets allow "linear access", but the early ones don't. There is a reason why there is no P96 driver for the Retina as it only offers this insane "segmented memory model".
Thomas Richter is offline  
Old 08 November 2021, 19:23   #242
meynaf
son of 68k
 
meynaf's Avatar
 
Join Date: Nov 2007
Location: Lyon / France
Age: 51
Posts: 5,323
Quote:
Originally Posted by Thomas Richter View Post
Actually, the codes that ANSI specifies (or the corresponding ECMA standard). These stem from the DEC VT100 series of terminals, such that CSI A (0x9b 0x41) = ESC [ A means "cursor up". These sequences are shared between VT terminals, the Amiga, and (as I read it) ANSI.sys. The Amiga incarnation of ANSI.sys is the "console.device", as it emits and interprets such sequences.

Unfortunately, the console.device only supports a subset of the ANSI sequences, does not support all the equivalences, and some sequences are not quite interpreted according to the standards. (ViNCEd can be switched to support the sequences according to the specs).

On *ix/Linux, it is "termcap" and "curses" that provide an interface towards the implementations of "ANSI sequences".
That does not answer the point about positional keypress detection.
In addition, can that system detect a key up ?


Quote:
Originally Posted by Thomas Richter View Post
It has a fixed window within which the VGA places some of its memory into the CPU address space, but that does not mean that you cannot choose a start address within it, and also move this window within the available VGA memory. VGA memory is traditionally also "segmented", and "paged".

Thus, it is "paged" in the sense that you can switch between 4 pages that either define one of the four bitplanes, or characters/attributes. But you can also select the address within the VGA memory window (text mode does not require a lot of space), but also select which of the (potentially large) VGA memory fits into the (at most) 128K window available for the VGA chipset within the CPU address space.

It's a big pile of mess. Later VGA chipsets allow "linear access", but the early ones don't. There is a reason why there is no P96 driver for the Retina as it only offers this insane "segmented memory model".
So this came with VGA, right ? I mean, it wasn't there from day one ?
meynaf is offline  
Old 08 November 2021, 19:50   #243
Thomas Richter
Registered User
 
Join Date: Jan 2019
Location: Germany
Posts: 3,216
Quote:
Originally Posted by meynaf View Post
That does not answer the point about positional keypress detection.
In addition, can that system detect a key up ?
That's a different level than the one ANSI.sys works on. Frankly, I do not know, but I would guess that there is a BIOS function for it.




Quote:
Originally Posted by meynaf View Post
So this came with VGA, right ? I mean, it wasn't there from day one ?
No, actually VESA. VGA modes are still limited by the memory window. VESA offered a BIOS interface to set the pages, but also offered a linear window. All the VESA modes were vendor-specific extensions on top of the legacy VGA modes, with various implementations. VGA is planar, except for the "linked" nibble modes, which work by a hardware hack that interleaves pages. All the "real" chunky, hi-color and true-color modes are vendor-specific extensions that were (partially) accessed through the VESA BIOS extensions.
Thomas Richter is offline  
Old 08 November 2021, 20:29   #244
paraj
Registered User
 
paraj's Avatar
 
Join Date: Feb 2017
Location: Denmark
Posts: 1,099
You could not in any way count on ANSI.SYS being loaded (because it took up precious lowmem), and it only works if the software is well-behaved and uses the proper DOS functions for output.

BIOS (and DOS) support for keyboard I/O was very basic, and AFAIR you couldn't get per-key up/down events that way (the BIOS interface is just INT 16h, and most of those extended functions aren't available for general use).

Real software (other than command line utilities) accessed the HW directly when it needed anything beyond the very basic stuff. See e.g. the keyboard handling of Wolfenstein: https://github.com/id-Software/wolf3...OLFSRC/ID_IN.C (mouse input is handled using a more proper interface (INT 33h), though)
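For reference, the usual technique looked roughly like this - not the Wolfenstein code itself, just a minimal sketch assuming a Borland-style real-mode compiler (dos.h, interrupt functions): hook INT 9 and track per-scancode up/down state from port 60h.

#include <dos.h>

static volatile char key_down[128];      /* one up/down flag per scancode */
static void interrupt (*old_int9)(void);

static void interrupt new_int9(void)
{
    unsigned char sc = inportb(0x60);    /* raw scancode from the keyboard */
    if (sc & 0x80)
        key_down[sc & 0x7F] = 0;         /* top bit set = key released */
    else
        key_down[sc & 0x7F] = 1;         /* otherwise the key went down */
    outportb(0x20, 0x20);                /* acknowledge IRQ 1 at the PIC */
}

void keyboard_install(void)
{
    old_int9 = getvect(9);
    setvect(9, new_int9);
}

void keyboard_remove(void)
{
    setvect(9, old_int9);
}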
paraj is offline  
Old 10 November 2021, 19:28   #245
kerravon
Registered User
 
Join Date: Mar 2021
Location: Sydney, Australia
Posts: 184
Quote:
Originally Posted by meynaf View Post
But what codes ? A key press is just a character (usually).
And ANSI codes don't handle keys by position, do they ? How the heck can we then do "WASD" kind of key detection for movement ?
I would think using the W, A, S, D keys for cursor movement would be something that the application is not actually aware of. My interest is clean applications. It is a secondary concern for the OS to be clean.

Quote:
Oh yeah ? Then you have to tell what each one of them is, in detail - so that we're sure we're speaking about the same thing.
The segmentation models, which (as far as I can tell) are a facet of computer science, or even maths, rather than the 8086, are:

tiny - code and data segments are the same and unchanged. this is actually what normal systems like the Amiga use as the only available option.

small - code and data segments are fixed, but different, giving you up to 64k for each address space

medium - code segment register can change (so needs to be reloaded all the time), allowing you to have more than 64k of code. data restricted to 64k.

compact - code segment register doesn't change, so code restricted to 64k, but data register is constantly reloaded allowing more than 64k of data to be addressed.

large - both code and data segment register are constantly reloaded, allowing access to more than 64k of both code and data

huge - same as large, but when data (not code) pointers are manipulated, care is taken to ensure that if a 64k boundary is crossed, the segment register is adjusted, and in either case, the offset is normalized to be as small as possible. This has a large performance overhead and is rarely used, and a popular C compiler, Turbo C, doesn't even support it properly, merely allowing static data to exceed 64k in this model.

This is why it is not necessary (as far as I can tell, from my experience in writing an OS), to plaster your code with near/far pointers. All you need to do is select a memory model appropriate to your requirements. Unfortunately when I was writing PDOS/86, and mostly sharing a code base with PDOS/386 (it was only later with PDOS-generic that I realized the code base could be shared beyond just x86), I only understood up to "large", but actually the job for the OS (but not applications) required "huge".
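To illustrate what "huge" normalization actually costs, here is the 8086 real-mode address arithmetic written out as plain C integer arithmetic. This is illustrative only; in a real huge-model program the compiler emits the equivalent of normalize() behind the scenes whenever a pointer is adjusted.

/* A real-mode physical address is segment*16 + offset.  Normalizing
   means folding whole 16-byte paragraphs into the segment so the
   offset stays small and 64k boundaries can be crossed safely. */
unsigned long physical_address(unsigned seg, unsigned off)
{
    return ((unsigned long)seg << 4) + off;
}

void normalize(unsigned *seg, unsigned *off)
{
    *seg += *off >> 4;    /* move whole paragraphs into the segment */
    *off &= 0x000F;       /* keep only the remainder in the offset  */
}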

Quote:
But even chars aren't guaranteed to have fixed size. And inefficient for handling larger datatypes anyway.
chars are guaranteed to have at least 8 bits, and so long as you don't attempt to put more than 8 bits of data into each char, there shouldn't be a problem. I can't think of one, anyway. Yes, it's inefficient, and in the unlikely event that you run a profiler over your code and discover that the bottleneck of your application is in this code, you could go to the effort of having some macros which default to the inefficient manipulation, but can use direct access if you are on the most important platform.
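As a minimal sketch of what that means in practice: read octets from a binary stream and mask to the low 8 bits, so the same code behaves identically whether char happens to be 8, 9 or 16 bits wide.

#include <stdio.h>

/* Read one octet portably: any extra bits a wide char could carry
   are discarded by the mask.  Returns -1 at end of file. */
int read_octet(FILE *f)
{
    int c = getc(f);
    if (c == EOF)
        return -1;
    return c & 0xFF;
}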

Quote:
Like something more complete feature wise. Say, bitmap graphics, sounds, multitasking, keyboard/mouse/joystick handling, GUI, timers/interrupts, and whatever i forgot about. My own system framework even contains functions for handling dynamic strings/arrays...
Understood. I am hoping that commercial vendors will see a market opportunity for something that satisfies an apparent need as above, and rather than write it from scratch, they can start with the PDOS code base, lowering their development costs, and passing those savings on to the consumer (I believe in capitalism).

Quote:
This is not new, it's called lock/unlock of frame buffer...
But for text-only it seems overkill.
Why do you think it is overkill? It's the proper way to provide a snappy display of a screen, isn't it? If the reason people refuse to use ANSI codes is the failure to be snappy, then fine, lock/unlock a frame buffer to give others an opportunity to convert to ANSI codes.

Quote:
Besides, it is not a proper abstraction for text output (the host would have to do screen format translation if it's not an old PC with text modes).
Not sure what constitutes "proper". It's just one more valid paradigm, isn't it? I'm looking for a solution to get MSDOS programs written in a way that satisfies the main PC market, yet opens up the ability to use the Amiga as an alternative.

Quote:
I rather see things this way : a function which takes x,y position and string as input, shows the text at the given place, and then returns updated x,y position. Or that function would take special value as position to resume from previous one.
Understood. I'm not ready to discuss the world's best screen abstraction paradigm. All I'm looking for is a paradigm acceptable to the PC, where people felt the need to directly manipulate the hardware, for unspecified performance reasons (I believe), yet has necessary "metadata" to allow interpretation on alternate systems.
kerravon is offline  
Old 10 November 2021, 20:18   #246
kerravon
Registered User
 
Join Date: Mar 2021
Location: Sydney, Australia
Posts: 184
Quote:
Originally Posted by Thomas Richter View Post
It's a big pile of mess. Later VGA chipsets allow "linear access", but the early ones don't.
What does this actually mean? PDOS/386 comes with a "vga driver" that directly accesses 0xb8000. I wouldn't have thought it was technically possible to prevent an 80386 from doing that and force it to be accessed via segment:offset real mode instead.
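For what it's worth, in a flat 32-bit environment such a driver needs very little. A minimal sketch (not the actual PDOS/386 code) of poking one 80x25 colour text cell at 0xb8000:

/* Assumes a flat 32-bit address space with the colour text buffer
   mapped at 0xB8000.  Each cell is two bytes: character, attribute. */
static volatile unsigned char * const text = (volatile unsigned char *)0xB8000;

void put_cell(int row, int col, char ch, unsigned char attr)
{
    int offset = (row * 80 + col) * 2;   /* 80 columns, 2 bytes per cell */
    text[offset]     = (unsigned char)ch;
    text[offset + 1] = attr;
}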
kerravon is offline  
Old 10 November 2021, 20:22   #247
kerravon
Registered User
 
Join Date: Mar 2021
Location: Sydney, Australia
Posts: 184
Quote:
Originally Posted by paraj View Post
You could not in any way count on ANSI.SYS being loaded (because it took up precious lowmem), and it only works if the software is well-behaved and uses the proper DOS functions for output.
If the barrier to having portable source code is ANSI.SYS not being always-loaded, then how about adding a flag to MSDOS executables to say "auto load ANSI support"? Perhaps this is the magic bullet that was missing from MSDOS, and if we could have our time again (holding hardware constant), all we need is a governmental mandate that ANSI support be required to allow competition.
kerravon is offline  
Old 10 November 2021, 20:41   #248
meynaf
son of 68k
 
meynaf's Avatar
 
Join Date: Nov 2007
Location: Lyon / France
Age: 51
Posts: 5,323
Quote:
Originally Posted by kerravon View Post
I would think using the W, A, S, D keys for cursor movement would be something that the application is not actually aware of. My interest is clean applications. It is a secondary concern for the OS to be clean.
It is not for "cursor" movement, but for player movement in some game. Or for whatever, it does not matter.
A program should have the ability to detect key presses. Or key releases. Or several keys simultaneously.


Quote:
Originally Posted by kerravon View Post
The segmentation models, which (as far as I can tell) are a facet of computer science, or even maths, rather than the 8086, are:
Nope. They're just x86 stuff - and largely obsolete stuff in addition.


Quote:
Originally Posted by kerravon View Post
tiny - code and data segments are the same and unchanged. this is actually what normal systems like the Amiga use as the only available option.
Amiga has no such thing as "code and data segments", so, no.
Compilers can be set up for a "small data" model, where a register (typically a4 or a5) is used to access the bss section. That's about all.


Quote:
Originally Posted by kerravon View Post
small - code and data segments are fixed, but different, giving you up to 64k for each address space

medium - code segment register can change (so needs to be reloaded all the time), allowing you to have more than 64k of code. data restricted to 64k.

compact - code segment register doesn't change, so code restricted to 64k, but data register is constantly reloaded allowing more than 64k of data to be addressed.

large - both code and data segment register are constantly reloaded, allowing access to more than 64k of both code and data

huge - same as large, but when data (not code) pointers are manipulated, care is taken to ensure that if a 64k boundary is crossed, the segment register is adjusted, and in either case, the offset is normalized to be as small as possible. This has a large performance overhead and is rarely used, and a popular C compiler, Turbo C, doesn't even support it properly, merely allowing static data to exceed 64k in this model.

This is why it is not necessary (as far as I can tell, from my experience in writing an OS), to plaster your code with near/far pointers. All you need to do is select a memory model appropriate to your requirements. Unfortunately when I was writing PDOS/86, and mostly sharing a code base with PDOS/386 (it was only later with PDOS-generic that I realized the code base could be shared beyond just x86), I only understood up to "large", but actually the job for the OS (but not applications) required "huge".
Well i don't know where you fetched all this, but anyway it is clear that segmentation has an impact, whatever efforts are made to try to hide it away.
Compare this to 68k where you can access any place, any time, and still use things similar to short pointers (relative accesses).
If you want your system to work on top of true MSDOS, you should perhaps consider going to flat mode.


Quote:
Originally Posted by kerravon View Post
chars are guaranteed to have at least 8 bits, and so long as you don't attempt to put more than 8 bits of data into each char, there shouldn't be a problem. I can't think of one, anyway. Yes, it's inefficient, and in the unlikely event that you run a profiler over your code and discover that the bottleneck of your application is in this code, you could go to the effort of having some macros which default to the inefficient manipulation, but can use direct access if you are on the most important platform.
That's not the point. It makes char type unsuitable for reading bytes from a data stream. There is no "read byte" primitive in C.


Quote:
Originally Posted by kerravon View Post
Understood. I am hoping that commercial vendors will see a market opportunity for something that satisfies an apparent need as above, and rather than write it from scratch, they can start with the PDOS code base, lowering their development costs, and passing those savings on to the consumer (I believe in capitalism).
If you want your system to serve as a basis for something more advanced, you have to design it in a way that will allow this. But apparently you are currently closing the door to even simple graphics...


Quote:
Originally Posted by kerravon View Post
Why do you think it is overkill? It's the proper way to provide a snappy display of a screen, isn't it? If the reason people refuse to use ANSI codes is the failure to be snappy, then fine, lock/unlock a frame buffer to give others an opportunity to convert to ANSI codes.
Text-only display is fast enough, especially on a text screen, so yes, it's overkill.
Besides, flipping frames requires either recomputing the whole scene or copying data from the previous scene. So you will lose performance for little benefit - if any at all.


Quote:
Originally Posted by kerravon View Post
Not sure what constitutes "proper". It's just one more valid paradigm, isn't it? I'm looking for a solution to get MSDOS programs written in a way that satisfies the main PC market, yet opens up the ability to use the Amiga as an alternative.
A proper abstraction is something that hides the implementation details and therefore allows different implementations to be used.
MSDOS is PC hardware oriented.


Quote:
Originally Posted by kerravon View Post
Understood. I'm not ready to discuss the world's best screen abstraction paradigm. All I'm looking for is a paradigm acceptable to the PC, where people felt the need to directly manipulate the hardware, for unspecified performance reasons (I believe), yet has necessary "metadata" to allow interpretation on alternate systems.
Yes, you are looking for something specific to the PC, that can with some effort and some workarounds also work on other systems, but guess what, the Amiga already has PC-Task for this. We don't need an MSDOS port.

Frankly, fflush(stdout) for screen handling, really ? What if someone redirects output to some file ? Yes, with simple ">file" command. Oh yeah, now the user will see nothing and wonder what happens !
Graphical operations and file operations are two very different things and it is IMO a horrible design mistake to have them use the same API. Or maybe it's just me, because i can do screen setup with just 4 lines of asm using my system framework.
meynaf is offline  
Old 10 November 2021, 20:49   #249
paraj
Registered User
 
paraj's Avatar
 
Join Date: Feb 2017
Location: Denmark
Posts: 1,099
Quote:
Originally Posted by kerravon View Post
If the barrier to having portable source code is ANSI.SYS not being always-loaded, then how about adding a flag to MSDOS executables to say "auto load ANSI support"? Perhaps this is the magic bullet that was missing from MSDOS, and if we could have our time again (holding hardware constant), all we need is a governmental mandate that ANSI support be required to allow competition.

I was just trying to add some background to the discussion (ANSI.SYS wasn't really used/relied upon, interactive programs didn't use BIOS/DOS for keypresses).


I'm still not 100% sure what your goal is, but if it's to get recompiled well-behaved DOS programs to work, I'd ignore ANSI stuff if I were you.
paraj is offline  
Old 13 November 2021, 01:52   #250
kerravon
Registered User
 
Join Date: Mar 2021
Location: Sydney, Australia
Posts: 184
Quote:
Originally Posted by meynaf View Post
It is not for "cursor" movement, but for player movement in some game. Or for whatever, it does not matter.
A program should have the ability to detect key presses. Or key releases. Or several keys simultaneously.
Again, I'm not trying to solve all the world's problems simultaneously. I'm already happy with C90 programs, I'm pretty happy with the MSDOS API for disk management, and the next step up from that is fullscreen character-based applications (I'm not sure if this is the correct term). And in fact I'd even start with monochrome before moving on to another class - color.

Quote:
Nope. They're just x86 stuff - and largely obsolete stuff in addition.
Nope. It's a generic problem in computer science. If you have programs that are designed to use 64k, and you wish to run them on a new system that has more than 64k of memory, plus allow more programs to be written that take advantage of more than 64k, how do you do it? Without bumping up the register size. Segmentation is the correct technical solution to this problem. If you're happy to just abandon the old software base, then you don't need to be concerned about this, but someone needs to bite the bullet and allow upward compatibility. As for being obsolete - it certainly wasn't obsolete in the time period (the 80s) that I am interested in.

Quote:
Amiga has no such thing as "code and data segments", so, no.
Nor do tiny mode programs. That's the model used by programs before the segmented machine became available. Restricted to 64k, code and data pointers the same. aka "flat".

Quote:
Well i don't know where you fetched all this,
I explained it in my own words.

Quote:
but anyway it is clear that segmentation has an impact, whatever efforts are made to try to hide it away.
At the source code level it can all be hidden away, depending on what you are doing. Yes, it had an impact, but the impact didn't need to be large. printf("hello world\n"); works the same at the source code level in all memory models. Start from that and keep going.

Quote:
Compare this to 68k where you can access any place, any time, and still use things similar to short pointers (relative accesses).
Yes, a brand new architecture. No upward compatibility from anything. Apples and oranges.

Quote:
If you want your system to work on top of true MSDOS, you should perhaps consider going to flat mode.
I'm talking about MSDOS running on an 8086. There is no flat mode, unless you count "tiny".

Quote:
That's not the point. It makes char type unsuitable for reading bytes from a data stream. There is no "read byte" primitive in C.
Can you elaborate on the situation where this is problematic? Let's say you are on a system with 9-bit chars and you wish to store a series of bytes. So long as the high bit is always 0, what's the problem?

Quote:
If you want your system to serve as a basis for something more advanced, you have to design it in a way that will allow this. But apparently you are currently closing the door to even simple graphics...
The disk management and memory management code doesn't get affected by graphics. That can serve as a basis. In addition, one of the things I would like is for PDOS to be able to be used to build a completely new system (even the Amiga itself). For that, you need an operational compiler, editor etc. Doesn't need to be grand.

Quote:
Text-only display is fast enough, especially on a text screen, so yes, it's overkill.
I'm not sure that is true on a PC XT.

Quote:
Besides, flipping frames requires either recomputing the whole scene or copying data from the previous scene. So you will lose performance for little benefit - if any at all.
In a situation where you don't want to display the screen until it has been completely drawn, there is no overhead. You do all your screen manipulations to a buffer that is not actually in use. When you're ready, you flip to that newly-constructed buffer, zero overhead. Then there is a delay while you copy the entire buffer from one frame to another. But that delay will be happening at a time when the monkey at the keyboard is deciding which key to press next, so it shouldn't be an issue.
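A sketch of that scheme, assuming a flat address space where the text buffer is reachable at 0xb8000 (under real-mode MSDOS the destination would be a far pointer instead):

#include <string.h>

#define ROWS 25
#define COLS 80

static unsigned char back[ROWS * COLS * 2];   /* off-screen text buffer */

/* All drawing goes to the off-screen buffer only. */
void draw_cell(int row, int col, char ch, unsigned char attr)
{
    back[(row * COLS + col) * 2]     = (unsigned char)ch;
    back[(row * COLS + col) * 2 + 1] = attr;
}

/* The "flip": one bulk copy of 25*80*2 bytes to the live buffer. */
void flip(void)
{
    memcpy((void *)0xB8000, back, sizeof back);
}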

Quote:
A proper abstraction is something that hides the implementation details and therefore allows different implementations to be used.
MSDOS is PC hardware oriented.
I'm not convinced that this "PC hardware" was necessary, or fails to be a satisfactory abstraction in and of itself when it comes time to port to the Amiga.

Quote:
Yes, you are looking for something specific to the PC, that can with some effort and some workarounds also work on other systems, but guess what, the Amiga already has PC-Task for this. We don't need an MSDOS port.
I did a search for "PC-Task" and that seems to be an x86 emulator. I'm not after that, I'm after native 680x0 applications.

Quote:
Frankly, fflush(stdout) for screen handling, really ? What if someone redirects output to some file ? Yes, with simple ">file" command. Oh yeah, now the user will see nothing and wonder what happens !
Redirecting a fullscreen application to a file doesn't make much sense, and isn't a problem I am trying to solve.

Quote:
Graphical operations and file operations are two very different things and it is IMO a horrible design mistake to have them use the same API. Or maybe it's just me, because i can do screen setup with just 4 lines of asm using my system framework.
I think this is a good point. But first bring it back to fullscreen text rather than graphics. Does it make sense to use file operations for this? I don't know. Note that recently I proved that you could write a comms program using fopen("com1", "r+b"). The Amiga would be "SER" instead of "com1" I believe. You don't need to create a new API, although I didn't deal with timeouts and just block forever.
kerravon is offline  
Old 13 November 2021, 01:56   #251
kerravon
Registered User
 
Join Date: Mar 2021
Location: Sydney, Australia
Posts: 184
Quote:
Originally Posted by paraj View Post
I'm still not 100% sure what your goal is, but if it's to get recompiled well-behaved DOS programs to work,
Yes, I'm only interested in well-behaved DOS programs, but also I'm willing to propose extensions to DOS so that the well-behaved programs can use those extensions too.

Quote:
I'd ignore ANSI stuff if I were you.
Why? It seems to be the correct way to draw screens to me. Are you worried about performance on an XT?
kerravon is offline  
Old 13 November 2021, 09:10   #252
Thomas Richter
Registered User
 
Join Date: Jan 2019
Location: Germany
Posts: 3,216
Quote:
Originally Posted by kerravon View Post
What does this actually mean? PDOS/386 comes with a "vga driver" that directly accesses 0xb8000. I wouldn't have thought it was technically possible to prevent an 80386 from doing that and force it to be accessed via segment:offset real mode instead.

It means that the VGA memory window became too limiting quite early, namely as soon as high-color and true-color screens and larger screen resolutions became the norm. Thus, VGA chipset vendors implemented "segmentation", by which you made a particular range of VGA addresses available in the VGA memory window. Thus, if you wanted to access a pixel, you first had to tell the chipset which part of the framebuffer to make visible in the VGA window, and then poke into the right place of the VGA window.


It is like the 8086 segment register nonsense, just that the segment registers are here represented by registers of the VGA chipset you had to fill to make the right part of its frame buffer accessible.


Later chipsets allowed linear addressing, but then of course left the 640K memory model of the PCs behind, and they allowed continuous access to the entire frame buffer without having to switch the window every time. P96 only supports the latter type of chipsets, though EGS also supports some of the "segmented memory models".
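In code, the "segmented" access looks roughly like the sketch below. The bank register and its port are chipset-specific, so set_bank() here is a hypothetical placeholder for whatever the particular card (or the VESA BIOS window function) requires; a 64K window at 0xA0000 and a flat address space are assumed.

#define WINDOW      ((volatile unsigned char *)0xA0000)
#define WINDOW_SIZE 0x10000UL                 /* only 64K visible at a time */

extern void set_bank(unsigned bank);          /* hypothetical, chipset-specific */

void put_pixel_8bpp(unsigned long offset, unsigned char colour)
{
    set_bank((unsigned)(offset / WINDOW_SIZE));   /* select which 64K bank */
    WINDOW[offset % WINDOW_SIZE] = colour;        /* poke within the window */
}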
Thomas Richter is offline  
Old 13 November 2021, 13:20   #253
meynaf
son of 68k
 
meynaf's Avatar
 
Join Date: Nov 2007
Location: Lyon / France
Age: 51
Posts: 5,323
Quote:
Originally Posted by kerravon View Post
Again, I'm not trying to solve all the world's problems simultaneously. I'm already happy with C90 programs, I'm pretty happy with the MSDOS API for disk management, and the next step up from that is fullscreen character-based applications (I'm not sure if this is the correct term). And in fact I'd even start with monochrome before moving on to another class - color.
The "start small, then grow" method usually doesn't work.


Quote:
Originally Posted by kerravon View Post
Nope. It's a generic problem in computer science. If you have programs that are designed to use 64k, and you wish to run them on a new system that has more than 64k of memory, plus allow more programs to be written that take advantage of more than 64k, how do you do it? Without bumping up the register size. Segmentation is the correct technical solution to this problem. If you're happy to just abandon the old software base, then you don't need to be concerned about this, but someone needs to bite the bullet and allow upward compatibility. As for being obsolete - it certainly wasn't obsolete in the time period (the 80s) that I am interested in.
Consider it as a generic problem if you want, but x86-style segmentation hasn't been used anywhere else that i know of, and this only shows 8086 was a poor design in the first place.
68000 OTOH was a more forward-looking design.


Quote:
Originally Posted by kerravon View Post
Nor do tiny mode programs. That's the model used by programs before the segmented machine became available. Restricted to 64k, code and data pointers the same. aka "flat".
But this is not "tiny". You could have 64-bit flat space, which is everything but tiny.


Quote:
Originally Posted by kerravon View Post
I explained it in my own words.
Still doesn't say where it came from.
Anyway, let's see it the other way : why should the user have to choose a memory model at all ?


Quote:
Originally Posted by kerravon View Post
At the source code level it can all be hidden away, depending on what you are doing. Yes, it had an impact, but the impact didn't need to be large. printf("hello world\n"); works the same at the source code level in all memory models. Start from that and keep going.
Same reply as above : starting with something minimalistic in the hope of having it grow later won't work. You'll constantly hit limits and require redesigns.


Quote:
Originally Posted by kerravon View Post
Yes, a brand new architecture. No upward compatibility from anything. Apples and oranges.
Frankly, the desire to be source compatible with the 8080 was a terrible mistake.


Quote:
Originally Posted by kerravon View Post
I'm talking about MSDOS running on an 8086. There is no flat mode, unless you count "tiny".
Whoa ! That's what i call "locked in the past"
So working on 8086 is wanted now ? Who will do that ? There is no interest.


Quote:
Originally Posted by kerravon View Post
Can you elaborate on the situation where this is problematic? Let's say you are on a system with 9-bit chars and you wish to store a series of bytes. So long as the high bit is always 0, what's the problem?
The problem is that you don't have a pointer to 8-bit data because your smallest one ("char") is more than that already, and therefore you can't even define a pointer type !


Quote:
Originally Posted by kerravon View Post
The disk management and memory management code doesn't get affected by graphics.
Such assumptions can be dangerous...
Imagine, one day you'll have to allocate from graphics memory (chip mem on the Amiga, which is similar in concept to GPU mem). Oh, sorry, allocation primitives just can't do that.


Quote:
Originally Posted by kerravon View Post
That can serve as a basis. In addition, one of the things I would like is for PDOS to be able to be used to build a completely new system (even the Amiga itself). For that, you need an operational compiler, editor etc. Doesn't need to be grand.
Well, MSDOS isn't exactly a completely new system.


Quote:
Originally Posted by kerravon View Post
I'm not sure that is true on a PC XT.
Then better drop the idea. Extra screen copies will take too much time.
Frame flipping is there to avoid display tearing, but there is no such thing with pure text displays.


Quote:
Originally Posted by kerravon View Post
In a situation where you don't want to display the screen until it has been completely drawn, there is no overhead. You do all your screen manipulations to a buffer that is not actually in use. When you're ready, you flip to that newly-constructed buffer, zero overhead. Then there is a delay while you copy the entire buffer from one frame to another. But that delay will be happening at a time when the monkey at the keyboard is deciding which key to press next, so it shouldn't be an issue.
Are there situations in real life where you don't want to display a text-only screen before it is completely drawn ?
I don't think so. You are designing for situations that never happen.


Quote:
Originally Posted by kerravon View Post
I'm not convinced that this "PC hardware" was necessary, or fails to be a satisfactory abstraction in and of itself when it comes time to port to the Amiga.
If you limit yourself to FAT-only disk handling APIs with just a little memory allocation on top of them, you're probably right. But it will backfire as soon as you try to go beyond these very limited features (which is perhaps something you will never do, though).


Quote:
Originally Posted by kerravon View Post
I did a search for "PC-Task" and that seems to be an x86 emulator. I'm not after that, I'm after native 680x0 applications.
I wrote we don't need an MSDOS port, so it's not about what you're after, but what 680x0 users are after. How can your project possibly interest them ?


Quote:
Originally Posted by kerravon View Post
Redirecting a fullscreen application to a file doesn't make much sense, and isn't a problem I am trying to solve.
Of course it doesn't make any sense, that was the point. You are allowing situations that make no sense.


Quote:
Originally Posted by kerravon View Post
I think this is a good point. But first bring it back to fullscreen text rather than graphics. Does it make sense to use file operations for this? I don't know. Note that recently I proved that you could write a comms program using fopen("com1", "r+b"). The Amiga would be "SER" instead of "com1" I believe. You don't need to create a new API, although I didn't deal with timeouts and just block forever.
There are a lot of things you can do from file APIs; however, that does not imply you don't have to provide the normal ways of doing them too.
Is it a good idea to use file i/o for fullscreen text ? That depends what your design goals are -- but if performance is among them, i have to tell you that direct character poking on screen will always be faster than a file API that has to parse ansi codes.

That said, i fail to see any interest in supporting fullscreen text-only programs at all. We have had GUIs for several decades now.
meynaf is offline  
Old 14 November 2021, 08:11   #254
kerravon
Registered User
 
Join Date: Mar 2021
Location: Sydney, Australia
Posts: 184
Ok, first of all, thanks for helping me try to understand my own goals. Unfortunately I seem to have been conflating two apparently distinct goals.

1. I want a complete development system written in C90. ie a C compiler written in C, a C library written in C, an editor written in C, a file transfer program written in C that drives a serial port and an OS written in C. PDOS-generic is my flagship OS for this. I am using a modified GCC 3.2.3 as the C compiler, which requires something like 20 MB to recompile itself, so this is not something for the 8086. I'm instead looking for something like a 32 MiB system which based on historical memory prices is something that became viable around 1992. With this development system you have everything you need to construct a "perfect" replacement OS, with graphics support etc. But this basic system only needs a monochrome text monitor to run micro-emacs. My interest basically stops here, so I want to clarify what standards are required to get here.

2. After the MSDOS 2.0 API was locked in in 1983 and the migration process from effectively 8080 CP/M to 8086 MSDOS began, I want programming practices put in place such that when the superior Amiga 1000 was released in 1985, a simple recompile would mean that all C90-compliant programs would work (this is easy), programs that interacted with FAT directories via the MSDOS API would also work, and, if possible, fullscreen applications would also work.

For the last bit of that, I thought of another possibility. You could do fopen("CON:", "r+b") which accesses (via fseek() and putc()) a 25*80*2 byte "file" which is actually simply directly accessing memory at 0xb8000 (PC implementation only). This would then fit the "everything is a file" paradigm, which C seems to be based on, and I would like to drive to its limit.
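As a sketch of how an application would use that proposed device (this is the proposal being described, not an existing API), with the same character-then-attribute cell layout as the PC text buffer:

#include <stdio.h>

/* Write one character cell through the proposed seekable "CON:" screen. */
int put_char_at(FILE *screen, int row, int col, int ch, int attr)
{
    long offset = ((long)row * 80 + col) * 2;
    if (fseek(screen, offset, SEEK_SET) != 0)
        return -1;
    putc(ch, screen);       /* character byte  */
    putc(attr, screen);     /* attribute byte  */
    return 0;
}

/* usage: FILE *scr = fopen("CON:", "r+b");
          if (scr != NULL) put_char_at(scr, 10, 20, 'A', 0x07); */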

Quote:
Originally Posted by meynaf View Post
Consider it as a generic problem if you want, but x86-style segmentation hasn't been used anywhere else that i know of, and this only shows 8086 was a poor design in the first place.
It wasn't a poor design, it was the exact correct technical solution to allow 16-bit applications to access more than 64k of memory. It could have been repeated when we had 32-bit applications running on machines with 8 GiB of memory, but instead CPUs evolved to be able to cope with different modes.

Quote:
But this is not "tiny". You could have 64-bit flat space, which is everything but tiny.
If, as above, you had 32-bit applications running on an 8 GiB machine, then a "tiny" mode application could potentially be occupying the memory range 3 GiB to 7 GiB and be considered "tiny".

Quote:
Still doesn't say where it came from.
Anyway, let's see it the other way : why should the user have to choose a memory model at all ?
It is the programmer, not the user, who chooses a memory model, and a standard C90 program works, completely unchanged - no need for "near" and "far" pointers.

Quote:
Whoa ! That's what i call "locked in the past"
So working on 8086 is wanted now ? Who will do that ? There is no interest.
My interest is in understanding computer science thoroughly so that if I am transported back in time to 1983, working on an MSDOS system, I know what to advise programmers.

Quote:
The problem is that you don't have a pointer to 8-bit data because your smallest one ("char") is more than that already, and therefore you can't even define a pointer type !
Sorry, I don't understand. If you have 9-bit chars and you go:

char c;
char *x = &c;

*x = 0xab;

you have successfully defined a pointer suitable for manipulating a byte, and then manipulated it.

Quote:
Then better drop the idea. Extra screen copies will take too much time.
I doubt that copying 25*80*2 bytes from one area of memory to another using memcpy() takes a long time, even on a 4.77 MHz 8086. And it will only take place between user interactions. The memcpy() needs to be faster than a monkey pressing a key for the monkey to even be able to detect it.

Quote:
Frame flipping is there to avoid display tearing, but there is no such thing with pure text displays.
The problem as I see it is this: if you have a fullscreen text application and you have drawn screen A, then because of the complicated ANSI controls needed to replace screen A with screen B (which may include a clear screen operation), the user may be able to watch the screen being drawn over, say, 0.3 seconds, and that may look crap, and that's why people preferred to directly manipulate the memory at 0xb8000. But if the transition from screen A to screen B also involves some application logic, perhaps a disk access, then that entire 0.3 seconds will simply be added to the application pause time, and the screen will appear to have been updated instantly.

Quote:
Are there situations in real life where you don't want to display a text-only screen before it is completely drawn ?
I don't think so. You are designing for situations that never happen.
I would like to see my screen replaced immediately rather than observe the redrawing process. Note that this is only for fullscreen applications which would be signified by setvbuf(stdout, fully-buffered).

Quote:
I wrote we don't need an MSDOS port, so it's not about what you're after, but what 680x0 users are after. How can your project possibly interest them ?
Time machine to take us back to 1983 sold separately. :-)

Quote:
That depends what your design goals are -- but if performance is among them, i have to tell you that direct character poking on screen will always be faster than a file API that has to parse ansi codes.
Yes, but is it really significant that you would tie your code down to a particular bit of hardware? I guess we would need some hard numbers. How many milliseconds can be saved by bypassing the ANSI stream? I can see that it is a bit odd to construct an ANSI data stream when you know it then needs to be interpreted on the same machine. Maybe it should be hidden away by "curses"?

Quote:
That said, i fail to see any interest in supporting fullscreen text-only programs at all. We have had GUIs for several decades now.
I do all my work from the command line and in text mode. I have a mobile phone, but it's not a smartphone. I have an interest in how blind and deafblind people use computers, and also whether there is a need for morse code. I'm angling to get my BBS operational again (but using the internet instead of phone line) and retreating behind that.
kerravon is offline  
Old 14 November 2021, 11:35   #255
Thomas Richter
Registered User
 
Join Date: Jan 2019
Location: Germany
Posts: 3,216
Quote:
Originally Posted by kerravon View Post
With this development system you have everything you need to construct a "perfect" replacement OS, with graphics support etc. But this basic system only needs a monochrome text monitor to run micro-emacs. My interest basically stops here, so I want to clarify what standards are required to get here.
Actually, that's far from "perfect" given the demands on an Os today. For example, "GUI". Even the "bios" (actually UEFI) boot menu of computers nowadays comes with a mouse-driven GUI, so falling behind this standard is far from what I would call "perfect".


Quote:
Originally Posted by kerravon View Post

For the last bit of that, I thought of another possibility. You could do fopen("CON:", "r+b") which accesses (via fseek() and putc()) a 25*80*2 byte "file" which is actually simply directly accessing memory at 0xb8000 (PC implementation only). This would then fit the "everything is a file" paradigm, which C seems to be based on, and I would like to drive to its limit.
This is called a "frame buffer", but the concept has its limits. Actually, what is used instead if you want to have a "text-only" output with more flexibility is something in the direction of ANSI-control characters, and a "curses" like interface that drives it with somewhat more useful primitives than "seek" and "put". On *ix*, if text output is needed, "curses" is the omnipresent interface, and not a frame buffer. The frame buffer - if it exists - is one level below. Or even multiple levels below. Curses could interact with a virtual terminal, e.g. xterm, which interprets the ANSI sequences from curses, then calls X11 to do the proper rendering, whose driver then potentially uses the framebuffer for the byte-output.




Quote:
Originally Posted by kerravon View Post
It wasn't a poor design, it was the exact correct technical solution to allow 16-bit applications to access more than 64k of memory.
No, hardly. The concept has the serious problem that you can access the same memory cell under different addresses, and you cannot simply perform arithmetic on "huge pointers". It was a big pile of mess intel didn't follow in later designs for good reason. Instead, what they should have done is use a memory management unit, limit legacy applications' access to their 16 bit window somewhere, and at the same time allow (or enforce!) enlarged 32 bit pointers with linear addressing.


The amount of overhead this pile of mess created in the "huge" memory model speaks for itself. With proper 32-bit extended registers for the huge model, the problems were gone.


Consider how AMD extended the 32bit world to 64 bit: That(!) was a proper extension. Not this terrible intel kludge.




Quote:
Originally Posted by kerravon View Post
My interest is in understanding computer science thoroughly so that if I am transported back in time to 1983, working on an MSDOS system, I know what to advise programmers.
My advice would be quite simple: Don't use this, use modern technology. Or at least some sane technology from the same age such as the 68K. Why you want to port a kludge to the 68K system and its relatively orthogonal and sane world is just beyond me.



Quote:
Originally Posted by kerravon View Post
Yes, but is it really significant that you would tie your code down to a particular bit of hardware? I guess we would need some hard numbers. How many milliseconds can be saved by bypassing the ANSI stream? I can see that it is a bit odd to construct an ANSI data stream when you know it then needs to be interpreted on the same machine. Maybe it should be hidden away by "curses"?
The advantage of "curses" is that the output does not need to be interpreted on the same machine. It can be interpreted on a completely different machine, and you have a (relatively) compact representation of the update information. Actually, that is also the beauty of the X system: Surely a bit kludgy today, but the control stream can be interpreted on a completely different machine with ease. (Which is why I don't understand why anyone wants to replace it with inferiour systems that do not support this paradigm.)
Thomas Richter is offline  
Old 14 November 2021, 12:28   #256
kerravon
Registered User
 
Join Date: Mar 2021
Location: Sydney, Australia
Posts: 184
Quote:
Originally Posted by Thomas Richter View Post
Actually, that's far from "perfect" given the demands on an Os today. For example, "GUI". Even the "bios" (actually UEFI) boot menu of computers nowadays comes with a mouse-driven GUI, so falling behind this standard is far from what I would call "perfect".
I said the *replacement* can be "perfect". I provide a public domain "starter system" and then someone else can use it to implement their definition of "perfect".

Quote:
This is called a "frame buffer", but the concept has its limits. Actually, what is used instead if you want to have a "text-only" output with more flexibility is something in the direction of ANSI-control characters, and a "curses" like interface that drives it with somewhat more useful primitives than "seek" and "put". On *ix*, if text output is needed, "curses" is the omnipresent interface, and not a frame buffer. The frame buffer - if it exists - is one level below. Or even multiple levels below. Curses could interact with a virtual terminal, e.g. xterm, which interprets the ANSI sequences from curses, then calls X11 to do the proper rendering, whose driver then potentially uses the framebuffer for the byte-output.
This concept is nominally fine, but if it fails to perform adequately on a 4.77 MHz PC XT, it is not viable - programmers are going to directly access the hardware. And if they're going to do that, then we instead need a system based around a frame buffer, not the other way around. I have my first timing information - displaying a full screen using BIOS calls takes 1 second. Not really adequate. But interpreting ANSI or curses direct to frame buffer may be quicker, so I'm not ruling anything out yet.

Quote:
No, hardly. The concept has the serious problem that you can access the same memory cell under different addresses,
I'm interested in C code. I haven't noticed the above causing any problem in any C code I have written, including in PDOS/86 - an operating system.

Quote:
and you cannot simply perform arithmetic on "huge pointers".
Huge pointers are rarely used, but sure, that's the price to pay for trying to access more than 64k for a single object when using 16 bit registers that would normally expect to operate in an environment where the entire application plus OS is expected to fit in 64k.

Quote:
It was a big pile of mess intel didn't follow in later designs for good reason. Instead, what they should have done is use a memory management unit, limit legacy applications' access to their 16 bit window somewhere, and at the same time allow (or enforce!) enlarged 32 bit pointers with linear addressing.
That sounds like a more complicated and expensive design.

Quote:
The amount of overhead this pile of mess created in the "huge" memory model speaks for itself.
The huge memory model was rarely used and not even implemented on popular compilers like Turbo C.

Quote:
With proper 32-bit extended registers for the huge model, the problems were gone.
If you are using "huge = flat" terminology, sure. No-one disputes flat is easier conceptually. I dispute that it is a problem programmatically. Show me the problematic C code and I'll tell you how to write it properly.

Quote:
Consider how AMD extended the 32bit world to 64 bit: That(!) was a proper extension. Not this terrible intel kludge.
AMD didn't achieve that at the time the 8086 came out.

Quote:
My advice would be quite simple: Don't use this, use modern technology.
No, the 8086 and MSDOS 2.0 is going to get 2 years free rein until the Amiga comes out. I just want to prepare for that day. I want to advise programmers (who have been tasked by their bosses to program on the XT), and uni courses, how to program for the 8086 in such a way that the source code works on the 68000. Most of the problem is solved simply by backdating C90 and calling it C83.

Quote:
Or at least some sane technology from the same age such as the 68K. Why you want to port a kludge to the 68K system and its relatively orthogonal and sane world is just beyond me.
Which MSDOS API are you claiming is a kludge that shouldn't be ported?

Quote:
The advantage of "curses" is that the output does not need to be interpreted on the same machine. It can be interpreted on a completely different machine, and you have a (relatively) compact representation of the update information. Actually, that is also the beauty of the X system: Surely a bit kludgy today, but the control stream can be interpreted on a completely different machine with ease. (Which is why I don't understand why anyone wants to replace it with inferiour systems that do not support this paradigm.)
All things are possible when you have lots of grunt. The challenge is to deal with low capacity systems. After sorting out the 4.77 MHz XT to my satisfaction I might see what I can do with the 1 MHz Commodore 64.
kerravon is offline  
Old 14 November 2021, 15:02   #257
Thomas Richter
Registered User
 
Join Date: Jan 2019
Location: Germany
Posts: 3,216
Quote:
Originally Posted by kerravon View Post
I said the *replacement* can be "perfect". I provide a public domain "starter system" and then someone else can use it to implement their definition of "perfect".
As a starter system, that's barely a good start. It's an obsolete system. If you want to start something, start from something that follows a good technical design and that builds on state of the art, but not an obsolete system with obvious limitations that requires kludges to work around.


Quote:
Originally Posted by kerravon View Post
I'm interested in C code. I haven't noticed the above causing any problem in any C code I have written, including in PDOS/86 - an operating system.
If you're interested in C code, why do you bother about memory models? C doesn't know "memory models". C is built on top of a "virtual machine" with a linear address space. The 8086 memory model is a rather alien ecosystem for C, and that's why these kludges of "near" and "far" pointers exist.




Quote:
Originally Posted by kerravon View Post

Huge pointers are rarely used, but sure, that's the price to pay for trying to access more than 64k for a single object when using 16 bit registers that would normally expect to operate in an environment where the entire application plus OS is expected to fit in 64k.
This is the price to pay for an ill-designed system, but again, why do you bother if this is "C only"? Why not use a sane system as a basis, on which C can work with far fewer "irregularities"?


Quote:
Originally Posted by kerravon View Post


That sounds like a more complicated and expensive design.
It is actually a much simpler system. Physical address = segment pointer + offset register. Just that the segment pointer is hidden in a virtual machine, and not accessible to legacy applications, and the offset register is all that is seen. It would be the job of the Os to populate the segment pointer in a process-dependent way to allow legacy applications to work. But intel, in their infinite wisdom, created a kludge, something they abandoned in the next generation.




Quote:
Originally Posted by kerravon View Post



If you are using "huge = flat" terminology, sure. No-one disputes flat is easier conceptually. I dispute that it is a problem programmatically. Show me the problematic C code and I'll tell you how to write it properly.
The problem is that you have to use two types of pointers "near" and "far", and that C doesn't have such concepts. You cannot just "subtract" two far pointers and get a difference in numbers of objects - the C compiler has to generate code to normalize them, which is inefficient. You cannot subtract near and far pointers as they are of different kind.


Quote:
Originally Posted by kerravon View Post




AMD didn't achieve that at the time the 8086 came out.
Apparently not, no, but they didn't come up with "segment pointers" once again - which is how intel addressed this issue when "64K is enough for everyone" was no longer acceptable.


Quote:
Originally Posted by kerravon View Post






No, the 8086 and MSDOS 2.0 is going to get 2 years free rein until the Amiga comes out. I just want to prepare for that day.
There is nothing to prepare for. The day passed. The days of MS-DOS passed a long time ago. The days of AmigaOs passed away. Thus, what's the purpose of this? Prepare for something that is no longer needed? What's the application?


Quote:
Originally Posted by kerravon View Post
I want to advise programmers (who have been tasked by their bosses to program on the XT), and uni courses, how to program for the 8086 in such a way that the source code works on the 68000. Most of the problem is solved simply by backdating C90 and calling it C83.
The advice is simple. "Don't". This is obsolete old junk. There is nothing to be demonstrated except that the system was ill-designed. If you want to give someone an advice how to do things, start with the good examples, not the bad ones. You don't program the 8086 today anymore. Neither the 68K - both are museum pieces, interesting parts of technology, one less and one more orthogonal, one less and one more forward looking, but that's about it.



Quote:
Originally Posted by kerravon View Post

Which MSDOS API are you claiming is a kludge that shouldn't be ported?
Pretty much all of it. Single-threaded, incomplete interface, not powerful enough for today's applications. It's not how you define an Os interface today. There are reasons M$ left MS-DOS behind with Windows NT; it simply wasn't suitable as the foundation of an operating system.


Quote:
Originally Posted by kerravon View Post


All things are possible when you have lots of grunt. The challenge is to deal with low capacity systems. After sorting out the 4.77 MHz XT to my satisfaction I might see what I can do with the 1 MHz Commodore 64.
A low-capacity system today is not an 8086 system. Neither is a 6502 system. A low-capacity system today is an ARM mobile core, as in the raspi, or an arduino core. This is what I would advise as a "cheap embedded system" today. Not the obsolete systems of yesterday with their architectural kludges. If your interest is "low capacity", that's called "embedded", and the 8086 is not quite the answer for this market segment.
Thomas Richter is offline  
Old 14 November 2021, 20:22   #258
kerravon
Registered User
 
Join Date: Mar 2021
Location: Sydney, Australia
Posts: 184
Quote:
Originally Posted by Thomas Richter View Post
As a starter system, that's barely a good start. It's an obsolete system. If you want to start something, start from something that follows a good technical design and that builds on state of the art, but not an obsolete system with obvious limitations that requires kludges to work around.
The starter system I proposed is PDOS-generic. It's not "obsolete", it's "proof of concept". And it does not necessarily need to be used as the source code for the replacement system. It can be used just to develop the replacement, since it has an editor etc.

Quote:
If you're interested in C code, why do you bother about memory models? C doesn't know "memory models". C is built on top of a "virtual machine" with a linear address space. The 8086 memory model is a rather alien ecosystem for C, and that's why these kludges of "near" and "far" pointers exist.
As I understand it, the C90 standard was carefully constructed to take into consideration alien environments like the 8086 and mainframes. I've programmed both, and the C90 standard is fantastic.

When you write some C code (as I do) and then wish to run it on the 8086, you can do so simply by telling the C compiler which memory model you wish to use, and you don't need to use "near" and "far" at all.
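For example (a sketch; the exact switches assume Open Watcom's 16-bit compiler, and other DOS compilers have equivalent options):
Code:
/* hello.c - plain C90, no "near" or "far" anywhere.       */
/* The memory model is chosen on the command line, e.g.:   */
/*   wcc -ms hello.c    (small model)                      */
/*   wcc -ml hello.c    (large model)                      */
#include <stdio.h>

int main(void)
{
    printf("Same source, any memory model.\n");
    return 0;
}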

Quote:
This is the price to pay for an ill-designed system, but again, why do you bother if this is "C only". Why not use a sane system as basis where C can work on top with much less "irregularities".
C works perfectly fine, unkludged, on an 8086. It is C programmers who need to change, not the 8086. The 8086 served its purpose perfectly adequately.

Quote:
It is actually a much simpler system. Physical address = segment pointer + offset register. Just that the segment pointer is hidden in a virtual machine and not accessible to legacy applications, and the offset register is all that is seen. It would be the job of the Os to populate the segment pointer in a process-dependent way to allow legacy applications to work.
They already do exactly this. MSDOS will start an executable with cs and ds already set, and a tiny memory model program need never touch these registers.

Quote:
The problem is that you have to use two types of pointers "near" and "far", and that C doesn't have such concepts.
You don't "have to" do that at all. Just don't code them. It's that simple. Just choose an appropriate memory model and let the C compiler generate the appropriate code.

Quote:
You cannot just "subtract" two far pointers and get a difference in numbers of objects - the C compiler has to generate code to normalize them,
If you are doing pointer subtraction, it should be to the same object. And that object will be less than 64k in size in all memory models except a properly-implemented "huge". The pointers will all be normalized already. Why don't you give me some C code that you think is of concern, and I'll show you the generated code from compact or large memory model?
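For reference, the legal same-object case needs nothing special - a sketch that should compile identically under any 16-bit memory model:
Code:
#include <stddef.h>

static char buf[32000];          /* a single object, well under 64k */

ptrdiff_t offset_in_buf(char *p)
{
    /* same object, so this is a plain 16-bit offset subtraction
       in the small, compact and large models */
    return p - buf;
}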

Quote:
which is inefficient. You cannot subtract near and far pointers as they are of different kind.
I won't answer that because I've never found a reason to do it, and it isn't valid C90 anyway.

Quote:
Apparently not, no - but they didn't come up with "segment pointers" all over again, which is how Intel addressed this issue when "64K is enough for everyone" was no longer acceptable.
They had more options available.

Quote:
There is nothing to prepare for. That day has passed. The days of MS-DOS are long gone, and the days of AmigaOs are gone too. So what is the purpose of this? Preparing for something that is no longer needed? What's the application?
I want to understand what went wrong in that era. It may have implications for the future. Maybe the same issues will reoccur with quantum computers or something.

Quote:
The advice is simple: "Don't". This is obsolete old junk. There is nothing to be demonstrated except that the system was ill-designed. If you want to give someone advice on how to do things, start with the good examples, not the bad ones. Nobody programs the 8086 today anymore, nor the 68K - both are museum pieces, interesting parts of technology, one less and one more orthogonal, one less and one more forward-looking, but that's about it.
I'm still learning from this technology, and we are still debating how it even works.

Quote:
Pretty much all of it. Single-threaded, an incomplete interface, not powerful enough for today's applications. It's not how you define an Os interface today. There are reasons M$ left MS-DOS behind with Windows NT: it simply wasn't suitable as the foundation of an operating system.
Note that my suggestion is to use 32-bit PDOS-generic, with its associated tools, as the basis for developing some other arbitrary 32-bit OS, if you're ever in a situation where you're not allowed to use copyrighted code.

Quote:
A low-capacity system today is not an 8086 system, nor a 6502 system. A low-capacity system today is an ARM mobile core, as in the raspi, or an Arduino core. That is what I would advise as a "cheap embedded system" today - not the obsolete systems of yesterday with their architectural kludges. If your interest is "low capacity", that's called "embedded", and the 8086 is not quite the answer for this market segment.
I'm not trying to address a market segment; I'm trying to understand what ideally would have happened in software development, holding the hardware constant - i.e. what specific mistake was made in the Commodore 64 OS (or apps), or MSDOS (or MSDOS apps). It is not appropriate to say "invent and produce long mode and Windows 11 in 1927".
kerravon is offline  
Old 14 November 2021, 22:32   #259
Thomas Richter
Registered User
 
Join Date: Jan 2019
Location: Germany
Posts: 3,216
Quote:
Originally Posted by kerravon View Post
The starter system I proposed is PDOS-generic. It's not "obsolete", it's "proof of concept".
If you want to prove a concept, why start with an insane and obsolete concept? I'm sorry, I just don't get it.


Quote:
Originally Posted by kerravon View Post

C works perfectly fine, unkludged, on an 8086. It is C programmers who need to change, not the 8086. The 8086 served its purpose perfectly adequately.
No, it doesn't, and that's exactly the point. Look at the code a C compiler has to generate for a specific "memory model" on this processor. A "pointer" can, depending on the memory model, be a completely different beast, and subtracting two pointers, or adding an offset to a pointer, requires rather convoluted code.
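To sketch what "normalization" means in plain C (assuming 16-bit segment and offset values and real-mode addressing, where the segment is shifted left by four):
Code:
/* linear 20-bit address = (segment << 4) + offset */
unsigned long to_linear(unsigned int seg, unsigned int off)
{
    return ((unsigned long)seg << 4) + (unsigned long)off;
}

/* "normalize": fold everything but the low four bits of the
   linear address into the segment, leaving the offset in 0..15 */
void normalize(unsigned int *seg, unsigned int *off)
{
    unsigned long lin = to_linear(*seg, *off);
    *seg = (unsigned int)(lin >> 4);
    *off = (unsigned int)(lin & 0xFUL);
}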


Quote:
Originally Posted by kerravon View Post
They already do exactly this. MSDOS will start an executable with cs and ds already set, and a tiny memory model program need never touch these registers.
*Sigh*. Yes, all fine for "tiny" programs. But at some point, tiny was too tiny. Instead of fixing this by introducing proper 32-bit "large pointers", with the option to select that on a per-program basis as would have been sane, Intel came up with this kludge.



Quote:
Originally Posted by kerravon View Post


If you are doing pointer subtraction, it should be to the same object.
Which can easily be larger than 64K, yes. C doesn't limit the size of objects, not at all. Thus, kludgy.



Quote:
Originally Posted by kerravon View Post
And that object will be less than 64k in size in all memory models except a properly-implemented "huge". The pointers will all be normalized already. Why don't you give me some C code that you think is of concern, and I'll show you the generated code from compact or large memory model?

Here you go:
Code:
#include <stdio.h>

int offset(char *base, char *ptr)
{
    return ptr - base;
}

char big[1<<20];          /* a single 1 MiB object */

int main(void)
{
    printf("Offset is %d\n", offset(big, &big[1<<17]));
    return 0;
}
Perfectly fine C code. Tough for the 8086: it either does not compile, or requires "pointer normalization" to make it work. That's what is kludgy. A sane processor design would have had a processor flag, "hey, my registers are now 32 bit", and not this "base segment register" nonsense.


Quote:
Originally Posted by kerravon View Post
I want to understand what went wrong in that era. It may have implications for the future. Maybe the same issues will reoccur with quantum computers or something.
What "went wrong"? Well, people started with "ad hoc" designs, in some software companies (MS-DOS was just bought, not designed) and some hardware companies (the intel kludge). People had no experience with large design, or about things like "design". Industry lacked the experience that something like a design is necessary.


Now, I believe, you're in the process of making the same mistake. Instead of thinking about the design of a sane system, you're copying from an insane one, one whose design was not given much thought.




Quote:
Originally Posted by kerravon View Post

Note that my suggestion is to use 32-bit PDOS-generic, with its associated tools, as the basis for developing some other arbitrary 32-bit OS, if you're ever in a situation where you're not allowed to use copyrighted code.
If I weren't allowed to, I would certainly not pick some amateur (sorry) Os that is based on the "design principles" (as in: not really existing) of MS-Dos. Sorry, but that's not how industry works. Typically, people will either use the Os the customer requires (Windows, Mac, Linux, or some embedded Os, which is typically also built around Linux), or will provide something they designed in-house and have full control over. Copyright is not the only question. Maintenance is important as well, and in case of doubt it's easier just to buy what you need than to depend on a hobby system. This worked for MS-DOS last century, but the industry doesn't work this way any more.




Quote:
Originally Posted by kerravon View Post


I'm not trying to address a market segment; I'm trying to understand what ideally would have happened in software development, holding the hardware constant - i.e. what specific mistake was made in the Commodore 64 OS (or apps), or MSDOS (or MSDOS apps). It is not appropriate to say "invent and produce long mode and Windows 11 in 1927".

This question does not even make sense to me. Hardware isn't held constant, it is something that evolves along with software, intertwined with each other. If software hadn't developed things like loops or subroutines, we wouldn't have "jump subroutine" or "stack pointer". If hardware hadn't provided more computational power and larger memory sizes, we wouldn't have high-level languages.


C64 "Os" was not an Os, it was just a collection of useful subroutines. The Atari 8-bit Os was something that was close to be an Os, with a channel-based I/O system and several abstraction layers. Yet, of course, it didn't survive the test of time because it didn't have services for memory allocation, and the hardware did not have memory protection. AmigaOs did not survive, and even if Amiga hardware had, CBM would have had to invented something new and replace AmigaOs as resource management was absent. MS-DOS did not surive for similar reasons, and was obsoleted by Windows NT. MacOs did not survive and was replaced by a BSD-type kernel.


But I don't need to write my own Os to understand that. I've programmed on some of the above systems (at least Atari, Amiga, Mac, Unix, Windows) to understand why some designs did not survive, and others did.


Start from the good designs that passed the test of time. The unix system seems to be fairly good, according to this test. That is something worth investigating, and not MS-DOS.
Thomas Richter is offline  
Old 15 November 2021, 06:41   #260
kerravon
Registered User
 
Join Date: Mar 2021
Location: Sydney, Australia
Posts: 184
Quote:
Originally Posted by Thomas Richter View Post
If you want to prove a concept, why start with an insane and obsolete concept? I'm sorry, I just don't get it.
The way I used MSDOS in the late 80s is almost unchanged from the way I use Putty to connect to Linux at my paid work in 2021. I use the command line to invoke some sort of cut-down emacs to edit C source code, I compile using a command-line C compiler, and then I invoke the program with arguments. I debug with printf statements. And I repeat that process again and again. And I am effective.

What I have with PDOS-generic is an OS, written in pure C90, that performs that basic functionality, assuming an appropriate BIOS is provided. And if no appropriate BIOS is provided, that is the only thing I need to write for any new platform.

Quote:
No, it doesn't, and that's exactly the point. Look at the code a C compiler has to generate for a specific "memory model" on this processor. A "pointer" can, depending on the memory model, be a completely different beast, and subtracting two pointers, or adding an offset to a pointer, requires rather convoluted code.
I have looked at generated code for different memory models, and it is only the offset that is adjusted.

Quote:
*Sigh*. Yes, all fine for "tiny" programs. But at some point, tiny was too tiny. Instead of fixing this by introducing proper 32-bit "large pointers", with the option to select that on a per-program basis as would have been sane, Intel came up with this kludge.
It is selected on a per-program basis. The programmer chooses the memory model appropriate to the task at hand. The existing tiny programs work out of the box. Ones that need more memory have far more than the 64k limit otherwise expected. 640k really was a lot of memory at the time, and the 8086 was perfectly fine and appropriate. You only notice an issue within that 640k when you start trying to make single objects more than 64k, something way beyond any expectations you should have for 16-bit registers.

Quote:
Which can easily be larger than 64K, yes. C doesn't limit the size of objects, not at all. Thus, kludgy.
No. C does limit the size of objects - to size_t, which in all memory models, with the possible exception of huge, is 16 bit. The bottom line is if you are exceeding 64k for single objects you are trying to stretch the 8086 beyond what it was designed to do (transition from 64k total for everything), and it's time to upgrade to the 80386.
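For what it's worth, you can check that limit directly (a sketch; the value printed depends on the compiler, but 16-bit DOS compilers typically report 65535):
Code:
#include <stdio.h>
#include <stddef.h>

int main(void)
{
    /* (size_t)-1 is the largest value a size_t can hold,
       i.e. the largest size a single object can have */
    printf("max size_t = %lu\n", (unsigned long)(size_t)-1);
    return 0;
}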

Quote:
char big[1<<20];
Perfectly fine C code. Tough for the 8086: it either does not compile, or requires "pointer normalization" to make it work.
That is a 1 MiB array. You can't have a single object larger than 64k in any memory model except possibly huge, and regardless, you are exceeding size_t for this platform. This is not the purpose of the 8086.
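And if you genuinely need that much data on a 16-bit DOS box, the usual approach is to keep every individual object under 64k, e.g. (a sketch, assuming a compact or large memory model where each malloc() block gets its own far pointer):
Code:
#include <stdlib.h>

#define CHUNKS      8
#define CHUNK_SIZE  32000U            /* each chunk well under 64k */

static char *chunk[CHUNKS];           /* roughly 256k of data in total */

int init_big(void)
{
    int n;
    for (n = 0; n < CHUNKS; n++)
        if ((chunk[n] = malloc(CHUNK_SIZE)) == NULL)
            return 0;                 /* out of memory */
    return 1;
}

char *big_byte(unsigned long i)       /* i in 0 .. CHUNKS*CHUNK_SIZE-1 */
{
    return &chunk[i / CHUNK_SIZE][i % CHUNK_SIZE];
}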

Quote:
That's what is kludgy. A sane processor design would have had a processor flag, "hey, my registers are now 32 bit", and not this "base segment register" nonsense.
Intel were not on drugs when they designed the 8086, and IBM was not on drugs when they selected the 8086. It is a relatively cheap 16-bit register CPU, not an expensive 32-bit one. And it provides a transition for a 16-bit code base.

Quote:
If I weren't allowed to, I would certainly not pick some amateur (sorry) Os that is based on the "design principles" (as in: not really existing) of MS-Dos. Sorry, but that's not how industry works. Typically, people will either use the Os the customer requires (Windows, Mac, Linux, or some embedded Os, which is typically also built around Linux), or will provide something they designed in-house and have full control over. Copyright is not the only question. Maintenance is important as well, and in case of doubt it's easier just to buy what you need than to depend on a hobby system. This worked for MS-DOS last century, but the industry doesn't work this way any more.
Anything you "buy in doubt" comes with strings attached. Public domain code you can convert at will into anything you want and your company takes sole ownership of with regard to maintenance and future sales. I'm not saying it's for everyone. I'm not saying it's for no-one either, because I haven't surveyed every company in the world. I do know of a company in Finland that picked up my public domain zmodem code and took responsibility for maintenance of their derived product, because they sent me some bug fixes (there was no obligation for them to do that, they were just nice).

Quote:
This question does not even make sense to me. Hardware isn't held constant, it is something that evolves along with software, intertwined with each other. If software hadn't developed things like loops or subroutines, we wouldn't have "jump subroutine" or "stack pointer". If hardware hadn't provided more computational power and larger memory sizes, we wouldn't have high-level languages.
Are you saying that in 1983 I couldn't have jumped in with the PosFindFirst() etc wrappers to the MSDOS API because it requires divine knowledge?

Quote:
Start from the good designs that passed the test of time. The unix system seems to be fairly good, according to this test. That is something worth investigating, and not MS-DOS.
Popular is one quality to investigate. Fast is another. Small is another. Cheap is another. Simple is another. Public domain is another.
kerravon is offline  
 

