English Amiga Board


Old 19 April 2020, 13:03   #101
meynaf
son of 68k
 
meynaf's Avatar
 
Join Date: Nov 2007
Location: Lyon / France
Age: 51
Posts: 5,323
Quote:
Originally Posted by Gorf View Post
That you do not have to change it during a context switch.
Then what is "change" for you ? Alter something in the table itself, or switch to another one ?


Quote:
Originally Posted by Gorf View Post
Yes - one table
Containing all information for all tasks in the system ?


Quote:
Originally Posted by Gorf View Post
NO!
This is where it starts to break.
Sorry, but if you have a global task, then it has to include these lists - regardless of the shape they take.


Quote:
Originally Posted by Gorf View Post
No the conclusion is absurd.
You need just one list.
You have a number of areas and a number of tasks. So you have to link them in one way or another. So you can group this info by task (usually the case) or by area, but a single mixed big list is nonsense. In a dbase this is n-to-n link.
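To illustrate the two possible groupings with a rough C sketch - the struct and field names here are purely hypothetical, just to show the n-to-n relation between tasks and areas:
Code:
/* Hypothetical sketch of the two ways to store the n-to-n
 * task/region permission relation (all names are made up). */

typedef unsigned short task_id_t;
typedef unsigned long  region_id_t;

/* Grouping by task: each task carries the list of regions it may access. */
struct task_perm {
    region_id_t region;        /* which memory area            */
    unsigned    rights;        /* e.g. read/write/execute bits */
};
struct task {
    task_id_t         id;
    struct task_perm *perms;   /* variable length, one per area */
    unsigned          nperms;
};

/* Grouping by area: each region carries the list of tasks allowed in. */
struct region_perm {
    task_id_t id;              /* which task */
    unsigned  rights;
};
struct region {
    region_id_t         id;
    struct region_perm *perms; /* variable length, one per task */
    unsigned            nperms;
};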


Quote:
Originally Posted by Gorf View Post
On a file server with hundreds of users and millions of files, you do not need hundreds of lists with millions of entries to organize permissions.
You only have a few per file, that your filesystem manages.
Same goes here for tasks and memory regions.
The story is indeed the same : you can group your rights by user (giving what's allowed) or by file/directory (giving who's allowed here). There is no other way.


Quote:
Originally Posted by Gorf View Post
Luckily we need just one
You haven't read what i wrote, have you ?


Quote:
Originally Posted by Gorf View Post
And now we can keep everything at least in the 2nd level cache of our MPU (2-4 thousand lines) and we can share this cache with many cores.
Absolutely not. Checking access rights mustn't take more time than a single pipeline step - for this, the 2nd cache level is too slow (several clocks of access time needed).
Failure to apply security checks in time has led to security flaws in cpus, even if it's just for the speculative execution.


Quote:
Originally Posted by Gorf View Post
Above line count was for fixed size - if you use a variable size you can reduce the number of lines but you complicate the handling...
So many lines. A lot too much to handle for something that ought not be bigger than the regular TLB (IOW you have to do intermediate caching with something faster - and this intermediate, fast cache, will die upon task switches).


Quote:
Originally Posted by Gorf View Post
Since you do not actually use the full 64Bit address space and probably never will in our lifetime the number of actual entries gets drastically reduced.
Right, but if you have fixed page size then you have to provide a way for the MPU to know where the limit is. Easy, but it still complicates things a little.


Quote:
Originally Posted by Gorf View Post
About 4K lines for 64GB of ram and 16KB regions.
And what is the exact information stored in a line here ?
Does it contain all infos about all allowed tasks, or does it redirect to something else ?


Quote:
Originally Posted by Gorf View Post
If you want a more fine grained structure, you can accompany this with tagged memory, down to the byte level.
(See lowRISC cpu)
That takes up a lot of space...


Quote:
Originally Posted by Gorf View Post
See above.
That's starting to get complicated.


Quote:
Originally Posted by Gorf View Post

They don’t need to change if you change them?
That makes no sense...
What's not clear in: 'in the classical way, the tables don't need to change any more than in your way'?


Quote:
Originally Posted by Gorf View Post
Of course not, but the switch is costly enough, since you Have to flush you buffers ...
You have to flush access rights of previous task, too. In both cases memory doesn't change, but something has to.


Quote:
Originally Posted by Gorf View Post
See the paper from Microsoft I linked earlier..
Tech has evolved since...


Quote:
Originally Posted by Gorf View Post
Only as long as your tasks don’t want to exchange any information...
??? There's no link between task switch and tasks exchanging information.


Quote:
Originally Posted by Gorf View Post
And since tasks now spawn multiple thread ... well I somehow doubt, that the number of switches got actually significantly reduced.
Are there studies?
I don't know if there are studies or not, but simply looking at a task manager will tell you how many tasks are currently running.


Quote:
Originally Posted by Gorf View Post
How dare I to post actual information...
If only you did...


Quote:
Originally Posted by Gorf View Post
Or because it takes to long, or there are a lot of pictures, or a video, or just some very good explanation by an expert ....
Write a small example of a small list, and describe what happens in case of a context switch. This is concrete, not too long, and not just a blurry theory.


Quote:
Originally Posted by Gorf View Post
Did you never read books in school/university, but demanded, that your teachers explains everything for you?
How did they react?
If i recall correctly, i didn't have to ask... Remember, when you were at school, what did teachers do all day long ?


Quote:
Originally Posted by Gorf View Post
And if your teachers told you to read a chapter until tomorrow for a test, you would think, they don’t know what’s in the book?
How cool if they had just done that. I would have spent just a little time reading, with no need to go to school for anything more than the test...


Quote:
Originally Posted by Gorf View Post
Probably not more than you need me
Obviously not, but me, i have something that works. I could even show it if PM'ed.


Quote:
Originally Posted by Gorf View Post
Again making assumptions
I actually don’t...
Everyone has time these days but not you ? That's very possible. Yet lack of time has always been a nice excuse to do nothing...
But if you don't do it due to lack of time, nobody will. It's your idea, so you have to do it.
meynaf is offline  
Old 19 April 2020, 13:13   #102
Samurai_Crow
Total Chaos forever!
 
Samurai_Crow's Avatar
 
Join Date: Aug 2007
Location: Waterville, MN, USA
Age: 49
Posts: 2,187
Quote:
Originally Posted by Gorf View Post
I called it "regions" in the last message ... and there I took 16KB as region-size for a computer with 64GB installed...

(This is obviously for a 64bit cpu)

If you have less RAM you can make the regions smaller or work with a smaller table ... I would recommend the second option
4 million entries then. Not 4k, 4M.
Samurai_Crow is offline  
Old 19 April 2020, 14:05   #103
AJCopland
Registered User
 
Join Date: Sep 2013
Location: Beeston, Nottinghamshire, UK
Posts: 238
None of this sounds like it has anything to do with any kind of Amiga.

It's just OS discussion for a non-AmigaOS, which would run on some super CPU which again has nothing to do with the Amiga except a vague passing resemblance to a strange variant of the ideas which might have gone into the 68k design... once upon a time.

If your answer to the question: "If you had the chance to build a new Amiga?" is "Build something that is nothing like an Amiga" then have you really answered the question?

*EDIT*
It's kind of like Gunnar in this thread saying that the Blitter just isn't useful because the CPU can do the job http://www.apollo-core.com/knowledge...1&x=1&z=Uwlo2i
The CPU can do every job, it's a general purpose piece of hardware, just not as well as dedicated hardware can.
When asked what can be done to make the Blitter better, the answer isn't to use the CPU, it's to fix its deficiencies and to expand on its capabilities to keep doing things that the CPU would be much slower at!
AJCopland is offline  
Old 19 April 2020, 14:22   #104
Gorf
Registered User
 
Gorf's Avatar
 
Join Date: May 2017
Location: Munich/Bavaria
Posts: 2,295
Quote:
Originally Posted by meynaf View Post
Then what is "change" for you ? Alter something in the table itself, or switch to another one ?
A switch is a change
Quote:
Containing all information for all tasks in the system ?
Of course not all information - it contains the permissions per region

Quote:
This is where it starts to break.
Sorry, but if you have a global task, then it has to include these lists - regardless of the shape they take.
We are talking about totally different concepts, it seems ...


Quote:
You have a number of areas and a number of tasks. So you have to link them in one way or another.
Sure - permissions per region in the MPU table


Quote:
So you can group this info by task (usually the case) or by area,
Still the second option


Quote:
but a single mixed big list is nonsense. In a dbase this is n-to-n link.
That is why I never suggested such nonsense...


Quote:
The story is indeed the same : you can group your rights by user (giving what's allowed) or by file/directory (giving who's allowed here). There is no other way.
Never said something else...
Quote:
You haven't read what i wrote, have you ?
Mutual feelings..

Quote:
Absolutely not. Checking access rights mustn't take more time than a single pipeline step - for this, the 2nd cache level is too slow (several clocks of access time needed).
Again: that is not what I wrote!

Please read it again. I never said that this 2nd-level MPU cache is fast enough. I said it is big enough - so we do not need to fall back to actual RAM ...
and now think why I wrote "2nd" level ! Because there is a 1st level!
Quote:
So many lines.
An MMU table for this configuration has the same number of lines - but in my case the entries are 50% shorter, even with thousands of task IDs and groups in the system

Quote:
lot too much to handle for something that ought not be bigger than the regular TLB (IOW you have to do intermediate caching with something faster - and this intermediate, fast cache, will die upon task switches).
It has no more to handle than in the MMU case - and it does not need to flush the TLB but can leave most of the old entries in ... so multiple jumps between two tasks will often need no change in the TLB at all.

Also, good predictions can be made about which addresses are needed next if an instruction uses immediates.

Quote:
Right, but if you have fixed page size then you have to provide a way for the MPU to know where the limit is. Easy, but it still complicates things a little.
Actually the MPU does not necessarily need to know ... everything beyond the physical address space will cause a fault anyway... but it would sure be nice to have.

Quote:
And what is the exact information stored in a line here ?
Does it contain all infos about all allowed tasks, or does it redirect to something else ?
No redirections. In my case: the task "family" ID (the first 16 bits of the task ID - all tasks starting with that ID have the same rights; my equivalent to threads), 12 bits of group ID and 4 bits of permission type = 32 bits
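As a rough illustration only - the field order, macro names and permission bits below are assumptions, the only given is the 16/12/4 bit split described above - such an entry could be packed and checked like this:
Code:
#include <stdint.h>

/* Assumed layout of one 32-bit region entry:
 *   bits 31..16  task "family" ID (all threads of a task share it)
 *   bits 15..4   group ID
 *   bits  3..0   permission type                                    */
#define ENTRY_FAMILY(e)   ((uint16_t)((e) >> 16))
#define ENTRY_GROUP(e)    ((uint16_t)(((e) >> 4) & 0xFFF))
#define ENTRY_PERMS(e)    ((uint8_t)((e) & 0xF))

#define PERM_READ   0x1   /* invented encodings for the 4 permission bits */
#define PERM_WRITE  0x2
#define PERM_EXEC   0x4

static inline uint32_t make_entry(uint16_t family, uint16_t group, uint8_t perms)
{
    return ((uint32_t)family << 16) | ((uint32_t)(group & 0xFFF) << 4) | (perms & 0xFu);
}

/* Simplified access check: the region's single entry either names the
 * current task family directly or a group it belongs to (the group
 * membership test is left out of this sketch).                        */
static inline int access_allowed(uint32_t entry, uint16_t my_family, uint8_t wanted)
{
    return ENTRY_FAMILY(entry) == my_family &&
           (ENTRY_PERMS(entry) & wanted) == wanted;
}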


Quote:

That takes up a lot of space...
Yes. Tagged Memory can be costly.. just wanted to mention the option.


Quote:
That's starting to get complicated.
Not really

Quote:
What's not clear in: 'in the classical way, the tables don't need to change any more than in your way'?
Again - what is "change" in this context.
The association is the other way around in the MMU case and different for every task - that is harder to cache and needs flushing (and/or tagging) of the TLB..

Quote:
You have to flush access rights of previous task, too.
No

Quote:
Tech has evolved since...
But nothing much has changed in the way paging is handled by the MMU.. so it is still relevant.

Quote:
??? There's no link between task switch and tasks exchanging information.
But there is.
Please don’t make such false claims ...

Quote:
I don't know if there are studies or not, but simply looking at a task manager will tell you how many tasks are currently running.
But they usually don’t show threads .. And they don’t show the number of context switches per second...

Quote:
Write a small example of a small list, and describe what happens in case of a context switch. This is concrete, not too long, and not just a blurry theory.
Not interested in jumping through hoops you are holding up...

Quote:
If i recall correctly, i didn't have to ask... Remember, when you were at school, what did teachers do all day long ?
Smoke? Drink?


Quote:
How cool if they had just done that. I would have spent just a little time reading, with no need to go to school for anything more than the test...
Obviously not, but me, i have something that works. I could even show it if PM'ed.
Big secret ?

Quote:
Everyone has time these days but not you ? That's very possible. Yet lack of time has always been a nice excuse to do nothing...
But if you don't do it due to lack of time, nobody will. It's your idea, so you have to do it.
In this case the work actually helps other people and is therefore clearly more important than a proof of concept for a fancy hobby CPU/OS..
Gorf is offline  
Old 19 April 2020, 14:27   #105
Gorf
Registered User
 
Gorf's Avatar
 
Join Date: May 2017
Location: Munich/Bavaria
Posts: 2,295
Quote:
Originally Posted by Samurai_Crow View Post
4 million entries then. Not 4k, 4M.
And with 32 bits per entry that is 16MB ... the same amount an AMD Threadripper has as cache per die
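A quick sanity check of those numbers in C - nothing here comes from the MPU design itself, just the 64GB / 16KB / 32-bit figures quoted above:
Code:
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t ram_bytes    = 64ULL << 30;   /* 64 GB of installed RAM  */
    uint64_t region_bytes = 16ULL << 10;   /* 16 KB per region        */
    uint64_t entry_bytes  = 4;             /* 32 bits per table entry */

    uint64_t entries = ram_bytes / region_bytes;   /* 4,194,304 entries (4M) */
    uint64_t table   = entries * entry_bytes;      /* 16 MB table            */

    printf("entries: %llu (%.0fM)\n", (unsigned long long)entries, entries / (1024.0 * 1024.0));
    printf("table:   %llu bytes (%.0f MB)\n", (unsigned long long)table, table / (1024.0 * 1024.0));
    return 0;
}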
Gorf is offline  
Old 19 April 2020, 14:39   #106
Gorf
Registered User
 
Gorf's Avatar
 
Join Date: May 2017
Location: Munich/Bavaria
Posts: 2,295
Quote:
Originally Posted by AJCopland View Post
None of this sounds like it has anything to do with any kind of Amiga.
But wasn’t that the question the thread starter asked?
A modern Amiga with custom chips and lots of power?

Quote:

It's just OS discussion for a non-AmigaOS, which would run on some super CPU which again has nothing to do with the Amiga except a vague passing resemblance to a strange variant of the ideas which might have gone into the 68k design... once upon a time.
So? That is the answer to the question...

Quote:
If your answer to the question: "If you had the chance to build a new Amiga?" is "Build something that is nothing like an Amiga" then have you really answered the question?
What exact rules need to be followed so we can call it a modern powerful Amiga with custom chips ...

Quote:
*EDIT*
It's kind of like Gunnar in this thread saying that the Blitter just isn't useful because the CPU can do the job http://www.apollo-core.com/knowledge...1&x=1&z=Uwlo2i
The CPU can do every job, it's a general purpose piece of hardware, just not as well as dedicated hardware can.
Lol
I did read that post earlier and was thinking:

Quote:

When asked what can be done to make the Blitter better, the answer isn't to use the CPU, it's to fix its deficiencies and to expand on its capabilities to keep doing things that the CPU would be much slower at!
We've been stuck a little bit on the CPU and memory management here, but to be fair I mentioned a special vector unit as a Copper-like coprocessor earlier, and also some properties of the gfx chips I would like to see in my dream Amiga.


So how would your Amiga look?
Gorf is offline  
Old 19 April 2020, 14:40   #107
Samurai_Crow
Total Chaos forever!
 
Samurai_Crow's Avatar
 
Join Date: Aug 2007
Location: Waterville, MN, USA
Age: 49
Posts: 2,187
Quote:
Originally Posted by Gorf View Post
And with 32 bits per entry that is 16MB ... the same amount a AMD threadripper has cache per die
I was hoping for something that would run on a Vampire or similar: 16k code cache and 32k data cache per core x2 for hyperthreading. (The second thread runs blitter emulation on AmigaOS.) The Vampire stand-alone v4 has 512M of DDR3 RAM so 128k table size? I suppose it could be cacheable.
Samurai_Crow is offline  
Old 19 April 2020, 14:59   #108
Gorf
Registered User
 
Gorf's Avatar
 
Join Date: May 2017
Location: Munich/Bavaria
Posts: 2,295
Quote:
Originally Posted by Samurai_Crow View Post
I was hoping for something that would run on a Vampire or similar: 16k code cache and 32k data cache per core x2 for hyperthreading. (The second thread runs blitter emulation on AmigaOS.) The Vampire stand-alone v4 has 512M of DDR3 RAM so 128k table size? I suppose it could be cacheable.
According to Gunnar the 68080 has some kind of memory protection functions ... but they are not exposed and nobody knows what they are actually capable of...

We were here talking about going 64-bit or not, and so we came to the specs I ended up with.. we can scale this down for something 32-bit with little RAM of course...

With 512MB and 32KB regions we would have only 16K lines in the table ..
We could reduce the entry to group permissions only -> 16 bits per entry

That is 32KB of RAM for the whole table ...
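The same back-of-the-envelope arithmetic for this scaled-down configuration (again only a sketch using the figures above):
Code:
#include <stdio.h>

int main(void)
{
    unsigned long ram     = 512UL << 20;      /* 512 MB of RAM     */
    unsigned long region  = 32UL << 10;       /* 32 KB per region  */
    unsigned long entries = ram / region;     /* 16384 lines (16K) */
    unsigned long table   = entries * 2;      /* 16-bit entries    */

    printf("%lu entries, %lu KB table\n", entries, table >> 10);  /* 16384 entries, 32 KB */
    return 0;
}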
Gorf is offline  
Old 19 April 2020, 16:10   #109
meynaf
son of 68k
 
meynaf's Avatar
 
Join Date: Nov 2007
Location: Lyon / France
Age: 51
Posts: 5,323
Quote:
Originally Posted by Gorf View Post
A switch is a change
Then there aren't less changes in your system than in the others. Because there aren't less switches.


Quote:
Originally Posted by Gorf View Post
Of course not all information - it contains the permissions per region
Yes but permissions for all tasks in the system.
Therefore, when looking for one permission for one access, we have to find out in which region it belongs, and then what the permissions are.


Quote:
Originally Posted by Gorf View Post
We are talking about totally different concepts, it seems ...
Yikes ! A typo has escaped.
Let's try again : Sorry, but if you have a global table, then it has to include these lists - regardless of the shape they take.


Quote:
Originally Posted by Gorf View Post
Sure - permissions per region in the MPU table
Ok so there is one permission list for every region. This appears to imply a region permissions list of variable size, as there can be any number of tasks accessing it, right ?


Quote:
Originally Posted by Gorf View Post
That is why I never suggested such nonsense...
But you suggested single list, which is this nonsense...


Quote:
Originally Posted by Gorf View Post
Never said something else...
Your post appeared to suggest otherwise...


Quote:
Originally Posted by Gorf View Post
Mutual feelings..
I read what you write. It's just that it's either unclear, incomplete, or wrong.


Quote:
Originally Posted by Gorf View Post
Again: that is not what I wrote!

Please read it again. I never said that this 2nd-level MPU cache is fast enough. I said it is big enough - so we do not need to fall back to actual RAM ...
and now think why I wrote "2nd" level ! Because there is a 1st level!
So there is a 1st level ! Wow. But, it's precisely that 1st level that's responsible for the whole performance ! The 2nd one is completely out of it.
Yet you were very silent about that 1st level.


Quote:
Originally Posted by Gorf View Post
An MMU table for this configuration has the same number of lines - but in my case the entries are 50% shorter, even with thousands of task IDs and groups in the system
No. There is no single MMU table for everything here. There is one table per task and it is shorter than your big table containing your block of entries.
Overall, your lists will be shorter, but the valid data at one moment will be bigger because it will contain irrelevant permission values (those of not currently active tasks).


Quote:
Originally Posted by Gorf View Post
It has no more to handle than in the MMU case - and it does not need to flush the TLB but can leave most of the old entries in ... so multiple jumps between two tasks will often need no change in the TLB at all.
It is not impossible that current CPU designs allow a quick return to the same task without TLB invalidation. Perhaps some research should be done before you claim you can beat state-of-the-art CPUs.


Quote:
Originally Posted by Gorf View Post
Also, good predictions can be made about which addresses are needed next if an instruction uses immediates.
This is not new and not specific to your system.


Quote:
Originally Posted by Gorf View Post
No redirections. In my case: the task "family" ID (the first 16 bits of the task ID - all tasks starting with that ID have the same rights; my equivalent to threads), 12 bits of group ID and 4 bits of permission type = 32 bits
Ouch. So when looking into the list, a lot of data regarding other tasks has to be read and canceled. Not my definition of something performant.
Since when is reading a whole pack of 32-bit blocks better than finding the relevant data directly ?


Quote:
Originally Posted by Gorf View Post
Not really
Well, maybe it was indeed already too complicated before...

Anyway...
Basically you want to replace normal paging by flat space with simple access rights.
I don't think this is a good idea.
It's not significantly more performant than paging (this is at least questionable), and does not provide all advantages paging does (which is a sure thing).


Quote:
Originally Posted by Gorf View Post
Again - what is "change" in this context.
The association is the other way around in the MMU case and different for every task - that is harder to cache and needs flushing (and/or tagging) of the TLB..
Change in this context is just going to another MMU list.
This makes the TLB contents obsolete.
Your L1 cache will be obsolete as well.


Quote:
Originally Posted by Gorf View Post
No
Yes there will be some flush, even if it's just the L1 cache you talk about above.
In both cases, the first access to new data will fail because nothing about it is in the cache yet... except maybe that a regular MMU can probably do some kind of prefetching...


Quote:
Originally Posted by Gorf View Post
But nothing much has changed in the way paging is handled by the MMU.. so it is still relevant.
From outside (= programmatic) point of view, probably. But the implementation has changed. Performance has changed. It's not relevant anymore.


Quote:
Originally Posted by Gorf View Post
But there is.
Please don’t make such false claims ...
No, there is none. You are making the false claim.
Frankly, what is the link ? You can explain or you're just asserting it without proof ?


Quote:
Originally Posted by Gorf View Post
But they usually don’t show threads .. And they don’t show the number of context switches per second...
Even Windows task manager can show threads. And they don't need to show the number of context switches, just look at how many active tasks there are and compare with your number of cores (actually hardware threads).


Quote:
Originally Posted by Gorf View Post
Not interested in jumping through hoops you are holding up...
Proof made that you have nothing concrete to show.


Quote:
Originally Posted by Gorf View Post
Smoke? Drink?
I think i understand better now...


Quote:
Originally Posted by Gorf View Post
Big secret ?
No. As said, PM is enough.


Quote:
Originally Posted by Gorf View Post
In this case the work actually helps other people and is therefore clearly more important than a proof of concept for a fancy hobby CPU/OS..
Don't tell me you will never have the time.

And, by the way, why start with the memory protection anyway ? There is a whole instruction set to design before anything is possible !
meynaf is offline  
Old 19 April 2020, 16:26   #110
meynaf
son of 68k
 
meynaf's Avatar
 
Join Date: Nov 2007
Location: Lyon / France
Age: 51
Posts: 5,323
Quote:
Originally Posted by AJCopland View Post
It's kind of like Gunnar in this thread saying that the Blitter just isn't useful because the CPU can do the job http://www.apollo-core.com/knowledge...1&x=1&z=Uwlo2i
The CPU can do every job, it's a general purpose piece of hardware, just not as well as dedicated hardware can.
When asked what can be done to make the Blitter better, the answer isn't to use the CPU, it's to fix its deficiencies and to expand on its capabilities to keep doing things that the CPU would be much slower at!
Btw i remember a discussion where Gunnar (and others) wanted a new instruction to ease blitting operations (this was loooong ago). I came up with exactly what they needed, but no - still rejected.

But he's not completely wrong. Read the whole thread.

A fact is that the Blitter isn't exactly fun to code on in comparison to the CPU.
Another is the drawback about multitasking, even if it's a minor one.
It is also true that if the cpu is fast enough so that it saturates the memory interface, the Blitter won't be any faster.
And a CPU is of course more flexible.

Honestly if i was against removing the blitter from the chipset, it was only for compatibility reasons.
meynaf is offline  
Old 19 April 2020, 20:45   #111
AJCopland
Registered User
 
Join Date: Sep 2013
Location: Beeston, Nottinghamshire, UK
Posts: 238
Quote:
Originally Posted by meynaf View Post
Btw i remember a discussion where Gunnar (and others) wanted a new instruction to ease blitting operations (this was loooong ago). I came up with exactly what they needed, but no - still rejected.

But he's not completely wrong. Read the whole thread.

A fact is that the Blitter isn't exactly fun to code on in comparison to the CPU.
Another is the drawback about multitasking, even if it's a minor one.
It is also true that if the cpu is fast enough so that it saturates the memory interface, the Blitter won't be any faster.
And a CPU is of course more flexible.

Honestly if i was against removing the blitter from the chipset, it was only for compatibility reasons.
For me that's an artefact of how un-Amiga the Vampire is though, it's a super-charged V12 on a skateboard. So you start looking at everything through that lens where the skateboard (Amiga ECS/AGA) is useless so you should just use the V12 (Vampire) for everything.

It's exactly how Intel would like everything to be with the CPU doing everything (before they abandoned that idea at long last I should add).

If you have a moderately powerful CPU with FastRam, and a moderately powerful chipset with ChipRam, then your Blitter can saturate the ChipRam bus and the CPU can go and saturate the FastRam at the same time.
Much like a modern x86/64 has RAM and the GPU has its own pool of gigabytes of RAM.

It's why when I gave my ideal "new" Amiga I avoided anything more than a bump to the next proposed chipset of AA+ with a couple of hindsight tweaks and nothing more.

The Amiga is a product of its time and the coupling of the hardware and software with that balance of mid/low CPU + chipset is what makes it stand out from the crowd. Get rid of that and you've just got yet another PC.
AJCopland is offline  
Old 19 April 2020, 21:00   #112
Gorf
Registered User
 
Gorf's Avatar
 
Join Date: May 2017
Location: Munich/Bavaria
Posts: 2,295
Quote:
Originally Posted by meynaf View Post
Then there aren't less changes in your system than in the others. Because there aren't less switches.
there are fewer switches, because you do NOT need to switch to another list.
And you do not need to flush the whole TLB or invalidate data, since all tasks live in the same context memory-wise - we are talking about a SASOS.

Quote:
Yes but permissions for all tasks in the system.
you are still thinking the other way around - it is not about the permission for all tasks, but about the permission of the current region ..

Quote:
Therefore, when looking for one permission for one access, we have to find out in which region it belongs, and then what the permissions are.
which is trivial, since the task lives in the real address space - so we do know the region. And we can check - many times even precheck - whether the necessary permissions are there.
It is also easy to look up, since the address is real and we can use the real (linear) address as an index
(minus some bits at the end)
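A minimal sketch of that lookup in C - the region size, table name and entry width are assumptions carried over from the 16KB-region example above, not part of any actual design:
Code:
#include <stdio.h>
#include <stdint.h>

#define REGION_SHIFT 14   /* 16 KB regions: the low 14 bits select the byte inside the region */

/* Hypothetical flat permission table: one 32-bit entry per region,
 * indexed directly by the real (linear) address - no per-task table
 * and no translation step.                                           */
static uint32_t region_table[8];   /* demo-sized; a real table covers all of RAM */

static uint32_t lookup_entry(uint64_t linear_addr)
{
    return region_table[linear_addr >> REGION_SHIFT];
}

int main(void)
{
    region_table[1] = 0x00010023;            /* arbitrary demo entry for region 1 */
    uint64_t addr = 0x6234;                  /* 0x6234 >> 14 == 1                 */
    printf("region %llu -> entry %08x\n",
           (unsigned long long)(addr >> REGION_SHIFT),
           (unsigned)lookup_entry(addr));
    return 0;
}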

Quote:
Ok so there is one permission list for every region.
no there is one line per region

Quote:
This appears to imply a region permissions list of variable size...
no

Quote:
Your post appeared to suggest otherwise...


Quote:
So there is a 1st level ! Wow. But, it's precisely that 1st level that's responsible for the whole performance ! The 2nd one is completely out of it.
Yet you were very silent about that 1st level.
and there are hundreds of other details I did not mention yet. So?
You draw again very strange conclusions about things I did not say anything about ...
Quote:
No. There is no single MMU table for everything here. There is one table per task and it is shorter than your big table containing your block of entries.
but the sum of the tables contains more information than my table; therefore it is longer and needs more memory. In our example 100% more.

Quote:
Overall, your lists will be shorter, but the valid data at one moment will be bigger because it will contain irrelevant permission values (those of not currently active tasks).
These permissions are at no time irrelevant, since the current task may try to read a location it has no permissions for.

Quote:
It is not impossible that current CPU designs allow a quick return to the same task without TLB invalidation.
Hallelujah!
That is exactly the problem with the current design.

For an MPU the opposite is true: it would be foolish to invalidate the whole TLB, since it probably contains permissions for often-frequented regions like e.g. GUI libraries, network message ports ... or the task before the active one, which has a high probability of being active again soon....
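A toy software model of that point - sizes, names and the entry format are invented; the only idea taken from above is that cached lines are keyed by region alone, so a task switch only changes the current family ID and invalidates nothing:
Code:
#include <stdio.h>
#include <stdint.h>

#define TLB_LINES 8                  /* invented size, a real one would be larger */

struct mpu_tlb_line {
    uint32_t region;                 /* region index derived from the linear address */
    uint32_t entry;                  /* family/group/permission word for that region */
    int      valid;
};

static struct mpu_tlb_line tlb[TLB_LINES];
static uint16_t current_family;      /* the only per-task state in this model */

/* Context switch: change the family ID, keep every cached line. */
static void context_switch(uint16_t new_family)
{
    current_family = new_family;
}

/* Returns 1 if allowed, 0 if denied, -1 on a miss (fetch the line from
 * the big table in that case - still no flush of the other lines).     */
static int check_access(uint32_t region, uint8_t wanted)
{
    for (int i = 0; i < TLB_LINES; i++) {
        if (tlb[i].valid && tlb[i].region == region) {
            uint32_t e = tlb[i].entry;
            return (uint16_t)(e >> 16) == current_family &&
                   ((uint8_t)(e & 0xF) & wanted) == wanted;
        }
    }
    return -1;
}

int main(void)
{
    tlb[0] = (struct mpu_tlb_line){ .region = 42, .entry = (7u << 16) | 0x3, .valid = 1 };
    context_switch(7);                           /* task family 7 becomes active */
    printf("family 7, region 42, want R: %d\n", check_access(42, 0x1));   /* 1 */
    context_switch(9);                           /* switch tasks - no flush      */
    printf("family 9, region 42, want R: %d\n", check_access(42, 0x1));   /* 0 */
    return 0;
}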

Quote:
Ouch. So when looking into the list, a lot of data regarding other tasks has to be read and canceled.
again - the information is not primarily about the tasks but about the region. And of course this does not have to be canceled.

Quote:
Not my definition of something performant.
Since when is reading a whole pack of 32-bit blocks better than finding the relevant data directly ?
these blocks are of course indexed by (parts of) the real address by its very nature, which makes it trivial.


Quote:
Basically you want to replace normal paging by flat space with simple access rights.
I don't think this is a good idea.
It's not significantly more performant than paging (this is at least questionable), and does not provide all advantages paging does (which is a sure thing).
it is more performant, since you can avoid TLB flushes, make good predictions, store everything in a 2nd level cache and so on.
Plus you get rid of address translations and it is easy and fast to share information.
It makes messaging fast and mitigates most of the overhead mentioned in the "Singularity" research paper.
(they are not the only ones - this overhead is eg. the reason why projects like OSv exist and actually get used.)


Quote:
Change in this context is just going to another MMU list.
This makes the TLB contents obsolete.
Your L1 cache will be obsolete as well.

no it will not - that is the whole point.

Quote:
Yes there will be some flush, even if it's just the L1 cache you talk about above.
you push only some new lines into the L1 cache - for small tasks that may be just one or two lines. The rest of the 100-200 lines can stay. This is a vast improvement.

Quote:
From outside (= programmatic) point of view, probably. But the implementation has changed. Performance has changed. It's not relevant anymore.
it is still relevant (see OSv)

Quote:
Frankly, what is the link ? You can explain or you're just asserting it without proof ?
https://research.vu.nl/ws/portalfile...Socket+API.pdf

http://libos-nuse.github.io/files/netdev01-tazaki.pdf

Quote:
Even Windows task manager can show threads. And they don't need to show the number of context switches, just look at how many active tasks there are and compare with your number of cores (actually hardware threads).
if they do not show the number of context switches, you do not have anything to back up your claim that these would be less frequent on a multi-core system.

(they still seem to be quite high and costly https://www.percona.com/blog/2017/11...text-switches/)

Quote:
Proof made that you have nothing concrete to show.
you seem to have severe misconception of the word "proof".
(see next answer)


Quote:
No. As said, PM is enough.
So in your logic that would be a "proof" that you have nothing concrete to show. You see?

Quote:
Don't tell me you will never have the time.
Again making strange assumptions ...

Quote:
And, by the way, why start with the memory protection anyway ? There is a whole instruction set to design before anything is possible !
because, just like today's CPUs are tailored towards the way memory management is handled in Windows and Linux, my CPU needs to be tailored towards a SASOS.

Last edited by Gorf; 19 April 2020 at 22:20.
Gorf is offline  
Old 19 April 2020, 21:40   #113
Samurai_Crow
Total Chaos forever!
 
Samurai_Crow's Avatar
 
Join Date: Aug 2007
Location: Waterville, MN, USA
Age: 49
Posts: 2,187
I think what Gorf is proposing is a multithreaded single task OS. One table to rule them all. Not much to context switching because all threads share a context.
Samurai_Crow is offline  
Old 19 April 2020, 21:51   #114
dreadnought
Registered User
 
Join Date: Dec 2019
Location: Ur, Atlantis
Posts: 1,918
I had to press PageDown twice to get through Gorf's latest reply. What an epic derail/exchange, reminds me of the golden years of forums, when such things were the norm

Quote:
Originally Posted by AJCopland View Post
It's why when I gave my ideal "new" Amiga I avoided anything more than a bump to the next proposed chipset of AA+ with a couple of hindsight tweaks and nothing more.

The Amiga is a product of its time and the coupling of the hardware and software with that balance of mid/low CPU + chipset is what makes it stand out from the crowd. Get rid of that and you've just got yet another PC.
I agree with this, and a few other posters' ideas re the next Amiga. Keep it old school, just slightly above the 2D level, with few bells and whistles, otherwise it's just turning into another PC and the cost would also be astronomical.

Some sort of A1200+ACA melt would do me just fine.
dreadnought is offline  
Old 19 April 2020, 22:06   #115
Gorf
Registered User
 
Gorf's Avatar
 
Join Date: May 2017
Location: Munich/Bavaria
Posts: 2,295
Quote:
Originally Posted by Samurai_Crow View Post
I think what Gorf is proposing is a multithreaded single task OS. One table to rule them all. Not much to context switching because all threads share a context.
you can look at it like this - with the addition of a permission based protection mechanism.

There is actually a term for that type of operating systems, which I used here previously - SASOS:

https://en.wikipedia.org/wiki/Single...erating_system

Last edited by Gorf; 19 April 2020 at 22:14.
Gorf is offline  
Old 19 April 2020, 22:11   #116
Gorf
Registered User
 
Gorf's Avatar
 
Join Date: May 2017
Location: Munich/Bavaria
Posts: 2,295
Quote:
Originally Posted by dreadnought View Post
Some sort of A1200+ACA melt would do me just fine.
What does ACA stand for?
Gorf is offline  
Old 19 April 2020, 22:32   #117
Samurai_Crow
Total Chaos forever!
 
Samurai_Crow's Avatar
 
Join Date: Aug 2007
Location: Waterville, MN, USA
Age: 49
Posts: 2,187
@Gorf
ACA is an accelerator brand.
Samurai_Crow is offline  
Old 19 April 2020, 22:37   #118
Gorf
Registered User
 
Gorf's Avatar
 
Join Date: May 2017
Location: Munich/Bavaria
Posts: 2,295
Quote:
Originally Posted by Samurai_Crow View Post
@Gorf
ACA is an accelerator brand.
ahhh - thank you!

so just a faster A1200 ... ok ...
Gorf is offline  
Old 20 April 2020, 09:12   #119
meynaf
son of 68k
 
meynaf's Avatar
 
Join Date: Nov 2007
Location: Lyon / France
Age: 51
Posts: 5,323
Quote:
Originally Posted by AJCopland View Post
For me that's an artefact of how un-Amiga the Vampire is though, it's a super-charged V12 on a skateboard. So you start looking at everything through that lens where the skateboard (Amiga ECS/AGA) is useless so you should just use the V12 (Vampire) for everything.

It's exactly how Intel would like everything to be with the CPU doing everything (before they abandoned that idea at long last I should add).

If you have a moderately powerful CPU with FastRam, and a moderately powerful chipset with ChipRam, then your Blitter can saturate the ChipRam bus and the CPU can go and saturate the FastRam at the same time.
Much like a modern x86/64 has RAM and the GPU has its own pool of gigabytes of RAM.

It's why when I gave my ideal "new" Amiga I avoided anything more than a bump to the next proposed chipset of AA+ with a couple of hindsight tweaks and nothing more.

The Amiga is a product of its time and the coupling of the hardware and software with that balance of mid/low CPU + chipset is what makes it stand out from the crowd. Get rid of that and you've just got yet another PC.
The Amiga is a fun machine to code on. The PC isn't. But removing the Blitter won't make the Amiga less fun ; actually i just never used it in my programs !
So i kinda understand what Gunnar does, except i don't agree with his additions which are targeted toward performance only and not programming flexibility nor code density (or mere design elegance).



Quote:
Originally Posted by Gorf View Post
there are fewer switches, because you do NOT need to switch to another list.
And you do not need to flush the whole TLB or invalidate data, since all tasks live in the same context memory-wise - we are talking about a SASOS.
But nothing prevents current systems from doing the same. The only difference is the presence of address translation.


Quote:
Originally Posted by Gorf View Post
you are still thinking the other way around - it is not about the permission for all tasks, but about the permission of the current region ..
What you've just written here implies the permissions for a given region do not depend on which task attempts to access it...


Quote:
Originally Posted by Gorf View Post
which is trivial, since the task lives in the real address space - so we do know the region. And we can check - many times even precheck - whether the necessary permissions are there.
It is also easy to look up, since the address is real and we can use the real (linear) address as an index
(minus some bits at the end)
Except that you cannot use the address as an index and get all the relevant info without indirection, because if you do, the accessed data depends solely on the region, not on the task accessing it.


Quote:
Originally Posted by Gorf View Post
no there is one line per region
So there is a single entry per region, and therefore the access rights cannot be for more than a single task or for all tasks at once. No two tasks can see the same region with different access levels.


Quote:
Originally Posted by Gorf View Post
no
Then what you write does not reflect what you think.


Quote:
Originally Posted by Gorf View Post
and there are hundreds of other details I did not mention yet. So?
You draw again very strange conclusions about things I did not say anything about ...
Then perhaps you should tell us about some of these things - the ones that, if hidden, will lead to misunderstandings.


Quote:
Originally Posted by Gorf View Post
but the sum of the tables contains more information than my table; therefore it is longer and needs more memory. In our example 100% more.
But we don't care about memory ! We have more than enough. What we care about is the size of what is currently relevant for the task being executed.


Quote:
Originally Posted by Gorf View Post
These permissions are at no time irrelevant, since the current task may try to read a location it has no permissions for.
Be realistic, what relevance for a task A is the fact that tasks B and C have access to some area and not task A itself ?


Quote:
Originally Posted by Gorf View Post
Hallelujah!
That is exactly the problem with the current design.

For an MPU the opposite is true: it would be foolish to invalidate the whole TLB, since it probably contains permissions for often-frequented regions like e.g. GUI libraries, network message ports ... or the task before the active one, which has a high probability of being active again soon....
Did you misread intentionally or not ?
What i wrote is precisely that current systems may well be able to not invalidate anything, hence the problem is gone !


Quote:
Originally Posted by Gorf View Post
again - the information is not primarily about the tasks but about the region. And of course this does not have to be canceled.
Nope - again it is irrelevant for a task that other tasks are able to access a given region or not.


Quote:
Originally Posted by Gorf View Post
these blocks are of course indexed by (parts of) the real address by its very nature, which makes it trivial.
Nah. If you index by address bits - like everyone else - then the information you can get is only for ONE task (as the task ID is indeed in your data). And there's a chance it's not the current one.


Quote:
Originally Posted by Gorf View Post
it is more performant, since you can avoid TLB flushes, make good predictions, store everything in a 2nd level cache and so on.
Plus you get rid of address translations and it is easy and fast to share information.
It makes messaging fast and mitigates most of the overhead mentioned in the "Singularity" research paper.
(they are not the only ones - this overhead is eg. the reason why projects like OSv exist and actually get used.)
So what ? If you think 25-33% cpu speed is lost in memory protection today, you've been left behind a few decades !
So big deal, you have 0.25% loss instead of 0.5% (and i'm not even sure these numbers aren't a lot higher than the real ones).


Quote:
Originally Posted by Gorf View Post

no it will not - that is the whole point.
Data regarding other tasks has a lot of chances to be obsolete.


Quote:
Originally Posted by Gorf View Post
you push only some new lines into the L1 cache - for small tasks that may be just one or two lines. The rest of the 100-200 lines can stay. This is a vast improvement.
And again, the same can be done with regular memory protection.
You can not quantify the gain your system can have. But what can be quantified is the loss of features it involves.


Quote:
Originally Posted by Gorf View Post
it is still relevant (see OSv)
OSv isn't a general-purpose OS attempting to replace currently existing ones. Perhaps it's even dead or old, i don't know : i couldn't even find a Wikipedia page for it.
So no, it's no longer relevant.


Nothing on topic. These papers are irrelevant to the subject at hand.


Quote:
Originally Posted by Gorf View Post
if they do not show the number of context switches, you do not have anything to back up your claim that these would be less frequent on a multi-core system.
If you have several tasks running at once, you mechanically have less context switches, this is bashing open doors !


Quote:
Originally Posted by Gorf View Post
(they still seem to be quite high and costly https://www.percona.com/blog/2017/11...text-switches/)
They are counted to see if there are contention issues in a dbase. Big deal.


Quote:
Originally Posted by Gorf View Post
you seem to have severe misconception of the word "proof".
(see next answer)
This is intellectually dishonest


Quote:
Originally Posted by Gorf View Post
So in your logic that would be a "proof" that you have nothing concrete to show. You see?
Complete nonsense. A proof would be there if you PM me like i indicated was doable and then find out there is indeed nothing. But you know full well there is something and so you don't do it.


Quote:
Originally Posted by Gorf View Post
Again making strange assumptions ...
I didn't make any assumption ! You will have the time someday, YES or NO ???
If answer is YES, then use it to produce something concrete.
IF NO, you have better use of it and you're wasting it here.


Quote:
Originally Posted by Gorf View Post
because, just like today's CPUs are tailored towards the way memory management is handled in Windows and Linux, my CPU needs to be tailored towards a SASOS.
Tailoring a CPU toward a specific OS is a big, big mistake IMO. It ceases to be general purpose. There are things it will never be able to do (unless late additions that totally destroy the concept).
meynaf is offline  
Old 20 April 2020, 11:49   #120
gimbal
cheeky scoundrel
 
gimbal's Avatar
 
Join Date: Nov 2004
Location: Spijkenisse/Netherlands
Age: 42
Posts: 6,919
Agree to disagree dudes, at some point you're just having a personal argument out in the open and are making it very hard for other people to get a word in. Plus the risk of causing offence just rises and rises.
gimbal is offline  
 

