English Amiga Board


Old 16 April 2020, 11:39   #41
meynaf
son of 68k
 
meynaf's Avatar
 
Join Date: Nov 2007
Location: Lyon / France
Age: 51
Posts: 5,355
Quote:
Originally Posted by Valken View Post
I meant for AmigaNext to go from 68K to PPC or a related CPU family to maintain the current AmigaOS library from 3.x to 4.x seamlessly.
Going from 68k to PPC is already a mistake. 68k is abandoned, but now PPC is too.
During all that time, bare-metal coders have liked the 68k for the way it is coded in asm, and disliked PPC for the same reason.
If you switch from 68k to anything else (anything not designed with the same easy-to-code goal), many coders will simply not follow.


Quote:
Originally Posted by Valken View Post
Near bare metal speeds.
Speed, speed, speed! The altar on which to sacrifice everything else!
Ease of coding? Who cares. Reliability? Meh. Durability? Not a target. Elegance of architecture? What are you even talking about.

Furthermore, we already have speed. Take any UAE, activate JIT and max speed.
Perhaps it is time to set up another goal.


Quote:
Originally Posted by Valken View Post
Apple did that.
And then they turned to x86, to the point that Macs could run Windows.
Nothing is left of the original Macs. Do you really want nothing to be left of the original Amigas?


Quote:
Originally Posted by Valken View Post
Now we have 4 different platforms that claim to be Amiga:

OS 3.x 68K
OS 4.x PPC only
Aros 68K and x86. Unsure of PPC
MorphOS PPC only
Amiga isn't just an operating system. This is easy to prove, as a lot of software actually doesn't even make use of it.
Therefore, among your list I think only one item is more than just a claim.


Quote:
Originally Posted by Valken View Post
Ideally it would be a Next Gen platform that would run OS 3.x to 4.x to 5.x seamlessly. Just click on the files and run it.
"Just click on the files and run it" already exists. It is called UAE.


Quote:
Originally Posted by Valken View Post
Then allow Amiga developers to adapt NEW tech on top of it.
And then discover that they actually do not follow, for some mysterious reason.


Quote:
Originally Posted by Valken View Post
Hence I said perhaps to target relatively new mass-produced PPC machines such as the PS3 or Xbox 360 because they're cheap, plentiful and available.
PPC machines are no longer mass produced, I'm afraid... The PS3 and Xbox 360 aren't new anymore either.

We don't need something that's only cheap, plentiful and available - the PC already is. Or even smartphones...
We need something we like to use and code on. Something that allows both running a multitasking OS and banging on the hardware. Something whose workings we know, something that does not do things behind our back.

Why want to use PPC anyway? The only valid reason seems to be raw speed. And if your criterion is availability or price, you should rather use ARM (for everything else, just keep 68k).
But even the claim that PPC is faster than 68k is questionable. The 68080 beats the crap out of a similarly clocked PPC (even higher-clocked ones). An FPGA implementation of PPC on the same chip is also beaten.

Modern ARM can compete with PPC.
And modern x86 beats them both.
And if a 68k could be implemented so that it fully uses today's chip tech - it would then beat x86.
This, for a simple reason: IPC (instructions executed per clock) is today more or less constant, regardless of the instruction set (for the same quality of implementation). So if you can do the same work with fewer instructions, you end up faster. As easy as that.
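A back-of-the-envelope illustration of that argument (the figures below are made up; only the proportions matter):
Code:
/* Toy model: with IPC and clock frequency held equal, runtime is simply
 * proportional to the number of instructions needed for the same work. */
#include <stdio.h>

int main(void)
{
    const double ipc = 3.0;       /* instructions retired per clock, assumed equal */
    const double hz  = 3.0e9;     /* 3 GHz for both hypothetical CPUs              */

    const double insns_dense = 8.0e9;   /* denser ISA: fewer instructions for the task  */
    const double insns_other = 12.0e9;  /* less dense ISA: more instructions, same task */

    printf("dense ISA : %.2f s\n", insns_dense / (ipc * hz));  /* ~0.89 s */
    printf("other ISA : %.2f s\n", insns_other / (ipc * hz));  /* ~1.33 s */
    return 0;
}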

I think wanting to use PPC today is nonsense.
Not only is PPC itself abandoned, the whole Power family is also dropping out of the race. Look at the 100 most powerful supercomputers and see that it is now nearly anecdotal even in its own niche.
The fight is now x86 vs Arm. The rest is either history or hopes in our hearts.



Quote:
Originally Posted by Steril707 View Post
The scene largely went back to 68k in the last few years, though.
There is perhaps a good reason for this.
meynaf is offline  
Old 16 April 2020, 12:29   #42
Valken
Registered User
 
Join Date: Feb 2009
Location: Amiga
Posts: 465
That was a long post from you, meynaf, and while I understand that you as a coder want a 68K hobby box, which all the HW projects already offer, I don't see that as a viable long-term business model to build up a sizable user base.

I mentioned the PPC CPU because it offers more coding options, up to 64 bits, can be very compatible with 68K, and is even cheaper than an x86 CPU.

The PPC's price would help consumers jump on earlier and give them access to a wider library of usable software at decent speeds, since it can emulate the 68k with good performance even at low clock speeds.

Sure, we can run UAE on ARM or x86, but no next-gen or extended Amiga software will come out of it, as it will use instruction sets foreign to OS4 or MorphOS unless someone ports an AmigaOS over.

AROS might work, but it is far from Amiga-game friendly in native mode. It will be limited at best to a 060 + RTG emulated state, which is no different from running UAE within another app. It is not seamless.

When I look at the SpectrumNext and see it running older games AND newer games, it shows what extending the platform can do.

I was not making an argument to only consider PPC. I was saying it was widely used, there are coders who know it, and it can provide a good base to run existing Amiga software, similar to how PowerMacs ran Mac 68K software natively. Apple users didn't need to run an emulator and then a Mac 68K application within it, like a VM.

Actually, PPC is still used in many embedded systems. We can consider ARM, but I do not know of an AmigaOS port to ARM so I cannot try it. It still has the limitation of running UAE, which I would rather run on a Mac or PC than on a mobile device.
Valken is offline  
Old 16 April 2020, 13:40   #43
E-Penguin
Banana
 
E-Penguin's Avatar
 
Join Date: Jul 2016
Location: Darmstadt
Posts: 1,217
Quote:
Originally Posted by -Acid- View Post
If I was designing an Amiga now I would scrap any idea of custom chips and use off-the-shelf PC parts. Custom chips are what finished the Amiga's competitive edge in the end, once hardware manufacturers caught up, overtook it and left the Amiga in the dust. It would be a waste of money making custom chips now when you can just use existing processors and GPUs etc. that other companies have already spent years and £millions developing; there is no way anyone could start from scratch now and design something to outperform today's hardware.
This. There's no way to compete with Intel/AMD/ARM on CPUs and GPUs and produce anything remotely competitive in both performance and price.

So off-the-shelf gubbins like Apple are doing, plus a bunch of FPGAs which can be programmed by the OS to run AmigaOS classic in a Vampire-esque environment. FPGAs open the window to OCS, ECS, AGA, AMMX, and further developments. Maybe put a Zorro bus on it to allow classic expansions.

Doesn't the X-5000 have something FPGA-like in it? Did it ever get used?
E-Penguin is offline  
Old 16 April 2020, 13:51   #44
Nosferax
Registered User
 
Join Date: Apr 2015
Location: Beauharnois,Qc,Canada
Posts: 227
Quote:
Originally Posted by Valken View Post
I would take the Macintosh approach when they moved from 68K to PPC then to x86.

Apple made it so the newer CPU architecture did not alienate the older software, thanks to an emulation layer. That was what made PPC popular, besides being faster clock for clock than the 486 at the time, maybe even on par with the Pentium.

All old 68K software, fully compatible with the new Finder or updated with patches.
PPC CPU, near bare-metal 68K emulator, can run both PPC and 68K code.
New modern bus, and ports for updated peripherals.
New hi-res screen modes, more audio channels and eventually 3D.

If we can EXTEND the Amiga through AmigaNext, with 99% software compatibility + near modern features, that may bring it back.

Hence I was looking at PS3/Xbox 360 for cheap HW + a port of AmigaOS or Aros or MorphOS to it as a base.
Nothing is preventing someone from building a Linux box and running FS-UAE full screen at boot... You then get a modern PC with the ability to run Amiga software.
Nosferax is offline  
Old 16 April 2020, 13:54   #45
Samurai_Crow
Total Chaos forever!
 
Samurai_Crow's Avatar
 
Join Date: Aug 2007
Location: Waterville, MN, USA
Age: 49
Posts: 2,200
The SAM460 and X5000 both have FPGAs but don't have them wired in for use as chipset replacements.

A Vampire made in ASIC form would clobber PPC speedwise. That's what I would like to see. A reverse JIT that translates PPC to 68080 would do the job of compatibility with MorphOS and AmigaOS 4.1 since QEmu already provides that.
Samurai_Crow is offline  
Old 16 April 2020, 13:55   #46
coldacid
WinUAE 4000/40, V4SA
 
coldacid's Avatar
 
Join Date: Apr 2020
Location: East of Oshawa
Posts: 538
Quote:
Originally Posted by E-Penguin View Post
Doesn't the X-5000 have something FPGA-like in it? Did it ever get used?

X1000 and X5000 both, the "Xena" coprocessor chip. And I've never heard of anything that actually used it.
coldacid is offline  
Old 16 April 2020, 14:09   #47
Tigerskunk
Inviyya Dude!
 
Tigerskunk's Avatar
 
Join Date: Sep 2016
Location: Amiga Island
Posts: 2,797
Quote:
Originally Posted by Valken View Post
That was a long post from you meynaf and while I understand that you as a coder want a 68K hobby box which is available in all the HW projects, I don't see that as a viable long term business model to build up a sizable user base.
And that's where your thinking goes wrong.
The Amiga already tried to go that route, and it failed.
It is a hobby box these days.

And I (like most other people) am cool with that.

In all the time I have been following the Amiga scene, people have never been more relaxed about it than now, after giving up the pipe dream of the Amiga needing to challenge Apple and Microsoft.
Tigerskunk is offline  
Old 16 April 2020, 14:15   #48
Gorf
Registered User
 
Gorf's Avatar
 
Join Date: May 2017
Location: Munich/Bavaria
Posts: 2,426
There are two or three different "Amigas" I would dream of.

The first would of course be a faster and more capable version of the original hardware that runs the original software. Something like the Vampire in ASIC form.
A 68080 at around 1 GHz.
(Maybe some AAA and Hombre custom chip features integrated as a bonus...)
The next step for such a system would be a 64-bit mode for the CPU...


The other approach would be something totally new in the spirit of the Amiga - but breaking compatibility. Hardware and software.
A computer that brings back the fun to computing and programming on all levels:

The CPU needs to be easy to program in assembler and provide a high code density.
Probably very close to the 68K, with some ideas from the NS320xx (even more orthogonality),
the Z8000 (flexible register file) and
the SuperH (for the compact instruction encoding, not the RISC part!)

It would have a flat single address space, and an MPU (memory protection unit) would provide security and process isolation rather than a traditional MMU. We could take this idea from the Mill CPU.
Here is Gandalf ... Albus Dumbledore ... Ivan Godard explaining how it would work:
https://millcomputing.com/docs/security/

and here how the CPU architecture can assist in fast IPC:
https://millcomputing.com/docs/inter...communication/

There would be no SSE or MMX unit in the CPU but a dedicated vector coprocessor with its own DMA:
http://vectorblox.github.io/mxp/mxp_reference.html
Why? Because these SIMD and MIMD instructions can take "forever" and are bad for fast interrupts and context switches (that is, for example, why these instructions are turned off inside the Linux kernel itself...).
The MXP would act like the Copper: tell it what to do and let it run. The OS would manage which task can use this resource...
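To make the "tell it what to do and let it run" idea concrete, here is a rough sketch of what driving such a unit could look like from the CPU side. None of these names are a real MXP or OS API - they're invented for the illustration; the point is only the shape: build a command list, hand it over, and keep running while the unit works through it with its own DMA.
Code:
/* Hypothetical fire-and-forget vector coprocessor interface, Copper-style. */
#include <stdint.h>
#include <stddef.h>

typedef struct {
    uint32_t opcode;   /* e.g. "add two vectors element-wise"  */
    uint32_t src_a;    /* DMA source address A                 */
    uint32_t src_b;    /* DMA source address B                 */
    uint32_t dst;      /* DMA destination address              */
    uint32_t count;    /* number of elements                   */
} VecCmd;

/* Would be provided by the (hypothetical) OS, which also decides which task
 * currently owns the unit. */
extern void vec_submit(const VecCmd *list, size_t n);  /* hand over the list, return at once */
extern void vec_wait(void);                            /* block until the unit signals done  */

void vector_add(uint32_t a, uint32_t b, uint32_t dst, uint32_t n)
{
    VecCmd list[1] = {
        { 1 /* opcode: add */, a, b, dst, n }
    };

    vec_submit(list, 1);   /* like handing the Copper a list: it runs on its own  */
    /* ... the CPU is free to do other work here ... */
    vec_wait();            /* synchronise only when the result is actually needed */
}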

And for the OS itself ... that would take a lot of space to describe!
Just a hint: messages! fast messages! - everything handled by messages.
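For reference, classic AmigaOS Exec already works this way inside its single address space: a "sent" message is just a pointer handed from one task's port to another, nothing gets copied. A minimal sketch of the sending side using the real exec.library calls (error handling and the receiving task left out; the port name is made up for the example):
Code:
/* Single-address-space message passing with exec.library: only the pointer
 * to the message changes hands, the payload stays where it is. */
#include <exec/types.h>
#include <exec/ports.h>
#include <proto/exec.h>

struct MyMsg {
    struct Message msg;    /* standard Exec message header */
    ULONG          value;  /* payload, never copied        */
};

void send_value(ULONG value)
{
    struct MsgPort *reply = CreateMsgPort();
    struct MsgPort *dest;
    struct MyMsg m;

    if (!reply)
        return;

    m.msg.mn_Node.ln_Type = NT_MESSAGE;
    m.msg.mn_ReplyPort    = reply;
    m.msg.mn_Length       = sizeof(m);
    m.value               = value;

    Forbid();                               /* port must not vanish while we use it */
    dest = FindPort("SOME_SERVER_PORT");    /* name invented for the example        */
    if (dest)
        PutMsg(dest, &m.msg);               /* the "transfer" is this pointer       */
    Permit();

    if (dest) {
        WaitPort(reply);                    /* sleep until the server ReplyMsg()s   */
        GetMsg(reply);
    }
    DeleteMsgPort(reply);
}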
Gorf is offline  
Old 16 April 2020, 14:25   #49
Gorf
Registered User
 
Gorf's Avatar
 
Join Date: May 2017
Location: Munich/Bavaria
Posts: 2,426
Quote:
Originally Posted by coldacid View Post
X1000 and X5000 both, the "Xena" coprocessor chip. And I've never heard of anything that actually used it.
Yes, this was very wishful thinking: "let's just include it, someone will use it for sure".
They should have provided at least ONE practical use case ...
But as far as I know there is not even a "xena.library" in the OS, is there?

dead end anyway...
Gorf is offline  
Old 16 April 2020, 15:16   #50
meynaf
son of 68k
 
meynaf's Avatar
 
Join Date: Nov 2007
Location: Lyon / France
Age: 51
Posts: 5,355
Quote:
Originally Posted by Valken View Post
That was a long post from you meynaf and while I understand that you as a coder want a 68K hobby box which is available in all the HW projects, I don't see that as a viable long term business model to build up a sizable user base.
Build up a sizable user base? Of course it won't. Nothing will.


Quote:
Originally Posted by Valken View Post
The mention of the PPC cpu was that it offered more coding options up to 64bits and can be very compatible with 68K and is even cheaper than a x86 cpu.

The PPC price would help consumers jump on earlier, provide them access to wider library of usable software at decent speeds due to the point that it can emulate the 68k with good performance even running at low speeds.
I don't think PPC is cheaper than x86. And even if it is, its price can only rise because neither Macs nor consoles use it any longer.


Quote:
Originally Posted by Valken View Post
Sure we can run UAE on ARM or x86 but no nextgen or extended Amiga software will come out of it as it will use instruction sets foreign to OS4 or MorphOS unless someone ports an AmigaOS over.
Instruction sets are hardly a problem. Nearly nobody would write software in asm for these...


Quote:
Originally Posted by Valken View Post
When I look at the SpectrumNext and see it running older games AND newer games, it shows what extending the platform can do.
Extending 8-bit machines seems quite different to extending the Amiga. They are somewhat simpler.


Quote:
Originally Posted by Valken View Post
I was not making an argument to only consider PPC. I was saying it was widely used, there are coders who know it and it can provide a good base to run existing Amiga software similar to how PowerMacs ran Mac 68K natively. Apple users didn't need to run an emulator then a Mac 68K application within it like a VM.
While it could run Mac 68k apps, it didn't do so faster than true 68k - actually, slower. It was faster only for apps compiled for it.
Now PPC already exists in some Amiga accelerator boards, but their success has been quite limited and isn't growing.


Quote:
Originally Posted by Valken View Post
Actually, PPC is still used in many embedded systems.
So is 68k.


Quote:
Originally Posted by Valken View Post
We can consider ARM but I do not know of an AmigaOS port to ARM so cannot try it. It still has the limit of running UAE which I rather run off a Mac or PC than on a mobile device.
Again, the AmigaOS alone does not make an Amiga. Porting it to another platform is meaningless.



Quote:
Originally Posted by Gorf View Post
the next step for such a system would be a 64bit mode for the CPU...
This already exists in the 68080, unless it has been canceled without me knowing...


Quote:
Originally Posted by Gorf View Post
The CPU needs to be easy to program in assembler and provide a high code density.
This is the 68k's original design goal. True, nothing can fully replace it without that.
It would be very cool to have a new ISA that uses the 68k as its main inspiration and takes what's good from various architectures.
I could even design such an instruction set myself to achieve these goals, but who will implement it?


Quote:
Originally Posted by Gorf View Post
The MXP would act like the Copper: tell it what to do and let it run. The OS would manage, what task can use this resource ...
Hey, wait. Isn't this exactly what a GPU does ?
meynaf is offline  
Old 16 April 2020, 15:49   #51
Gorf
Registered User
 
Gorf's Avatar
 
Join Date: May 2017
Location: Munich/Bavaria
Posts: 2,426
Quote:
Originally Posted by meynaf View Post
This already exists in the 68080, unless it has been canceled without me knowing...
I know it is said to be 64-bit "internally", but I do not see how this is exposed yet...
There are the well-known pros and cons of going 64-bit... and I am not sure if it is really needed in terms of address space...
Quote:
This is the 68k's original design goal. True, nothing can fully replace it without that.
It would be very cool to have a new ISA that uses 68k as great inspiration and takes what's good in various architectures.
I could even design such an instruction set myself to achieve these goals, but who will implement it ?
Gunnar?
(I am right now reading a lot about the RISC-V architecture and how to implement it... but I am far from being able to do a good VHDL or Verilog design... maybe some AI or ML approach will offer us a more high-level design path: tell it what it is supposed to do and let the AI figure out the best design...)


Quote:
Hey, wait. Isn't this exactly what a GPU does ?
It works more like the old-school vector units in mainframes...
It is a little bit more general-purpose than the usual GPU, but can of course be used as such as well.
Imagine you put your MMX/SSE/AltiVec units, your GPU's shader units and a DSP into one dedicated co-processor (or a bunch of them).

On the other hand, Nvidia announced putting a RISC-V core in each shader cluster, in the "Falcon"... giving it the same capabilities.

So on the plus side for the VectorBlox MXP: it has a much nicer instruction set!
And that is what we want, isn't it?
Gorf is offline  
Old 16 April 2020, 16:18   #52
meynaf
son of 68k
 
meynaf's Avatar
 
Join Date: Nov 2007
Location: Lyon / France
Age: 51
Posts: 5,355
Quote:
Originally Posted by Gorf View Post
I know it is said to be 64bits "internally" but I do not see how this is exposed yet ...
IIRC registers have been extended to 64-bit and a few encodings exploit this.
That said, to my knowledge nobody used this extension, not even its creators. It could have been deleted without anyone noticing...


Quote:
Originally Posted by Gorf View Post
There are the well known pros and cons for going 64bit ... and I am not sure if it is really needed in terms of address space ..
Address space is the biggest, if not the only, reason for going 64-bit. Otherwise it would be more or less useless.
Actually, my ideal CPU would use a hybrid 32/64 approach. Full 64-bit overloads the instruction encodings more than it's worth.


Quote:
Originally Posted by Gorf View Post
Gunnar?
Anyone but him! He already rejected my suggested changes to the 68k, and our disagreements made me leave the team...


Quote:
Originally Posted by Gorf View Post
(I am right now reading a lot about RiscV architecture and how to implement it ... but I am far away from being able to do a good VHDL or Verilog design ... maybe some AI or ML approach will offer us a more high level design path.. tell it what it is supposed to do and let the AI figure out the best design...)
RISC-V is designed with ease of implementation in mind, nothing like what we would want.
And don't count on AI or anything similar; for this level of complexity it's still sci-fi.


Quote:
Originally Posted by Gorf View Post
It works more like the old school vector units in mainframes ...
it is a little bit more general-purpose than the usual GPU, but can of course be used as such as well.
Imagine you put your MMX/SSE/AltiVec units, your GPU's shader units and a DSP into one dedicated co-processor (or a bunch of them).

On the other hand, Nvidia announced putting a RISC-V core in each shader cluster, in the "Falcon"... giving it the same capabilities.
Well, then, it is more like GPGPU, but anyway, the principle remains the same: put it outside the normal CPU so that both can work in parallel (and it doesn't clutter the regular instruction set).


Quote:
Originally Posted by Gorf View Post
So on the plus for VectorBlox MXP: it has a much nicer instruction set!
And that is what we want, isn't it?
Yes, but we hardly touch the GPU's instruction set directly nowadays, do we?
If we could have some API on top of it, then switching to a more powerful but incompatible one would be easy and would benefit existing apps as well as new ones.
meynaf is offline  
Old 16 April 2020, 16:26   #53
roondar
Registered User
 
Join Date: Jul 2015
Location: The Netherlands
Posts: 3,437
To be absolutely honest, if I had all the money in the world to make a "new Amiga" right now, I'd probably make, err, an old Amiga, but with new components... Basically I'd rebuild the A500/A1200 with maybe some small changes in terms of storage options. If I was really flush I'd probably push for remaking new 3.5" floppies.

Then again, I'm kind of a crazy retro-obsessed guy, so yeah.
roondar is offline  
Old 16 April 2020, 17:27   #54
Gorf
Registered User
 
Gorf's Avatar
 
Join Date: May 2017
Location: Munich/Bavaria
Posts: 2,426
Quote:
Originally Posted by meynaf View Post
Address space is the biggest, if not the single, reason for going 64bit. Else it would be more or less useless.
Actually, my ideal cpu would use a hybrid 32/64 way. Full 64-bit overloads the instruction encodings more than it's worth.
You are totally right here from that point of view...
My only concern here is the ability to do fast messaging by handing over pointers to data structures (remember: single address space!).
So how do we do that in a hybrid design?
If the program is "living" in its 32-bit address space, how can it hand over its real pointers in the 64-bit space?
But maybe there is a simple solution I just don't see...

Quote:
Anyone but him ! He already rejected my suggested changes about 68k and our disagreements had me leave the team...
Realistically this would have to start in VM/emulation anyway... and in light of the success of UAE and even "fantasy computers" like the Pico-8, the question arises whether it really needs to be real hardware...
(I would still long for the real thing, but a virtual dream Amiga-like computer seems better than no such computer at all.)
Quote:
RISC-V is designed with ease of implementation in mind, nothing like what we would want.
I know, but it gives me the chance to better understand how the implementation process from specs to real hardware works.

Quote:
And don't count on AI or anything similar; for this level of complexity it's still sci-fi.


Quote:
Well, then, it is more like GPGPU, but anyway, the principle remains the same : put it out of normal CPU so that both can work in parallel (and it doesn't clutter the regular instruction set).
right.
Quote:
Yes, but we hardly touch the GPU's instruction set directly nowadays, do we?
But shouldn't we?
We don't, because it would be no fun and/or because we do not even have direct access but are restricted to the models and libraries a vendor like Nvidia lets us see...

Quote:
If we could have some API on top of it, then switching to a more powerful but incompatible one would be easy and benefitting to existing apps as well as new ones.
Do we need that?
It's not really about more and more power, is it?
And also: someone would have to adjust the backend to the new (proprietary) hardware... and we all know how that ends...

Last edited by Gorf; 16 April 2020 at 17:33.
Gorf is offline  
Old 16 April 2020, 18:37   #55
meynaf
son of 68k
 
meynaf's Avatar
 
Join Date: Nov 2007
Location: Lyon / France
Age: 51
Posts: 5,355
Quote:
Originally Posted by Gorf View Post
You are totally right here from that point of view...
My only concern here is the ability to do fast messaging by handing over pointers to data structures (remember: single address space!).
So how do we do that in a hybrid design?
If the program is "living" in its 32-bit address space, how can it hand over its real pointers in the 64-bit space?
But maybe there is a simple solution I just don't see...
It's true that the 32/64 approach I'm thinking about is incompatible with a single address space.
However, the OS could do both. I mean, when we're on a 64-bit system with lots of memory, we live on a powerful machine which can handle heavier message passing. No single address space in this case. But if we're on a more modest 32-bit one (even if only from the memory-size point of view), the message is passed directly, in a single address space.
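A sketch of that dual behaviour behind one call, with every name invented for the illustration (on the 64-bit variant the copy would really be done by the kernel, since the two sides don't share an address space):
Code:
/* Hypothetical send(): same interface, two builds. On the "modest" 32-bit
 * single-address-space system the message is just the pointer; on the 64-bit
 * system with isolated address spaces the payload is copied across. */
#include <stddef.h>
#include <string.h>

struct Port;                                        /* opaque destination port      */
extern void  enqueue(struct Port *dst, void *msg);  /* hand the message to the port */
extern void *alloc_in_receiver(struct Port *dst, size_t len);

#ifdef SINGLE_ADDRESS_SPACE                         /* the "modest" 32-bit machine  */
void send(struct Port *dst, void *payload, size_t len)
{
    (void)len;
    enqueue(dst, payload);             /* zero copy: receiver sees the same memory  */
}
#else                                               /* 64-bit, separate spaces      */
void send(struct Port *dst, void *payload, size_t len)
{
    void *copy = alloc_in_receiver(dst, len);       /* kernel-side allocation       */
    memcpy(copy, payload, len);                     /* heavier, but affordable here */
    enqueue(dst, copy);
}
#endif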


Quote:
Originally Posted by Gorf View Post
Realistically this would have to start in VM/emulation anyway... and in light of the success of UAE and even "fantasy computers" like the Pico-8, the question arises whether it really needs to be real hardware...
(I would still long for the real thing, but a virtual dream Amiga-like computer seems better than no such computer at all.)
I agree doing it in pure SW seems a good way for us coders to get what we want.
However, writing such a VM is no easy task, especially if you want it to be fast.
It took many years of development before UAE reached its current level.
I can do, and have done, a small VM in 68k asm. But implementing CPU emulation in a more portable way (like in C) is a horror story for me. And having to write an x86 or ARM JIT gives me the creeps.


Quote:
Originally Posted by Gorf View Post
But shouldn't we?
We don't, because it would be no fun and/or because we do not even have direct access but are restricted to the models and libraries a vendor like Nvidia lets us see...
Should we allow direct GPU code or not?
Even if we have a better instruction set than what currently exists, it will never match the main CPU in that respect (at least, not the main CPU as I see it).


Quote:
Originally Posted by Gorf View Post
Do we need that?
It's not really about more and more power, is it?
and also: someone would have to adjust the backend to the new (proprietary) hardware ... and we all know how this ends ...
That's a good question. Do we want it to ever evolve ?
meynaf is offline  
Old 16 April 2020, 19:59   #56
Samurai_Crow
Total Chaos forever!
 
Samurai_Crow's Avatar
 
Join Date: Aug 2007
Location: Waterville, MN, USA
Age: 49
Posts: 2,200
Re: Hardware banging vs. "System friendly"

AmigaDE had a good compromise between hardware banging and system friendliness: macro-substitute the hardware-banging code from the drivers into the executable at load time and cache the resultant executable to disk. The dependency checking would just be like a makefile. Maybe the dependency checking could be customized to depend on revision numbers rather than datestamps.
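A rough sketch of that makefile-like check, with revision numbers as the dependency key (every function name here is invented; this is not a quote of AmigaDE's real mechanism):
Code:
/* Hypothetical loader: reuse the cached, driver-specialised executable if its
 * recorded driver revision still matches, otherwise re-substitute and re-cache,
 * exactly like make rebuilding a stale target. */
#include <stdint.h>

struct Binary;                                               /* the loadable program        */
extern uint32_t       installed_driver_revision(void);       /* revision of current drivers */
extern uint32_t       cached_driver_revision(const char *name);
extern struct Binary *load_cached(const char *name);
extern struct Binary *substitute_driver_code(const char *name);  /* inline hw-banging code  */
extern void           write_cache(const char *name, struct Binary *b, uint32_t rev);

struct Binary *load_program(const char *name)
{
    uint32_t rev = installed_driver_revision();

    if (cached_driver_revision(name) == rev)        /* dependency up to date: reuse it */
        return load_cached(name);

    struct Binary *b = substitute_driver_code(name);   /* stale: rebuild and re-cache  */
    write_cache(name, b, rev);
    return b;
}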
Samurai_Crow is offline  
Old 16 April 2020, 20:01   #57
Gorf
Registered User
 
Gorf's Avatar
 
Join Date: May 2017
Location: Munich/Bavaria
Posts: 2,426
Quote:
Originally Posted by meynaf View Post
It's true that the 32/64 approach I'm thinking about is incompatible with a single address space.
However, the OS could do both. I mean, when we're on a 64-bit system with lots of memory, we live on a powerful machine which can handle heavier message passing. No single address space in this case.
That would contradict the plans I have for the OS, its IPC and remote calls... and the plan to make an elegant and transparent implementation...
Maybe the 32-bit CPU could hold one additional 64-bit-wide "real pointer" for that purpose.
Quote:
I agree doing it in pure SW seems a good way for us coders to get what we want.
However, writing such a VM is no easy task, especially if you want it to be fast.
It took many years of development before UAE reached its current level.
I can do, and have done, a small VM in 68k asm. But implementing CPU emulation in a more portable way (like in C) is a horror story for me. And having to write an x86 or ARM JIT gives me the creeps.
How about Michal Schulz's 68k JIT for big-endian ARM?
It is a straightforward implementation and could probably be adjusted for a slightly different 68k-like CPU... still not pleasant, but probably easier than starting from scratch.

Quote:
Should we allow direct GPU code or not?
Even if we have a better instruction set than what currently exists, it will never match the main CPU in that respect (at least, not the main CPU as I see it).
That depends on what our GPU should look like... I would say yes, it should be directly accessible, and it should be easy and pleasant enough to program that people actually want to do so...
That probably means fewer features and less power than the gfx monsters we have today, but still much more powerful than any classic chipset.

(For me it would be mainly 2D and offer a lot of "playfields" or layers, a display list to stitch different parts and objects together, and a "Blitter" that can transform/zoom/rotate these objects...
The 3D part could be a pure voxel space.)

Quote:
That's a good question. Do we want it to ever evolve ?
I would say: no.
It should be "as is" and stay that way. (But allow for additional co-processors like video-codecs and more parallelism)
It should be something that can stay for another 30 years, like the original Amiga.

Last edited by Gorf; 16 April 2020 at 20:14.
Gorf is offline  
Old 16 April 2020, 21:03   #58
meynaf
son of 68k
 
meynaf's Avatar
 
Join Date: Nov 2007
Location: Lyon / France
Age: 51
Posts: 5,355
Quote:
Originally Posted by Gorf View Post
That would contradict the plans I have for the OS, its IPC and remote calls... and the plan to make an elegant and transparent implementation...
Maybe the 32-bit CPU could hold one additional 64-bit-wide "real pointer" for that purpose.
What plans ? In which ways would they be contradicted ?


Quote:
Originally Posted by Gorf View Post
How about Michal Schulz's 68k JIT for big-endian ARM?
It is a straightforward implementation and could probably be adjusted for a slightly different 68k-like CPU... still not pleasant, but probably easier than starting from scratch.
It sounds more than unpleasant. Aside from the fact that current ARM use is little-endian, understanding existing software is often more complicated than writing it from scratch. Not to mention it still requires deep knowledge (of ARM) to make important changes - I would even prefer having to do that on x86...

After that, of course, there is the problem of emulating the hardware - or at least writing some kind of placeholder OS to run on the host.

Doing all this on 68k would be a lot easier for me, as I know both source and target platforms well enough.

I admit I don't know how the heck to write a JIT; I just can't solve the problems of having more registers than the host, branches whose targets can't be known in advance, the differences in CCR handling, when to start and stop the conversion, where to put the resulting code in memory, etc.
Some kind of placeholder code doing that for a simple case would help.
Then making the changes step by step becomes possible (I suppose).
Let's say I want to add some nonexistent instruction or addressing mode to the current 68k and I find an encoding for it: if the JIT has to write larger code, I'm stuck.
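For what it's worth, the usual shape of the answers (not meynaf's nor any particular emulator's code - every name below is invented for the sketch): translate one basic block at a time the first time its address is reached, key the generated code by guest PC in a translation cache, emit it into one big buffer, and let every branch whose target wasn't known at translation time come back through the dispatcher.
Code:
/* Hypothetical dispatcher sketch. translate_block() is a stand-in for the
 * real work (decoding 68k, allocating host registers, mapping the CCR);
 * here we only show where its output goes and when it gets called. */
#include <stdint.h>
#include <stddef.h>

#define CACHE_SLOTS 4096

/* A translated block runs some host code, then returns the next guest PC
 * (0 meaning "stop"), so branch targets never need to be known in advance. */
typedef uint32_t (*HostBlock)(void);

struct TCEntry {
    uint32_t  guest_pc;   /* 68k address the block starts at        */
    HostBlock code;       /* where its translated code was emitted  */
};

static struct TCEntry cache[CACHE_SLOTS];  /* translation cache, hashed by PC   */
static uint8_t code_buf[1 << 20];          /* one big buffer for emitted code   */
                                           /* (would need to be executable)     */
static size_t  code_used;

/* Invented helper: compiles one basic block, stopping at the first branch,
 * appends the host code to the buffer and returns its entry point. */
extern HostBlock translate_block(uint32_t guest_pc, uint8_t *buf, size_t *used);

static HostBlock lookup_or_translate(uint32_t guest_pc)
{
    struct TCEntry *e = &cache[(guest_pc >> 1) % CACHE_SLOTS];

    if (e->code == NULL || e->guest_pc != guest_pc) {   /* miss or collision */
        e->guest_pc = guest_pc;                         /* translate lazily  */
        e->code     = translate_block(guest_pc, code_buf, &code_used);
    }
    return e->code;
}

void run(uint32_t start_pc)
{
    uint32_t pc = start_pc;
    while (pc != 0)
        pc = lookup_or_translate(pc)();  /* indirect branches resolve here */
}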
meynaf is offline  
Old 16 April 2020, 22:21   #59
Samurai_Crow
Total Chaos forever!
 
Samurai_Crow's Avatar
 
Join Date: Aug 2007
Location: Waterville, MN, USA
Age: 49
Posts: 2,200
Some older ARM CPUs still have a big-endian mode. Likewise, 64-bit ARM has 31 general-purpose registers, unlike the 32-bit version, which has 15. Emu68 uses a hash table to keep track of the 68k registers on 32-bit ARM7, but 64-bit ARM is both faster and has more registers on the PineBook Pro laptop that Michal Schulz uses.
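Just to illustrate the general problem meynaf mentioned (more guest registers than host registers) rather than how Emu68 actually does it: the sixteen 68k registers live in a memory block, and the translator keeps a small map of which ones are currently cached in host registers, spilling the least recently used one when it runs out. All names below are invented.
Code:
/* Sketch of a guest-register cache for a JIT. The comments stand in for the
 * load/store instructions the translator would actually emit. */
#include <stdint.h>

#define GUEST_REGS 16   /* D0-D7 and A0-A7                               */
#define HOST_SLOTS  8   /* host registers reserved for caching (assumed) */

static uint32_t guest[GUEST_REGS];   /* authoritative 68k register file in memory */

static struct {
    int      valid;      /* slot in use?                                  */
    int      guest_reg;  /* which 68k register is cached here             */
    int      dirty;      /* modified since it was loaded?                 */
    unsigned last_use;   /* for least-recently-used eviction              */
} slot[HOST_SLOTS];

static unsigned now;

/* Return the host slot that holds guest register r, loading it (and spilling
 * the least recently used victim) if necessary. */
int map_guest_reg(int r)
{
    int i, victim = 0;

    for (i = 0; i < HOST_SLOTS; i++)            /* already cached?          */
        if (slot[i].valid && slot[i].guest_reg == r) {
            slot[i].last_use = ++now;
            return i;
        }

    for (i = 1; i < HOST_SLOTS; i++)            /* pick LRU slot as victim  */
        if (slot[i].last_use < slot[victim].last_use)
            victim = i;

    if (slot[victim].valid && slot[victim].dirty) {
        /* emit: store the host register back into guest[slot[victim].guest_reg] */
    }

    /* emit: load guest[r] into the host register backing this slot */
    slot[victim].valid     = 1;
    slot[victim].guest_reg = r;
    slot[victim].dirty     = 0;
    slot[victim].last_use  = ++now;
    return victim;
}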
Samurai_Crow is offline  
Old 16 April 2020, 22:40   #60
Gorf
Registered User
 
Gorf's Avatar
 
Join Date: May 2017
Location: Munich/Bavaria
Posts: 2,426
Quote:
Originally Posted by meynaf View Post
What plans ? In which ways would they be contradicted ?
in my secret ones

as mentioned before "my" OS is heavily based on message passing - very fast message passing. Lots of things, that are now done via library calls would be replaced with messages. almost everything system related and everything hardware access related - except you grant a trustworthy app the rights for direct hardware access.

That is done to provide a simpler and more elegant way of multiprocessing and resource management and tracing ... as well as a simpler way of sandboxing.

So the whole system performance relies on the speed messages can be exchanged... that's why it needs a single address space and fast context switching.
Quote:
It sounds more than unpleasant. Aside from the fact that current ARM use is little-endian,
It is bi-endian, and Michal's "Emu68" runs under big-endian AROS for ARM.

Quote:
After that, of course, there is the problem of emulating the hardware - or at least writing some kind of placeholder OS to run on the host.
Sure, but doing it in VHDL for an FPGA is not really simpler...

Quote:
Doing all this on 68k would be a lot easier for me, as I know both source and target platforms well enough.

I admit I don't know how the heck to write a JIT; I just can't solve the problems of having more registers than the host, ...
How many registers would your CPU have?

I think Elate/Tao's VM was very register-heavy... but I don't know how they did it.
Parrot VM is also supposed to have many registers... it is open source, but I did not really look into it. From the docs:

Quote:
For instance, the Parrot VM will have a register architecture, rather than a stack architecture. It will also have extremely low-level operations, more similar to Java's than the medium-level ops of Perl and Python and the like.

The reasoning for this decision is primarily that by resembling the underlying hardware to some extent, it's possible to compile down Parrot bytecode to efficient native machine language.

Moreover, many programs in high-level languages consist of nested function and method calls, sometimes with lexical variables to hold intermediate results. Under non-JIT settings, a stack-based VM will be popping and then pushing the same operands many times, while a register-based VM will simply allocate the right amount of registers and operate on them, which can significantly reduce the amount of operations and CPU time.

To be more specific about the software CPU, it will contain a large number of registers. The current design provides for four groups of N registers; each group will hold a different data type: integers, floating-point numbers, strings, and PMCs. (Polymorphic Containers, detailed below.)

Registers will be stored in register frames, which can be pushed and popped onto the register stack. For instance, a subroutine or a block might need its own register frame.
http://www.parrot.org

Last edited by Gorf; 16 April 2020 at 23:05.
Gorf is offline  
 

