English Amiga Board


Old 28 June 2023, 12:07   #61
Thomas Richter
Registered User
 
Join Date: Jan 2019
Location: Germany
Posts: 3,288
Quote:
Originally Posted by meynaf View Post
No it is not a minor part of the deal. If you believe it is minor then you will neglect it, and we all know what happens in programs when something is overlooked.
What always happens. Of course no architecture ever survived contact with reality, but that's when I iterate: think about what I missed about the problem, and adjust. That's the normal cycle. Every really good piece of code is written at least twice. (-; Unless you really have a lot of experience, but that rarely happens.



Quote:
Originally Posted by meynaf View Post
No programming language is really practical. They all have their issues.
Some more, some less. But while I can express an architecture quite well in a higher-level language (more in C++, less in C), I cannot really express much in assembler. Assembler has no way of expressing datatypes, constraints, or dependencies. It is just a pile of code. The assembler does not help me at all to get my job done.



Quote:
Originally Posted by meynaf View Post

Perhaps the problem comes from the fact you see programming as a job, not as an art.
"Programming" is a job. Designing a well-working, sustainable architecture is an art.


Quote:
Originally Posted by meynaf View Post

I don't think the inhabitant will agree.
If software wasn't about code, used language would be totally unimportant.
Which again shows that you have never thought about the problem. Of course inhabitants will agree. To build a house, you need a plan. You need to discuss with the inhabitants how large the living room is, which side of the house the garden is on, and how to reach the dining room from the kitchen. If you just start building, you get exactly what you get if you just start coding: a pile of crap. The inhabitants will not be happy at all with the result.


Surely, somebody eventually has to turn the plan into reality, but that's a craft, not an art.


To explain the difference a little in terms of software:


graphics.library: "a coder's job". A pile of crap code, hard to extend, hard to maintain, not scalable, not extensible. Bad abstractions, too tightly tied to the hardware. That's asm and C mixed. People just threw function after function into graphics: "oh, here's another piece we can make use of".
The net result is structures like "GfxAssociate", because its internal workings depend too much on all the documented structures. A very bad interface.



intuition.library: "an architect's job". Good structures, decent isolation of code and interface, extensible. That's C only.
Thomas Richter is offline  
Old 28 June 2023, 12:25   #62
Samurai_Crow
Total Chaos forever!
 
 
Join Date: Aug 2007
Location: Waterville, MN, USA
Age: 49
Posts: 2,190
Quote:
Originally Posted by Thomas Richter View Post
To explain the difference a little in terms of software:


graphics.library: "a coder's job". A pile of crap code, hard to extend, hard to maintain, not scalable, not extensible. Bad abstractions, too tightly tied to the hardware. That's asm and C mixed. People just threw function after function into graphics: "oh, here's another piece we can make use of".
The net result is structures like "GfxAssociate", because its internal workings depend too much on all the documented structures. A very bad interface.



intuition.library: "an architect's job". Good structures, decent isolation of code and interface, extensible. That's C only.
WTH? They both need to be rewritten!

The AmigaOS is so decidedly obsolete now that the only purpose of Graphics.library is to be the glue between the OS and the chipset. One particular optimization that 68k C compilers have yet to figure out is limiting the register dumps to and from the stack, at the beginning and end of every function respectively, to only the registers actually used by the subroutine. PPC has some glue in its linker that allows only some of the registers to be stored. How many years until 68k gets this optimization? The world may never know.

Intuition.library has loads of legacy code in it that hasn't been utterly replaced by more modular code. GadTools.library can't function without Intuition.library, but to migrate the code with the least amount of legacy cruft involved, the roles need to be reversed to get a pure BOOPSI implementation, such that Intuition can be deprecated and evicted from future Kickstarts. I doubt it will ever happen now, but as long as we can dream....
Samurai_Crow is offline  
Old 28 June 2023, 12:25   #63
alexh
Thalion Webshrine
 
 
Join Date: Jan 2004
Location: Oxford
Posts: 14,431
You can do some very legal things in C that look simple but are extremely expensive in terms of processing overhead, memory and stack usage.

Things that today are mainly irrelevant to most programmers because all three are in large supply on modern systems.

Some of today's embedded systems are very similar to home computers of the 80s and 90s in terms of their CPU power and RAM.

It's practically impossible to recruit good software engineers to work in embedded programming who actually understand the ramifications of the C code they write. Most have been poisoned by decades of writing code without caring about that level of optimisation.

I'm the ASIC engineer (chip designer), and I've lost count of the number of times I have had to ask the firmware engineers writing C for my hardware: "Do you actually understand what that is going to turn into?"

I try to hire old skool programmers for embedded when I can. But most are in their 50s and demand salaries outside my budget these days.
alexh is offline  
Old 28 June 2023, 12:50   #64
meynaf
son of 68k
 
 
Join Date: Nov 2007
Location: Lyon / France
Age: 51
Posts: 5,350
Quote:
Originally Posted by Thomas Richter View Post
What always happens. Of course no architecture ever survived contact with reality, but that's when I iterate: think about what I missed about the problem, and adjust. That's the normal cycle. Every really good piece of code is written at least twice. (-; Unless you really have a lot of experience, but that rarely happens.
Code is that "contact with reality". So you can't say it is unimportant.


Quote:
Originally Posted by Thomas Richter View Post
Some more, some less. But while I can express an architecture quite well in a higher language (more in C++, less in C), I cannot really express much in assembler. Assembler has no way of expressing datatypes, of expressing constraints and dependencies. It is just a pile of code. The assembler does not help me at all to get my job done.
What is this "express an architecture", again? Oh yeah, a bunch of classes where you never know who does what, and 10 lines where 1 would suffice.
Sorry, I still prefer the assembler - especially after having disassembled what this produces.


Quote:
Originally Posted by Thomas Richter View Post
"Programming" is a job. Designing a well-working, sustainable architecture is an art.
No, programming is an art too. Your mistake is precisely to see it as a job and to rely on your tools to do at least part of that job in your place.


Quote:
Originally Posted by Thomas Richter View Post
Which again shows that you have never thought about the problem. Of course inhabitants will agree. To build a house, you need a plan. You need to discuss with the inhabitants how large the living room is, which side of the house the garden is on, and how to reach the dining room from the kitchen. If you just start building, you get exactly what you get if you just start coding: a pile of crap. The inhabitants will not be happy at all with the result.
I haven't said design wasn't important, just that it isn't more important than the actual implementation. Don't make me write things I didn't.


Quote:
Originally Posted by Thomas Richter View Post
Surely, somebody finally has to make the plan reality, but that's a craft, not an art.
Does not mean it is not important. If that craft is lacking, users won't be happy regardless of the quality of the plan.
In short: both are important.


Quote:
Originally Posted by Thomas Richter View Post
To explain the difference a little in terms of software:

graphics.library: "a coder's job". A pile of crap code, hard to extend, hard to maintain, not scalable, not extensible. Bad abstractions, too tightly tied to the hardware. That's asm and C mixed. People just threw function after function into graphics: "oh, here's another piece we can make use of".
The net result is structures like "GfxAssociate", because its internal workings depend too much on all the documented structures. A very bad interface.
At least graphics.library provides efficient routines.
With an "architect's job" it would only have made the system too slow to be usable.


Quote:
Originally Posted by Thomas Richter View Post
intuition.library: "an architect's job". Good structures, decent isolation of code and interface, extensible. That's C only.
And so unwieldy to use for GUIs that it led to the creation of GadTools and numerous third-party toolkits. With some parts looking totally useless (who uses BOOPSI for their GUI, really?).

Ah, and you forgot exec.library. Nice, easy to use, lean and mean. And 100% asm. Another "coder's job".
meynaf is offline  
Old 28 June 2023, 12:59   #65
ross
Defendit numerus
 
 
Join Date: Mar 2017
Location: Crossing the Rubicon
Age: 54
Posts: 4,483
Quote:
Originally Posted by alexh View Post
It's practically impossible to recruit good software engineers to work in embedded programming..
I totally agree with alexh here.

My working career started as a programmer of embedded systems and I understand him perfectly when he says that it is now basically impossible to find programmers with the right mindset.
That's one of the mistakes Thomas makes in his reasoning: it lacks contextualization.
There's no point in making plans for a skyscraper if you're building a sturdy cabin in the desert.
You always have to look at the cost/time/benefit ratio.
This doesn't mean making a crap project, but using the right tools in the right place.

However, we are always talking about the Amiga and a hobby; it's okay to be a 'bad' software engineer / coder, the important thing is to have fun (no, there are NEVER too many 'coders').

And back to the original question: "Any reason to use assembly on Amiga rather than C?"
Yes: if you think it could be useful, if it optimizes and speeds up your code on legacy machines, if you like 68k assembler and it makes you have fun.
ross is offline  
Old 28 June 2023, 13:09   #66
Karlos
Alien Bleed
 
 
Join Date: Aug 2022
Location: UK
Posts: 4,347
Quote:
Originally Posted by ross View Post
And back to the original question: "Any reason to use assembly on Amiga rather than C?"
Yes: if you think it could be useful, if it optimizes and speeds up your code on legacy machines, if you like 68k assembler and it makes you have fun.
Amen.
Karlos is offline  
Old 28 June 2023, 13:18   #67
TCD
HOL/FTP busy bee
 
 
Join Date: Sep 2006
Location: Germany
Age: 46
Posts: 31,855
Quote:
Originally Posted by ross View Post
This doesn't mean making a crap project, but using the right tools in the right place.
How dare you mention 'it depends'! People are so comfortably dug into their own holes here that this cold wind of reason will upset them.
TCD is online now  
Old 28 June 2023, 15:32   #68
arcanist
Registered User
 
Join Date: Dec 2017
Location: Austin, TX
Age: 41
Posts: 410
Quote:
Originally Posted by alexh View Post
It's practically impossible to recruit good software engineers to work in embedded programming who actually understand the ramifications of the C code they are writing. Most have been poisoned by decades of writing without caring at that level of optimisation.
I'm not sure if my employer got lucky but we've had a few great embedded hires over the last couple of years.

Part of my job is maintaining the microcontrollers in GPUs. Until last year these were all programmed in assembly with a custom ISA. This year we replaced several of them with a new design targeting a standard ISA, so we could replace it all with C.

The new hires' job was to handle this migration. Despite the initial performance being quite dire, they managed to turn it around with GCC-specific performance hints, by restructuring the code into something more directly in line with the desired assembly, and by replacing some parts with inline assembly.

While I did enjoy being the assembly wizard I don't miss that codebase in the slightest.
arcanist is offline  
Old 28 June 2023, 16:07   #69
jotd
This cat is no more
 
 
Join Date: Dec 2004
Location: FRANCE
Age: 52
Posts: 8,336
In a company project I had to look into, they had 99% Ada code and 1% asm, probably because Ada didn't handle bit shifts too well.

So the time-critical code was asm, and the rest was Ada. You can't do otherwise in industrial projects. Not everyone can handle asm 8 hours a day.
jotd is offline  
Old 28 June 2023, 16:33   #70
Debvgger
Registered User
 
Join Date: Jun 2020
Location: Another World
Posts: 28
Quote:
Originally Posted by Steam Ranger View Post
It seems that with modern optimising compilers and cross-compilation tools no one could write code for the Amiga better than GCC can. Is there any reason to use assembly over C in 2023 for the Amiga (beyond novelty)?
The real question is whether there's any reason to trust these promises. We have been told the same story, that compilers do a better job than any asm programmer, since the 90s, but in practice they always fail in amazingly crappy ways. These compilers only "optimize" bad C code.

I teach code optimization at a university. I make my students disassemble a lot of GCC output. GCC 12 does a really crappy job on x64. Now just imagine what these compilers know about 68k.

68k asm is super easy and nearly feels high-level. So the question really is the opposite: why would you want to use C on such an amazing architecture that was meant to be coded in asm :-) A possible answer is to be able to port code from other architectures, but that's going to end up being slow.
Debvgger is offline  
Old 28 June 2023, 16:38   #71
jotd
This cat is no more
 
 
Join Date: Dec 2004
Location: FRANCE
Age: 52
Posts: 8,336
I 100% agree with that last post.

But take for instance PowerPC: the asm is so crappy you'd think they've vowed not to use vowels for mnemonics.

That's when the bullshit about compilers doing a better job started, with the pipelined RISC shit.

Someone once told me that some compiler (Intel's icc) gave a 1000% speedup compared to gcc. It was indeed on shitty code. On my code, I only had marginal gains, because I wasn't spending my time copy/pasting the same shit over and over. It had SSE support, so in the end I used it.

Some architectures are meant to be used with a compiler (PowerPC for instance, and compilers do a very good job with x86 and ARM), but old architectures and high-level languages don't mix well in terms of performance: 8-bit C compilers generate a lot of code, and C is just not designed to run on 8-bit CPUs; the 68000 is really easy to code in asm, and C slightly confuses the code, especially the low-level parts.

C was meant for portability, because they didn't want to port Unix by hand to each target. But it's only high-level assembly.
jotd is offline  
Old 28 June 2023, 16:41   #72
Thomas Richter
Registered User
 
Join Date: Jan 2019
Location: Germany
Posts: 3,288
Quote:
Originally Posted by meynaf View Post
Code is that "contact with reality". So you can't say it is unimportant.
It is the less important part of the job. It is of course not "unimportant", but if you start writing code without having an idea of what you are actually doing, you create nonsense.


Quote:
Originally Posted by meynaf View Post
What is this "express an architecture", again? Oh yeah, a bunch of classes where you never know who does what, and 10 lines where 1 would suffice.
That really does not matter at all. "How many lines" is utterly irrelevant. Think about algorithms, not lines of code. If you can express the architecture in a good class hierarchy, that's the first step.



First make it work, then make it great, then make it fast.


In that order. The key to fast code is not assembly. The key to fast code is the right algorithm for the job. Once you have that, isolate the bottlenecks, and *then* you can start thinking about rewriting those. Actually, 20 years ago we had our assembly expert in house who optimized the heavy-duty parts of the algorithm; nowadays, that's another expert who uses compiler intrinsics to optimize the heck out of it.


Of course, I know how to prepare layout and the classes such that SIMD can be applied, but that's part of the architecture.


It's not so different for the Amiga. Prepare structures such that you can isolate the heavy-duty code, and then rewrite that part in assembly, but only that. P96's BlitBitMap() is a nice example: finding bitmap types, pixel types, locking hardware, identifying what the hardware can do - "high level code", and as such C. Blitter primitives for shuffling data around if the on-board blitter cannot do it: assembler.


Writing everything in assembler would create almost zero benefit, except that the code would not be maintainable.


Quote:
Originally Posted by meynaf View Post
No, programming is an art too. Your mistake is precisely to see it as a job and to rely on your tools to do at least part of that job in your place.
That is not a "mistake", that is how every job works: knowing your tools. The compiler is a tool, and it can be incredibly helpful for the job. It relieves you of the burden of thinking about all the unnecessary data shuffling the compiler can do better anyhow.




Quote:
Originally Posted by meynaf View Post

I haven't said design wasn't important, just that it wasn't more important than actual implementation. Don't make me write things i didn't.
It is more important because the design makes the speed, not the code. OK, there are trivial algorithms, but most of the stuff is not that trivial. Writing an array-bounds checker is trivial stuff; I don't need much of a design for it - actually, I would probably use what the STL has to offer, because somebody else already did it.




Quote:
Originally Posted by meynaf View Post


At least graphics.library provides efficient routines.
No, not for what it *should* actually do. They are only efficient for the blitter hardware and planar bitplanes. But that's not what is actually needed. We have chunky, true-color, various F-Modes in the chipset... and for those, it does a pretty lousy job, and it is pretty hard to extend to make it do what it actually should do.


If graphics had been designed (and not just coded), it would have been more efficient at today's problems. The way it is, it is actually less efficient than it could be. Thus, for example, graphics has to jump through hoops with its "GfxAssociate" mechanism to find side-information its originally "designed" structures cannot hold.




Quote:
Originally Posted by meynaf View Post



With "architect's job" it would only have made the system too slow to be usable.
Quite the reverse. It would be more efficient. Bitplanes would not be just "plain old data", but would be allocated and destroyed as objects. They would hold whatever data they need to hold, without requiring you to "associate" them with anything, and thus access to their data from the implementing firmware would be more immediate than it is right now. Rastports could hold vector fonts that could be rendered more efficiently than going through a stupid pre-rendering step as now. Datatypes could render true-color graphics without going through multiple conversion steps, as graphics would accept the incoming data natively....


Graphics *could* be a lot faster if it had had a better architecture. The entire "bitplane" system it is based upon is pretty much a speed brake.



Quote:
Originally Posted by meynaf View Post



And so unwieldy to use for GUIs that it led to the creation of GadTools and numerous third-party toolkits. With some parts looking totally useless (who uses BOOPSI for their GUI, really?).
BOOPSIs are an intuition abstraction that is, as such, quite OK conceptually. There is one problem, and that is "input.device side rendering", which blocks the mouse pointer and defeats a smooth user experience. The design is OK, and that (some of them) are slow is just a matter of the complexity of the task. If you need to rescale all the GUI elements of a window, that is going to take a while.



GadTools is just (as the name suggests) a toolbox for intuition primitives, based upon what intuition has in its core system.
Thomas Richter is offline  
Old 28 June 2023, 16:50   #73
malko
Ex nihilo nihil
 
 
Join Date: Oct 2017
Location: CH
Posts: 4,972
Quote:
Originally Posted by grond View Post
I wonder whether they have that amount of commenting per line of code they write.
!
The answer is in this very thread.
Thomas does not comment much because C does not require many comments.
Meynaf does comment because he is used to this good practice.


Quote:
Originally Posted by ross View Post
"Code is like humor.
When you have to explain it, it's bad."
But what makes you laugh today may not in three years. Thus explanations may help, especially if the humour has, or may have, different audiences.

Quote:
Originally Posted by Thomas Richter View Post
[...] That's just a minor part of the deal. [...]
And with such thinking we end up with bullshit OSes requiring a minimum of 8 GB of memory and a multi-core CPU to do what an OS of the '80s could do with 512 KB of memory and a 7 MHz CPU (yes, I intentionally mark the line red).
malko is offline  
Old 28 June 2023, 17:04   #74
Thomas Richter
Registered User
 
Join Date: Jan 2019
Location: Germany
Posts: 3,288
Quote:
Originally Posted by malko View Post
!
The answer is in this very thread.
Thomas does not comment much because C does not require many comments.
Meynaf does comment because he is used to this good practice.
Actually, no. I tend to overcomment (ask my coworkers). However, "writing comments" is good; code "requiring comments" is bad. The trouble with comments is that sooner or later your code and your comments get out of sync.


Quote:
Originally Posted by malko View Post
And with such thinking we end up with bullshit OSes requiring a minimum of 8 GB of memory and a multi-core CPU to do what an OS of the '80s could do with 512 KB of memory and a 7 MHz CPU (yes, I intentionally mark the line red).
That's not even nearly right. Today's operating systems have internet, true-color screens, 3D graphics, 3D multi-channel audio, and are portable to different architectures. Your AmigaOS is frozen in time with planar graphics, 8 bitplanes max, stereo sound, no security whatsoever, and is bound to one particular hardware platform.


There are reasons for the complexity, and there is a price to pay for the flexibility.
Thomas Richter is offline  
Old 28 June 2023, 17:47   #75
malko
Ex nihilo nihil
 
 
Join Date: Oct 2017
Location: CH
Posts: 4,972
Quote:
Originally Posted by Thomas Richter View Post
[...] There are reasons for the complexity, [...]
The reason is called: a business model.
Look how complex W11 has become. An option that was easily accessible in W7 is now hidden under the submenu of the menu that is inside the submenu of the other menu.
Yeah, really productive. Same for the new right-click to access the properties. Not to mention Outlook (and Teams) opening all links by default in Edge instead of the default browser.
Complexity, most of the time, means: pissing off users.

Quote:
Originally Posted by Thomas Richter View Post
[...] and there is a price to pay for the flexibility.
There is no price to pay for flexibility. Flexibility is the default rule to apply, not something you have to pay for because you are pissed off by the complexity that has been built to make you open your wallet.
malko is offline  
Old 28 June 2023, 17:48   #76
meynaf
son of 68k
 
 
Join Date: Nov 2007
Location: Lyon / France
Age: 51
Posts: 5,350
Quote:
Originally Posted by Thomas Richter View Post
It is the less important part of the job. It is of course not "unimportant", but if you start writing code without having an idea of what you are actually doing, you create nonsense.
Again, both are of equal importance. Indeed you shouldn't write code without knowing what you are doing. This goes both ways.

Your end users don't care how your program is written or how "maintainable" it is. But if it is crashy, is a resource hog and runs like a snail, they will notice - no matter how beautiful its internal architecture is.


Quote:
Originally Posted by Thomas Richter View Post
That does really not matter at all. "How many lines" is utterly irrelevant. Think about algorithms, not lines of code. If you can express the architecture in a good class hierarchy, that's the first step.
Yep, the first step to bloatware.
How many lines is relevant, simply because the longer a program is, the more difficult it is to handle. Sorry, but I prefer deciphering 10 lines of asm to 100 lines of C/C++.


Quote:
Originally Posted by Thomas Richter View Post
First make it work, then make it great, then make it fast.
Which usually becomes: make it work, then you have the customer on the phone asking for new features, and you have no time to make it great/fast, ending up never doing things right.


Quote:
Originally Posted by Thomas Richter View Post
In that order. The key to fast code is not assembly. The key to fast code is the right algorithm for the job. Once you have that, isolate bottlenecks, and *then* you can start thinking about rewriting those. Actually, 20 years ago we had our assembly expert in house who optimized the heavy-duty parts of the algorithm, but nowadays, that's another expert that uses compiler intrinsics to optimize the heck out of it.
Your "the" key implies there is a single one. But nope, there are two.
Yes, the right algorithm is important, but if you use a slow language your program will be slow.
Real-life example: my JPEG decoder. My "slow int" implementation is faster than the compiled "fast int" method!
Besides, choosing the right algorithm isn't easy if you don't know what the underlying code does. The fastest one isn't necessarily obvious to pick.
Good, compact data layout also helps a lot with speed, but with C you're just clueless.


Quote:
Originally Posted by Thomas Richter View Post
Of course, I know how to prepare layout and the classes such that SIMD can be applied, but that's part of the architecture.

It's not so different for Amiga. Prepare structures such that you can isolate heavy-duty code, and then rewrite that part in assembly, but only that. P96 BlitBitMap() is a nice example of that - finding bitmap types, pixel types, locking hardware, identifying what the hardware can do, "high level code" and as such C. Blitter primitives for shuffling data around if the on-board blitter cannot do it: Assembler.
Nope. Writing everything in asm means the overall architecture of your program is more efficient.
Sorry, but you will NEVER reach the speed of my image viewer (which is faster than everything else) by just optimising parts of the code. But maybe you fancy a code contest?


Quote:
Originally Posted by Thomas Richter View Post
Writing everything in assembler would create almost zero benefit, except that the code would not be maintainable.
Truly maintainable code does not exist. Go to GitHub, pick a random project, and tell me *that thing* is maintainable: just BS.


Quote:
Originally Posted by Thomas Richter View Post
That is not a "mistake", that is how every job works: knowing your tools. The compiler is a tool, and it can be incredibly helpful for the job. It relieves you of the burden of thinking about all the unnecessary data shuffling the compiler can do better anyhow.
It is a mistake. Yes, knowing your tools is important, but the compiler is too complex for you to really know it!
And no, the compiler is very poor at data shuffling. They constantly move things around for parameter passing instead of simply keeping the same thing in the same register.


Quote:
Originally Posted by Thomas Richter View Post
It is more important because the design makes the speed, not the code. Ok, there are trivial algorithms, but most of the stuff is not that trivial. Writing an array-bounds checker is trivial stuff, I don't need much of a design for it - actually, I would probably use what the STL has to offer, because somebody else already did it.
The design is not enough to reach maximum speed. With a compiler you start with a big handicap that you can never overcome.
Most of the stuff in a programmer's life actually is trivial - if it is not, either you're using the wrong approach, or you ought to split the task into smaller parts.


Quote:
Originally Posted by Thomas Richter View Post
No, not for what it *should* actually do. They are only efficient for the blitter hardware and planar bitplanes. But that's not what is actually needed. We have chunky, true-color, various F-Modes in the chipset... and for those, it does a pretty lousy job, and it is pretty hard to extend to make it do what it actually should do.

If graphics had been designed (and not just coded), it would have been more efficient at today's problems. The way it is, it is actually less efficient than it could be. Thus, for example, graphics has to jump through hoops with its "GfxAssociate" mechanism to find side-information its originally "designed" structures cannot hold.
Graphics wasn't targeted at today's problems, obviously, because by definition they did not exist at the time.
If graphics.library can't be enhanced easily, why try? Keep it "as is" for compatibility and do something else.


Quote:
Originally Posted by Thomas Richter View Post
Quite the reverse. It would be more efficient. Bitplanes would not be just "plain old data", but would be allocated and destructed as objects. They would hold whatever data they need to hold, without requiring to "associate" them with anything, and thus access to its data to the implementing firmware would be more immediate than right now. "Rastports" could hold vector fonts that could be more efficiently rendered than going through a stupid pre-rendering step as now. Datatypes could render true-color graphics without going through multiple conversion steps as graphics would accept the incoming data natively....
Definitely not true for the original tasks it had to do.
Remember it was designed to run on a 7 MHz machine, not to do whatever you may call "modern".


Quote:
Originally Posted by Thomas Richter View Post
Graphics *could* be a lot faster if it had had a better architecture. The entire "bitplane" system it is based upon is pretty much a speed brake.
Now we are speaking about something completely different. There were hardware reasons why bitplanes got used. And good reasons - perhaps not now, but certainly at the time graphics was written.


Quote:
Originally Posted by Thomas Richter View Post
BOOPSIs are an intuition abstraction that is, as such, quite OK conceptually.
But it is, for all practical purposes, utterly useless.


Quote:
Originally Posted by Thomas Richter View Post
There is one problem and that is "input.device side rendering" which blocks the mouse pointer and defeats smooth user experience. The design is ok, and that (some of them) are slow is just the matter of the complexity of the task. If you need to rescale all GUI elements of a window then that is going to take a while.
I've never had any problem with input.device doing it. Actually, it prevents the whole GUI from freezing if the caller task is busy - which is an order of magnitude better than what we get in Windows!
But that wasn't the point anyway. While some design decisions were good, others were not.


Quote:
Originally Posted by Thomas Richter View Post
GadTools is just (as the name suggests) a toolbox for intuition primitives, based upon what intuition has in its core system.
But the point is that Intuition was not giving that feature level to start with, whereas it should have.


Quote:
Originally Posted by Thomas Richter View Post
There are reasons for the complexity, and there is a price to pay for the flexibility.
These reasons are not what you think they are, and the price is clearly way too high for the result we get.
meynaf is offline  
Old 28 June 2023, 18:02   #77
ross
Defendit numerus
 
ross's Avatar
 
Join Date: Mar 2017
Location: Crossing the Rubicon
Age: 54
Posts: 4,483
Quote:
Originally Posted by malko View Post
But what makes you laugh today may not in 3 years. Thus explanations may help, especially if the humour has (or may have) different audiences
Surely you are right.

It remains to be seen whether the quoted message was humorous or not

Last edited by ross; 28 June 2023 at 20:32. Reason: The original message was timed, self-destruct has been activated ;)
ross is offline  
Old 28 June 2023, 18:34   #78
Thomas Richter
Registered User
 
Join Date: Jan 2019
Location: Germany
Posts: 3,288
Quote:
Originally Posted by meynaf View Post
Yep, the first step to bloatware.
The number of lines is relevant, simply because the longer a program is, the more difficult it is to handle. Sorry, but I prefer deciphering 10 lines of asm to 100 lines of C/C++.
Except that the ratio is typically the other way around.


Quote:
Originally Posted by meynaf View Post


Your "the" implies there is a single one. But nope. There are two.
Yes, the right algorithm is important, but if you use a slow language your program will be slow.
Real-life example: my JPEG decoder. My "slow int" implementation is faster than the compiled "fast int" method !
And probably it does not meet its demands or the required error bounds. Did you do testing with the error bounds required for JPEG? Hint: the "-fast" algorithm of the IJG code does not. Does it actually implement JPEG? I guess not, you surely have forgotten a lot of what JPEG can do. What you have probably implemented is the 8-bit Huffman process. Probably limited to MCU sizes of 1 and 2.



Now, as an exercise, take your existing code, and consider how much work would be needed to implement everything else in JPEG, and how much code you would need to throw away. Consider how much simpler the task would have been if you had started from a code base that actually was "architected" instead of "coded".


So maybe take this as a take-home exercise: with your JPEG code, go ahead and just assume for the time being it was professional code that would be sold and paid for, for your daily income. Consider a customer demanding "I want lossless JPEG and 12-bit QM-coded JPEG processes". How hard would it be to support that from your code base, how much would you need to throw away, and how much could you reuse?


That is *exactly* what software architecture is about. (I assume you probably just took Tom Lane's original IJG code and patched it up a little bit instead of implementing all of it from scratch, right? That's again the lazy way of doing it. Tom's code does not have a particularly fancy architecture.)



Quote:
Originally Posted by meynaf View Post

Sorry, but you will NEVER reach the speed of my image viewer (which is faster than everything else) by just optimising parts of the code. But maybe you fancy a code contest ?
The speed of the viewer does not matter if it does not do what I need. I do not care too much about the speed of code. I care about correctness and robustness.


Quote:
Originally Posted by meynaf View Post

It is a mistake. Yes knowing your tools is important but the compiler is too complex for you to really know it !
For you, maybe. (-;


Quote:
Originally Posted by meynaf View Post
And no, the compiler is very poor at data shuffling. It constantly moves things around for parameter passing instead of simply keeping the same thing in the same register.
Why does that matter? It is quite irrelevant - real-life code has bottlenecks. There, I do care. Most of the code is rarely run, so I care about maintenance and not about how registers are moved.


Quote:
Originally Posted by meynaf View Post

The design is not enough to reach maximum speed. With a compiler you start with a big handicap that you can never overcome.
Quite the reverse: with assembler you don't see the wood, you see the trees. It is easy to care about trees in a low-level language, but it's hard to care about the wood. For that reason: profile the code, identify the bottlenecks, then care. Care about the few trees where it matters, but use proper tools for the wood.



Quote:
Originally Posted by meynaf View Post


Most of the stuff in a programmer's life actually is trivial - if it is not, either you're using the wrong approach, or you ought to split the task into smaller parts.
Most of the *programmer's life* maybe, but I'm not talking about programming.


Quote:
Originally Posted by meynaf View Post



Graphics wasn't targeted at today's problems, obviously, because by definition they did not exist at the time.
If graphics.library can't be enhanced easily, why want to do this at all? Keep it "as is" for compatibility and do something else.
And here we are at the very central problem you do not understand. Software is a living object, and so is graphics. It is not suitable for the demands of today. However, if it had been designed (instead of just "coded"), it would have been relatively straightforward to bring it up to the demands of today, actually without impacting the software depending on graphics (which is a lot).


Why do I want to do this? Because of customer demand, that's why. People wanted true color displays, people wanted more than 8-bitplane AGA, people wanted high-resolution displays. Graphics was and still is *in the way* to such demands, and it requires a lot more code to work around these problems than it would have needed to actually architect graphics correctly to begin with.



It is the perfect example that "software is a moving target", and you had better have an idea about that when you start, so that you can adapt it to this target. Graphics wasn't; it is a pile of code, nothing more.


Quote:
Originally Posted by meynaf View Post



Definitely not true for the original tasks it had to do.
But look, this is the *central thing*. "Original tasks" don't stay, they never stay. That is the entire problem we are talking about.


Quote:
Originally Posted by meynaf View Post




Remember it was designed to run on a 7 MHz machine, not to do whatever you may call "modern".
But, you see, that is *exactly* the problem. Intuition could be extended relatively easily to more complex GUI elements with its boopsis; it has been scaled up, even beyond the possibilities of the original system. But graphics is *stuck*, exactly *because* of the problem we have been talking about the entire time.


Quote:
Originally Posted by meynaf View Post





Now we are speaking about something completely different. There were hardware reasons why bitplanes got used. And good reasons, perhaps not now, but certainly at the time graphics was written.
Precisely: *at the time it was written*. However, it was written in such a way that it could not handle anything else, and that is exactly the mistake. The mistake was that it used public structures bound to the hardware graphics organization as its interfaces, instead of opaque objects and functions ("methods") operating on them, or instead of thinking in terms of objects on a screen (which is the *real* problem graphics needs to solve, not whether there are bitplanes or not).



Quote:
Originally Posted by meynaf View Post







I've never had any problem with input.device doing it. Actually, it prevents the whole GUI from being frozen if the caller task is busy - which is a magnitude better than what we get in Windows !
The problem is that the mouse pointer is stuck while boopsis update. That's a bad division of work. The system becomes unresponsive while updating, and that is not how the rest of intuition operates, namely asynchronously. Forward the job to the program that maintains the GUI, but keep the system responsive for the user. It's a bad design choice. It's a bit too late to fix that, unfortunately.



Quote:
Originally Posted by meynaf View Post
But the point is that Intuition was not giving that feature level to start with, whereas it should have.
That is *not* the point. You cannot foresee what features you may or may not need in the future. Nobody can. But you can keep your architecture flexible enough to allow easy adjusting without breaking interfaces.



Intuition could be. Whether a gadget is a struct Gadget or a boopsi matters neither to intuition nor to the program. Both work alike.


To graphics, a "struct BitMap" is always planar, and a "struct ViewPort" is always a viewport. You probably have no idea what hoops graphics and P96 have to jump through to retarget the graphics to non-native bitplanes - or even to AGA with multiple "monitor" definitions.


To make this more concrete: if graphics had had an "AllocBitMap()" function to begin with, an "AllocViewPort()" and an "AllocView()", and if its makers had never documented what these structures actually look like, and had instead written manipulator functions for them (aka "methods"), graphics would be in much better shape and, as of today, actually faster than it is (yes, really, including on native planar hardware).


I *do not need* C++ to write such code, but yes, working in its mindset helps, and having a compiler to support me in that also helps. If that is too slow, I can still care about that later, this is really a minor issue. You do not learn why such abstractions are helpful by "coding asm code".



Quote:
Originally Posted by meynaf View Post
These reasons are not what you think they are, and the price is clearly way too high for the result we get.
Too high for whom? Actually, the market success of Windows or the Mac says the opposite.
Thomas Richter is offline  
Old 28 June 2023, 19:24   #79
meynaf
son of 68k
 
meynaf's Avatar
 
Join Date: Nov 2007
Location: Lyon / France
Age: 51
Posts: 5,350
Quote:
Originally Posted by Thomas Richter View Post
Except that the ratio is typically the other way around.
When you compile code, yes. When you write it in asm, not always. It depends how good the C source was (it is usually quite bad). With C++ and classes, it is almost guaranteed to be smaller in asm.


Quote:
Originally Posted by Thomas Richter View Post
And probably it does not meet its demands or the required error bounds.
Of course it does. Why wouldn't it ?


Quote:
Originally Posted by Thomas Richter View Post
Did you do testing with the error bounds required for JPEG?
It is rock solid. I probably tested even slightly more cases of broken data than required to.


Quote:
Originally Posted by Thomas Richter View Post
Hint: the "-fast" algorithm of the IJG code does not.
Not a problem, I'm not using it.


Quote:
Originally Posted by Thomas Richter View Post
Does it actually implement JPEG? I guess not, you surely have forgotten a lot of what JPEG can do.
A lot of what JPEG can do on paper was never actually used.


Quote:
Originally Posted by Thomas Richter View Post
What you have probably implemented is the 8-bit Huffman process. Probably limited to MCU sizes of 1 and 2.
It does what the IJG code did, no more, no less. This is sufficient for everyday use with more than 99.999% of existing images.


Quote:
Originally Posted by Thomas Richter View Post
Now, as an exercise, take your existing code, and consider how much work would be needed to implement everything else in JPEG, and how much code you would need to throw away. Consider how much simpler the task would have been if you had started from a code base that actually was "architected" instead of "coded".
I wouldn't need to throw anything away, just replace error cases by code doing the work.


Quote:
Originally Posted by Thomas Richter View Post
So maybe take this as a take-home exercise: with your JPEG code, go ahead and just assume for the time being it was professional code that would be sold and paid for, for your daily income. Consider a customer demanding "I want lossless JPEG and 12-bit QM-coded JPEG processes". How hard would it be to support that from your code base, how much would you need to throw away, and how much could you reuse?
Same answer as above.


Quote:
Originally Posted by Thomas Richter View Post
That is *exactly* what software architecture is about. (I assume you probably just took Tom Lane's original IJG code and patched it up a little bit instead of implementing all of it from scratch, right? That's again the lazy way of doing it. Tom's code does not have a particularly fancy architecture.)
There is nothing left of the original code, except that I may be using the same algorithms.


Quote:
Originally Posted by Thomas Richter View Post
The speed of the viewer does not matter if it does not do what I need. I do not care too much about the speed of code. I care about correctness and robustness.
It does what other image viewers do, just faster. It is also very secure and does not crash even with damaged files.
If it does not do what you need, no other program will.


Quote:
Originally Posted by Thomas Richter View Post
For you, maybe. (-;
So you've checked your compiler's source code and algorithms to see how it does things ?
You're becoming worse every time.


Quote:
Originally Posted by Thomas Richter View Post
Why does that matter? It is quite irrelevant - real-life code has bottlenecks. There, I do care. Most of the code is rarely run, so I care about maintenance and not about how registers are moved.
It does matter, simply because it's one of the numerous inefficiencies of compiled code.


Quote:
Originally Posted by Thomas Richter View Post
Quite the reverse: with assembler you don't see the wood, you see the trees. It is easy to care about trees in a low-level language, but it's hard to care about the wood. For that reason: profile the code, identify the bottlenecks, then care. Care about the few trees where it matters, but use proper tools for the wood.
Nonsense. You don't need to profile asm code, it's fast to start with.


Quote:
Originally Posted by Thomas Richter View Post
Most of the *programmer's life* maybe, but I'm not talking about programming.
If you're not talking about programming here on this thread, you're OT.


Quote:
Originally Posted by Thomas Richter View Post
And here we are at the very central problem you do not understand. Software is a living object, and so is graphics. It is not suitable for the demands of today. However, if it had been designed (instead of just "coded"), it would have been relatively straightforward to bring it up to the demands of today, actually without impacting the software depending on graphics (which is a lot).

Why do I want to do this? Because of customer demand, that's why. People wanted true color displays, people wanted more than 8-bitplane AGA, people wanted high-resolution displays. Graphics was and still is *in the way* to such demands, and it requires a lot more code to work around these problems than it would have needed to actually architect graphics correctly to begin with.

It is the perfect example that "software is a moving target", and you had better have an idea about that when you start, so that you can adapt it to this target. Graphics wasn't; it is a pile of code, nothing more.
Perhaps you just didn't know/understand how to expand graphics library correctly.


Quote:
Originally Posted by Thomas Richter View Post
But look, this is the *central thing*. "Original tasks" don't stay, they never stay. That is the entire problem we are talking about.
But you have zero way to ensure your design will be adequate for new needs.
As you said yourself above, architectures don't survive contact with reality.


Quote:
Originally Posted by Thomas Richter View Post
But, you see, that is *exactly* the problem. Intuition could be extended relatively easily to more complex GUI elements with its boopsis; it has been scaled up, even beyond the possibilities of the original system. But graphics is *stuck*, exactly *because* of the problem we have been talking about the entire time.
Perhaps graphics isn't stuck and you just couldn't think of the right way to handle it.


Quote:
Originally Posted by Thomas Richter View Post
Precisely: *at the time it was written*. However, it was written in such a way that it could not handle anything else, and that is exactly the mistake. The mistake was that it used public structures bound to the hardware graphics organization as its interfaces, instead of opaque objects and functions ("methods") operating on them, or instead of thinking in terms of objects on a screen (which is the *real* problem graphics needs to solve, not whether there are bitplanes or not).
It handles what it was designed to handle. That's already good.


Quote:
Originally Posted by Thomas Richter View Post
The problem is that the mouse pointer is stuck while boopsis update. That's a bad division of work. The system becomes unresponsive while updating, and that is not how the rest of intuition operates, namely asynchronously. Forward the job to the program that maintains the GUI, but keep the system responsive for the user. It's a bad design choice. It's a bit too late to fix that, unfortunately.
I've never seen a stuck mouse pointer while a window updates...


Quote:
Originally Posted by Thomas Richter View Post
That is *not* the point. You cannot foresee what features you may or may not need in the future. Nobody can. But you can keep your architecture flexible enough to allow easy adjusting without breaking interfaces.
So what, you'll add an extra void* parameter to all your functions just in case ?


Quote:
Originally Posted by Thomas Richter View Post
Intuition could be. Whether a gadget is a struct Gadget or a boopsi matters neither to intuition nor to the program. Both work alike.
Does not change the fact Intuition only provides a small, insufficient set of widgets.


Quote:
Originally Posted by Thomas Richter View Post
To graphics, a "struct BitMap" is always planar, and a "struct ViewPort" is always a viewport. You probably have no idea what hoops graphics and P96 have to jump through to retarget the graphics to non-native bitplanes - or even to AGA with multiple "monitor" definitions.

To make this more concrete: if graphics had had an "AllocBitMap()" function to begin with, an "AllocViewPort()" and an "AllocView()", and if its makers had never documented what these structures actually look like, and had instead written manipulator functions for them (aka "methods"), graphics would be in much better shape and, as of today, actually faster than it is (yes, really, including on native planar hardware).
If something's poorly designed, it does not matter in which language it has been written.


Quote:
Originally Posted by Thomas Richter View Post
I *do not need* C++ to write such code, but yes, working in its mindset helps, and having a compiler to support me in that also helps. If that is too slow, I can still care about that later, this is really a minor issue. You do not learn why such abstractions are helpful by "coding asm code".
You can abstract things in asm, too.


Quote:
Originally Posted by Thomas Richter View Post
Too high for whom? Actually, the market success of Windows or the Mac says the opposite.
Are you aware that their success comes from marketing rather than anything else ?
meynaf is offline  
Old 28 June 2023, 20:18   #80
Ernst Blofeld
<optimized out>
 
Ernst Blofeld's Avatar
 
Join Date: Sep 2020
Location: <optimized out>
Posts: 321
This is pathetic. Why are the EAB moderators allowing this to go on?
Ernst Blofeld is offline  
 

