Old 17 February 2017, 16:12   #121
buggs
Registered User

 
Join Date: May 2016
Location: Rostock/Germany
Posts: 47
Quote:
Originally Posted by meynaf View Post
Ok, as i wish to recruit, i must at least give something to do.
Code:
; a0=source, a1-a4=dest
 move.w #1999,d0
.loop
 movem.l (a0)+,d1-d4
 move.l d1,d5
 swap d5
 move.w d3,d5
 move.l d5,(a2)+
 move.l d1,d5
 swap d3
 move.w d3,d5
 move.l d5,(a1)+
 move.l d2,d5
 swap d5
 move.w d4,d5
 move.l d5,(a4)+
 move.l d2,d5
 swap d4
 move.w d4,d5
 move.l d5,(a3)+
 dbf d0,.loop
 rts
<snip> Or do it for ARM or whatever cpu. <snip>
With VASM out, I figured it might be time to post something for "whatever CPU".
Code:
 move #999,d0
.loop
 load (a0)+,E0          ;a0 b0 c0 d0 (.w)
 load (a0)+,E1          ;a1 b1 c1 d1
 load (a0)+,E2          ;a2 b2 c2 d2
 load (a0)+,E3          ;a3 b3 c3 d3
 transhi E0-E3,E4:E5    ;E4: a0 a1 a2 a3 E5: b0 b1 b2 b3
 translo E0-E3,E6:E7    ;E6: c0 c1 c2 c3 E7: d0 d1 d2 d3
                     ;TRANS has latency, 1 cyc lost in this example
 store E4,(a1)+         ;
 store E5,(a2)+         ;
 store E6,(a3)+         ;
 store E7,(a4)+         ;inner loop assembles to 10 * 32 Bit
 dbf   d0,.loop          ;plus move, dbf = 12 * 32 Bit
Code as shown will process 32 bytes per run in 11 cycles. Obviously, it won't be of much use in the original scenario, as data keeps piling up in the write buffers (as long as A1-A4 point to Chip RAM). But it'll perform quite nicely when A1-A4 point to a fast memory location.
buggs is offline  
Old 17 February 2017, 16:29   #122
meynaf
68k wisdom
meynaf's Avatar
 
Join Date: Nov 2007
Location: Lyon (France)
Age: 44
Posts: 2,134
Unfortunately for you, this thread is about code density, and your example shows no benefit at all in this area.
Thanks anyway for giving me a nice proof that ammx is useless.
meynaf is offline  
Old 17 February 2017, 17:55   #123
matthey
Registered User
 
Join Date: Jan 2010
Location: Kansas
Posts: 951
Quote:
Originally Posted by meynaf View Post
Unfortunately for you, this thread is about code density, and your example shows no benefit at all in this area.
Thanks anyway for giving me a nice proof that ammx is useless.
An SIMD can sometimes improve real world (realistic) code density. While the SIMD code shown is likely larger than the non-SIMD version, it is likely smaller than an unrolled non-SIMD version. An SIMD is generally only used in processor intensive code where the alternative is to unroll the loop.

Code density is most important where performance is not sacrificed (I pushed for the Apollo Core to reduce loop overhead like the 68060, reduce BSR/JSR call overhead, improve bitfield performance so bitfields could be used more, etc.). I believe the performance advantages of code density have been overlooked for many years. Judging from the code density and performance of RISC code compression schemes, it is likely that the 68k architecture has an innate 20%+ performance advantage (likely substantially more with small ICache sizes and slower memory) over most uncompressed RISC architectures, due to reduced ICache misses and improved fetch bandwidth. The advantages are likely greatly reduced in an FPGA, where the CPU clock speed is low compared to the memory bandwidth and huge caches are added even for a good code density architecture.

The Apollo Core is designed as a maximum performance, non-upgradeable FPGA processor though. This is evident from the non-upgradeable SIMD sharing registers with the integer unit. Adding single precision float would put fp in the integer register file, and wouldn't it be pretty to extend those integer registers to 128 bits? Also, was encoding room left to add 128 bit SIMD instructions, or will the instructions grow like on x86_64, where encoding space runs low because of antiquated MMX encodings for a 64 bit SIMD?
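(To illustrate the bitfield point with a small example that is not from any post here; register numbers and the field position are arbitrary: a 68020+ bitfield extract against its plain-68000 equivalent.)
Code:
; extract the 5-bit field sitting at bits 20..16 of d1, zero-extended into d0
 bfextu d1{11:5},d0    ; 68020+: one instruction, 4 bytes

; plain-68000 equivalent: three instructions, 10 bytes
 move.l d1,d0
 swap d0               ; the field now occupies bits 4..0 of the low word
 and.l #$1f,d0         ; mask it off (this also clears the upper word)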
matthey is offline  
Old 17 February 2017, 18:27   #124
buggs
Registered User

 
Join Date: May 2016
Location: Rostock/Germany
Posts: 47
Quote:
Originally Posted by meynaf View Post
Unfortunately for you, this thread is about code density, and your example shows no benefit at all in this area.
Thanks anyway for giving me a nice proof that ammx is useless.
Let me see. I use 10 instructions excluding dbf (your example was 17), yet move twice the amount of data in the process. Useless, indeed.

Talking of useless. If it's just about code size for a 32 bit transpose, be my guest:
Code:
 move #1999,d4
 moveq #-16,d5
.loop
 move.l (a0)+,d0        ;a0 b0 (.w)
 move.l (a0)+,d2        ;c0 d0
 move.l (a0)+,d1        ;a1 b1
 move.l (a0)+,d3        ;c1 d1
 translo D0-D3,D0:D1    ;D0: a0 a1 c0 c1 D1: b0 b1 d0 d1
 storem d0,d5,(a1)      ;Bits 7-4 in D5 = 0xf0 -> write upper 32 bit only
 move.l d0,(a3)+
 addq.l #4,a1
 storem d1,d5,(a2)
 move.l d1,(a4)+
 addq.l #4,a2
 dbf d4,.loop
11 Instructions, 14 words (+1 moveq out of loop, +2 move +2 dbf). Yours was 18 words (+2+2).

32 Bit loads/stores. Whatever CPU. Faster. Smaller.

Last edited by buggs; 17 February 2017 at 18:28. Reason: commented storem
buggs is offline  
Old 17 February 2017, 18:59   #125
buggs
Registered User

 
Join Date: May 2016
Location: Rostock/Germany
Posts: 47
Quote:
Originally Posted by matthey View Post
<snip> (I pushed for the Apollo Core to reduce loop overhead like the 68060, reduce BSR/JSR call overhead, improve bitfield performance so they could be used more, etc.). I believe the performance advantages of code density have been overlooked for many years.
Hi Matt. None of that went unheard. DBF runs in half a cycle on the second pipe and doesn't mispredict. Subroutine overhead can be lowered by using the new address registers (scratch) and the AMMX registers (also scratch).
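(A generic illustration of the scratch-register point, not 080-specific; the routine names and bodies are made up, the only point being the save/restore toll a call pays once it runs out of scratch registers.)
Code:
; fits entirely in the conventional scratch set (d0/d1/a0/a1): no save/restore
sum2:
 move.l (a0)+,d0
 add.l (a0)+,d0
 rts

; needs one preserved register, so it pays for a push/pop on every call
sum2b:
 move.l d2,-(sp)
 move.l (a0)+,d2
 add.l (a0)+,d2
 move.l d2,d0
 move.l (sp)+,d2
 rts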

Quote:
Originally Posted by matthey View Post
The Apollo Core is created as a maximum performance non-upgradeable FPGA processor though. This is evident by the non-upgradeable SIMD sharing registers with the integer unit. Adding single precision float would give fp in the integer register file and wouldn't it be pretty to extend those integer registers to 128 bits? Also, was encoding room left to add 128 bit SIMD instructions or will the instructions grow like the x86_64 as encoding space runs low because of antiquated MMX encodings for a 64 bit SIMD?
I'd have liked to have 128 bits, too. But that would have had an impact on achievable clock speeds. Should a future platform offer more resources, there's plenty of encoding space left, so the encodings wouldn't have to grow any wider than what we've got right now. I agree on the FP topic.

In other matters: I wasn't able to answer you in the other thread anymore but the code was changed in a similar way to Don Adan's suggestion and your name appears in the changelog alongside him.
buggs is offline  
Old 17 February 2017, 19:36   #126
meynaf
68k wisdom
meynaf's Avatar
 
Join Date: Nov 2007
Location: Lyon (France)
Age: 44
Posts: 2,134
Quote:
Originally Posted by buggs View Post
Let me see. I use 10 instructions excluding dbf (your example was 17), yet move twice the amount of data in the process. Useless, indeed.
Totally useless, yes. It's larger and not even faster. You just spend more time waiting for memory.


Quote:
Originally Posted by buggs View Post
Talking of useless. If it's just about code size for a 32 bit transpose, be my guest:
Code:
 move #1999,d4
 moveq #-16,d5
.loop
 move.l (a0)+,d0        ;a0 b0 (.w)
 move.l (a0)+,d2        ;c0 d0
 move.l (a0)+,d1        ;a1 b1
 move.l (a0)+,d3        ;c1 d1
 translo D0-D3,D0:D1    ;D0: a0 a1 c0 c1 D1: b0 b1 d0 d1
 storem d0,d5,(a1)      ;Bits 7-4 in D5 = 0xf0 -> write upper 32 bit only
 move.l d0,(a3)+
 addq.l #4,a1
 storem d1,d5,(a2)
 move.l d1,(a4)+
 addq.l #4,a2
 dbf d4,.loop
11 Instructions, 14 words (+1 moveq out of loop, +2 move +2 dbf). Yours was 18 words (+2+2).
Compare with Thorham's version or Phx's version, which were 40 bytes overall, and you'll see that it was clearly not worth it. I count 38 for yours, and that's provided none of the added instructions are extra-large.
How many instructions were added, for just one word gained here and lost several times over elsewhere (see below)?


Quote:
Originally Posted by buggs View Post
32 Bit loads/stores. Whatever CPU.
But "translo" isn't documented anywhere i know of so i have to trust you ?


Quote:
Originally Posted by buggs View Post
Faster.
Nope. I told you why.


Quote:
Originally Posted by buggs View Post
Smaller.
Maybe, but only because i haven't required that registers be saved (for this routine they normally are, as it executes in the vbl). Looking at this routine alone you can be shorter, but looking at the whole program you are not.

And ammx isn't needed anyway to make it that short, as normal instructions, if extended, could do the same and better. A simple exg does the trick and doesn't require adding a whole bunch of registers:
Code:
 move.w #1999,d0
.loop
 movem.l (a0)+,d1-d4
 swap d3
 exg.w d1,d3
 move.l d1,(a1)+
 swap d3
 move.l d3,(a2)+
 swap d4
 exg.w d2,d4
 move.l d2,(a3)+
 swap d4
 move.l d4,(a4)+
 dbf d0,.loop
Not only shorter than yours (11 instructions in loop, 32 bytes total), but also a lot more readable. And, of course, not using any extra register.

By the way, if 68080 is able to merge word writes to chipmem, then the 16-bit version can be used and the loop becomes just 8 bytes.
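(For reference, a sketch of that 16-bit version, assuming the same source layout and the same 32000 bytes of data as the original example; the four word moves come to 8 bytes and the dbf adds another 4.)
Code:
 move.w #3999,d0
.loop
 move.w (a0)+,(a1)+
 move.w (a0)+,(a2)+
 move.w (a0)+,(a3)+
 move.w (a0)+,(a4)+
 dbf d0,.loop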
meynaf is offline  
Old 17 February 2017, 21:50   #127
buggs
Registered User

 
Join Date: May 2016
Location: Rostock/Germany
Posts: 47
Quote:
Originally Posted by meynaf View Post
Totally useless, yes. It's larger and not even faster. You just spend more time waiting for memory.
That such an optimization is pointless when writing to chipram was clearly stated in my initial post. Remember? "Whatever CPU". This "Whatever CPU" does actually have FastRam, along with planar video modes driven from it. None of the other code posted in this thread has a comparable ratio between code size and the amount of data processed (see MattHey).

Quote:
Originally Posted by meynaf View Post
Compare with Thorham's version or Phx's version which were 40 bytes overall and you'll see that it was clearly not worth. I count 38 for yours, and that's provided none of the added instructions are extra-large.
How many instructions added for just one word gained here and lost several times elsewhere (see below) ?
You set the ground rules and the reference implementation. I followed them to the letter and provided an example of how else one could implement the given procedure. I actually didn't mean to undercut Phx (I still owe him).
Thanks to him, everyone can verify what I stated. Just add "machine ac68080" on top of the code and run it through VASM. Wholly transparent.

Quote:
Originally Posted by meynaf View Post
But "translo" isn't documented anywhere i know of so i have to trust you ?
Do I look like I need _you_ to trust anything? Hilarious, coming from a dude who's in the habit of silently editing posts. Shall I repost?

I've provided verifiable code, phx provided the tool to assemble it.

You've actually seen code making use of it, but all you picked on was a Nop I left there as a joke.

Quote:
Originally Posted by meynaf View Post
Maybe, but only because i haven't required that registers must be saved (for this routine they normally are, it executes in vbl). Looking at this routine you can be shorter, but looking at the whole program you are not.
Shifting targets again, are we? No such condition was stated anywhere before.

Quote:
And ammx isn't needed anyway to make it that short, as normal instructions, if extended, could do the same and better. A simple exg does the trick and doesn't require adding a whole bunch of registers:
Code:
 move.w #1999,d0
.loop
 movem.l (a0)+,d1-d4
 swap d3
 exg.w d1,d3
 move.l d1,(a1)+
 swap d3
 move.l d3,(a2)+
 swap d4
 exg.w d2,d4
 move.l d2,(a3)+
 swap d4
 move.l d4,(a4)+
 dbf d0,.loop
Not only shorter than yours (11 instructions in loop, 32 bytes total), but also a lot more readable. And, of course, not using any extra register.
By the way, if 68080 is able to merge word writes to chipmem, then the 16-bit version can be used and the loop becomes just 8 bytes.
TRANSHI/TRANSLO were implemented for 64-bit transposes in a 64-bit ISA, to augment row/column algorithms. Just because they happen to fit your call for proposals doesn't mean they don't have a broader scope.

Besides, didn't "Whatever CPU" mean "existing device" ?
Code:
vasm 1.7h (c) in 2002-2017 Volker Barthelmann
vasm M68k/CPU32/ColdFire cpu backend 2.2 (c) 2002-2016 Frank Wille
vasm motorola syntax module 3.9d (c) 2002-2016 Frank Wille
vasm test output module 1.0 (c) 2002 Volker Barthelmann

fatal error 2035 in line 59 of "bla.s": illegal opcode extension
> exg.w    d0,d1
aborting...
And I thought your UAE benchmark was lame already.
buggs is offline  
Old 17 February 2017, 23:16   #128
matthey
Registered User
 
Join Date: Jan 2010
Location: Kansas
Posts: 951
Quote:
Originally Posted by buggs View Post
Hi Matt. None of that went unheard. DBF runs in half a cycle on the second pipe and doesn't mispredict. Subroutine overhead can be lowered by using the new address registers (scratch) and the AMMX registers (also scratch).
The 68060 was probably doing some kind of optimization with the 2nd integer pipe also, because sometimes there was barely any overhead to short loops and sometimes short loops needed unrolling. The 68060 did have less loop overhead than its 68k predecessors at all times, which is great, but zero-overhead loops are the holy grail for improving code density (so much waste otherwise). I'm surprised a loop buffer cannot unroll loops and remove most of the penalty, especially on the last predicted loop iteration. DBF needs to predict the loop fall through or there is really no advantage to it over separate decrement and branch instructions, as I told Gunnar.
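(For reference, the split decrement-and-branch pair that DBF is measured against; both forms are two words, so any advantage has to come from how the branch hardware treats DBF.)
Code:
 dbf d0,.loop     ;one instruction, 2 words: decrement d0.w, branch until it reaches -1

 subq.w #1,d0     ;the split equivalent, also 2 words in total
 bcc.s .loop      ;carry clear = d0.w was non-zero before the subtract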

It is nice to have more integer scratch registers but it is not worth having a messy and confusing ISA, IMO. The 68k has already confused enough programmers (especially compiler writers) with the An/Dn split and lack of orthogonality which offsets many of the gains in code density and substantial reduction in cache/memory accesses.

Quote:
Originally Posted by buggs View Post
I'd have liked to have 128 bits, too. But that would have had an impact on achievable clock speeds. Should a future platform offer more resources, there's plenty of encoding space left, so the encodings wouldn't have to grow any wider than what we've got right now. I agree on the FP topic.
It would be possible to reuse the encoding space if backward compatibility was not necessary and the SIMD was optional in the ISA. I guess that brings me to a question which you asked me once.

Quote:
Originally Posted by buggs View Post
Otherwise, thank you for the update regarding the ISA. Which brings me to one more thing: How can one reliably detect a CPU with the feature set in question (at different core development levels)? Do we have to probe instruction by instruction or is there already something like "CPUID"?
http://eab.abime.net/showpost.php?p=1104088&postcount=2

I suppose the chances of the SIMD being optional and there being some kind of CPUID support aren't very good.
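(Background, not taken from the linked post: the only stock detection mechanism on classic 68k Amigas is exec's AttnFlags, which knows nothing about AMMX or core revisions, which is exactly why the question comes up. A minimal sketch:)
Code:
; classic 68k-family check via exec's AttnFlags (UWORD at ExecBase offset $128)
; AFB_68040 is bit 3, AFB_68060 is bit 7; nothing here describes AMMX
 move.l 4.w,a6          ;SysBase
 move.w $128(a6),d1     ;AttnFlags
 moveq #0,d0            ;0 = below 68040
 btst #3,d1
 beq.s .done
 moveq #40,d0           ;at least a 68040
 btst #7,d1
 beq.s .done
 moveq #60,d0           ;the 68060 bit is set
.done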

Quote:
Originally Posted by buggs View Post
In other matters: I wasn't able to answer you in the other thread anymore but the code was changed in a similar way to Don Adan's suggestion and your name appears in the changelog alongside him.
Cool, Don's idea was workable.

Quote:
Originally Posted by meynaf View Post
By the way, if 68080 is able to merge word writes to chipmem, then the 16-bit version can be used and the loop becomes just 8 bytes.
Your code density example writing to chipmem was really about the worst possible "contest" example. Way too specific, and useless on any semi-modern CPU or hardware. It's not even worth arguing about.
matthey is offline  
Old 18 February 2017, 00:39   #129
buggs
Registered User

 
Join Date: May 2016
Location: Rostock/Germany
Posts: 47
Quote:
Originally Posted by matthey View Post
<snip> DBF needs to predict the loop fall through or there is really no advantage to it over separate decrement and branch instructions, as I told Gunnar.
It does. Part of Gold2.

Quote:
Originally Posted by matthey View Post
It is nice to have more integer scratch registers but it is not worth having a messy and confusing ISA, IMO. The 68k has already confused enough programmers (especially compiler writers) with the An/Dn split and lack of orthogonality which offsets many of the gains in code density and substantial reduction in cache/memory accesses.
Compiler code is for convenience (and portability). Then again, we're here for the ASM fun. On topic: true GPRs would certainly be a bonus in some places - if one can put the concept wholesale into the ISA as 16-bit instructions. Otherwise, appropriate HW instruction fusing could work just as well and stay compatible with existing software.

Quote:
It would be possible to reuse the encoding space if backward compatibility was not necessary and the SIMD was optional in the ISA. I guess that brings me to a question which you asked me once.
http://eab.abime.net/showpost.php?p=1104088&postcount=2
I suppose the chances of the SIMD being optional and there being some kind of CPUID support aren't very good.
Seems like ages back.
A CPUID work-alike is not off the table, that much I can say at this point. The timeframe for the implementation (announcement) is not fixed atm.

Quote:
Originally Posted by matthey View Post
Cool, Don's idea was workable.
Sure was. I just left out the neg.l and simply incremented d6 in the loop.
buggs is offline  
Old 18 February 2017, 11:11   #130
meynaf
68k wisdom
meynaf's Avatar
 
Join Date: Nov 2007
Location: Lyon (France)
Age: 44
Posts: 2,134
Quote:
Originally Posted by buggs View Post
That such an optimization is pointless when writing to chipram was clearly stated in my initial post.
So you knew it full well, but acted against it nevertheless.


Quote:
Originally Posted by buggs View Post
Remember? "Whatever CPU". This "Whatever CPU" does actually have FastRam, along with planar video modes out of it. None of the other code posted in this thread has a comparable ratio between code size and amount of data processed (see MattHey).
This code already does 100 fps on unexpanded A1200. So on your "whatever cpu" you've gained 1% of a millisecond per frame. What an achievement.


Quote:
Originally Posted by buggs View Post
You set the ground rules and the reference implementation. I followed them to the letter and provided an example of how else one could implement the given procedure. I actually didn't mean to undercut Phx (I still owe him).
Thanks to him, everyone can verify what I stated. Just add "machine ac68080" on top of the code and run it through VASM. Wholly transparent.
VASM doesn't say if the code will actually work or not.


Quote:
Originally Posted by buggs View Post
Do I look like I need _you_ to trust anything?
So you came here just to attack me personally. Thanks for confirming.


Quote:
Originally Posted by buggs View Post
Hilarious, coming from a dude who's in the habit of silently editing posts.
This is just plain wrong, and you are being insulting. First warning.
Sometimes when i've missed something in a post i edit it asap. Many other people do that and it is normal. It doesn't happen often enough to call it a habit. Not my fault if you rushed in between the initial post and the edited one.


Quote:
Originally Posted by buggs View Post
Shall I repost?
Repost what?


Quote:
Originally Posted by buggs View Post
I've provided verifiable code, phx provided the tool to assemble it.
No, it's not verifiable. I don't have a machine that can run it.


Quote:
Originally Posted by buggs View Post
You've actually seen code making use of it, but all you picked on was a Nop I left there as a joke.
So you leave useless instructions in as a joke?
A comment such as "TODO: awful, go away!" is a joke?

Sure, i've seen the code, but seeing the instructions there, especially in the form of macros, doesn't tell me what they can do.


Quote:
Originally Posted by buggs View Post
Shifting targets again, are we? No such condition was stated anywhere before.
I didn't state it as a condition. I just pointed out that in real life, your benefit would have been zero.


Quote:
Originally Posted by buggs View Post
TRANSHI/TRANSLO were implemented for 64-bit transposes in a 64-bit ISA, to augment row/column algorithms.
That doesn't change the fact that they're not documented on the "up to date" page that was linked in the other thread.
And the "touch" instruction that occasionally shows up in that riva code isn't documented either.


Quote:
Originally Posted by buggs View Post
Just because they happen to fit your call for proposals doesn't mean they don't have a broader scope.
By definition, simd instructions don't have a broad scope.
In many cases they can't be used, even in their own domain (e.g. the main loop of flac decoding).


Quote:
Originally Posted by buggs View Post
Besides, didn't "Whatever CPU" mean "existing device" ?
I don't consider the 080 as "existing device". It's just software. It's not stable and will likely change in the future. Perhaps these instructions will just get removed when a better solution will be found.

So this appears to be 68k extension vs 68k extension. The fact that one has been implemented by its author doesn't change a thing - after all, i could just implement mine with an emulation library (which would then fail on the '080 due to encoding conflicts, as a quite funny side effect) and then pretend it's an "existing device" as well.

My proposals are better than what has actually been put in. I just used one of many useful extensions to show it.


Quote:
Originally Posted by buggs View Post
And I thought your UAE benchmark was lame already.
What a dirty game you're playing here. Second warning. Next time, i click on "report post".

My UAE benchmark just said that extra-large instructions are not necessary to have a fast cpu.

This is the main problem with this stuff: short-sightedness. Perhaps these instructions can provide some benefit *right now*. But change something in the architecture and they become the wrong choice. You don't need simd for audio if you have a powerful enough general-purpose cpu. You don't need simd for video if you have a gpu. Simd extensions in cpus are technically archaic, a wrong turn driven by the marketing need to write bigger numbers on paper.



Quote:
Originally Posted by matthey View Post
It is nice to have more integer scratch registers but it is not worth having a messy and confusing ISA, IMO. The 68k has already confused enough programmers (especially compiler writers) with the An/Dn split and lack of orthogonality which offsets many of the gains in code density and substantial reduction in cache/memory accesses.
A messy and confusing ISA? How dare you write that? You're gonna be shot down in flames!
(As you can guess, i'm not saying i disagree, of course.)


Quote:
Originally Posted by matthey View Post
I suppose the chances of the SIMD being optional and there being some kind of CPUID support aren't very good.
You don't need CPUID. There is a bit in SR to "declare" the use of these features. Just have all incompatible encodings trap if this bit is cleared, and the problem is solved.


Quote:
Originally Posted by matthey View Post
Your code density example writing to chipmem was really about the worst possible "contest" example. Way too specific, and useless on any semi-modern CPU or hardware. It's not even worth arguing about.
It's true that this example has been misinterpreted and abused, and i could've been more careful, but i don't agree with "worst possible" and "too specific and useless". Maybe not worth arguing about, but not worth laughing at either. The other guy may have written a few inanities on his comparison page, especially about the 68k, but what you've just written here isn't much better.

Now if you think my example wasn't good, find a better one.
meynaf is offline  
Old 18 February 2017, 13:02   #131
meynaf
68k wisdom
meynaf's Avatar
 
Join Date: Nov 2007
Location: Lyon (France)
Age: 44
Posts: 2,134
Once and for all...


Seems this thread is heading down a slippery slope.
This has to stop, and to stop *now*.
Being the OP, it's kinda my thread, so perhaps i can exert some control over it.

This is why, from now on, all personal attacks will be immediately reported to moderation.


My first example gives 100fps on a bare 'ec020, but only if doing 32-bit mem accesses.
My goal was to check whether this code could be rewritten to be smaller, while retaining its performance on such a machine.
And, for academic purposes, to see how other cpu families would have handled it. I don't give a shit about a simd version of it.
The code mustn't be slower, but being faster is irrelevant. What was wanted is smaller code at the same speed or better.
Simple, fair and square.
Some have played that game fine.
But others gave irrelevant code and personal attacks.
Should we argue about this all day long? No.
I explained what i wanted; i'm not a native english speaker and perhaps i haven't been clear enough. That's not a reason to make the thread go haywire.
I will not repeat this explanation -- just redirect to it, as many times as necessary.

Someone submitting an example in this thread sets up the rules. If he forgets one, he can add it later.
Requirements can also be added afterwards if they were clear in the original intent.
So don't argue over details.

I think it is clear by now to everyone that I do have a strong distaste for all that in-cpu vector stuff (but not simd in general).
While we could argue about it for days, that can only bring flame wars. If you like checking bit #55 of rAX after CPUID in all your programs and writing two versions of every important routine, that's your problem - just leave the others in peace and don't advertise it like that.
And here it is OT anyway.

It is my right to not accept code containing things that i regard as a bad decision, in the same way as anyone here could submit an example and say some specific things shouldn't be used.
If it is my example, I decide. If it is someone else's example, he decides.

When in doubt about the exact requirements, you can write "if..." like i did in my reply to robinsonb5's post (if the index isn't needed, if ccr is enough, etc).


Now things should be clear. We're all supposed to be adults in here, not kids, so behave accordingly. If you think someone is not behaving like one, explain - don't criticize harshly. Easy or not?
meynaf is offline  
Old 18 February 2017, 13:15   #132
grond
Registered User

 
Join Date: Jun 2015
Location: Germany
Posts: 187
So basically the rules are that in the end your code must turn out to be the best?
grond is offline  
Old 18 February 2017, 13:35   #133
meynaf
68k wisdom
meynaf's Avatar
 
Join Date: Nov 2007
Location: Lyon (France)
Age: 44
Posts: 2,134
No. But according to the rules above, your post - which is a clear personal attack - has been reported to moderation.
meynaf is offline  
Old 18 February 2017, 13:46   #134
grond
Registered User

 
Join Date: Jun 2015
Location: Germany
Posts: 187
Well, yes, the other rule is that any disrespect to the King will be penalised.
grond is offline  
Old 18 February 2017, 13:54   #135
meynaf
68k wisdom
meynaf's Avatar
 
Join Date: Nov 2007
Location: Lyon (France)
Age: 44
Posts: 2,134
No. It is just disrespect. Now can you stop this?
meynaf is offline  
Old 18 February 2017, 14:40   #136
DamienD
Global Moderator

DamienD's Avatar
 
Join Date: Aug 2005
Location: Sydney / London
Age: 40
Posts: 7,752
Why do all these threads keep going downhill; it's usually the same people arguing???

I really don't have time to be reading through 7 pages of bickering between you guys... I need to prepare for a new job over the weekend so...

Closed for now until another GM has time to review.
DamienD is offline  
 

