24 December 2007, 10:50   #19
meynaf
son of 68k
 
 
Join Date: Nov 2007
Location: Lyon / France
Age: 51
Posts: 5,350
Quote:
Originally Posted by Thorham
I can't remember how often I've used that method, too
And you can't do that on today's machines.

Quote:
Originally Posted by Thorham
Very tough to find simple explanations for mpeg decoding. However, I was able to work out that the Huffman code seems to differ from normal Huffman code in that it outputs variable length data (it has to do with the scaling factors, if I'm not mistaken) instead of fixed length data. This has led me to believe that the only thing that happens during the Huffman decoding stage is scaling the data to some fixed length. If this is correct, then it should not be too hard to extract this from one of the source codes you have and make your own routine based on this.
There are more computations than that: the code also performs on-the-fly requantization.
According to libmad's layer3.c:
Code:
 * The Layer III formula for requantization and scaling is defined by
 * section 2.4.3.4.7.1 of ISO/IEC 11172-3, as follows:
 *
 *   long blocks:
 *   xr[i] = sign(is[i]) * abs(is[i])^(4/3) *
 *           2^((1/4) * (global_gain - 210)) *
 *           2^-(scalefac_multiplier *
 *               (scalefac_l[sfb] + preflag * pretab[sfb]))
 *
 *   short blocks:
 *   xr[i] = sign(is[i]) * abs(is[i])^(4/3) *
 *           2^((1/4) * (global_gain - 210 - 8 * subblock_gain[w])) *
 *           2^-(scalefac_multiplier * scalefac_s[sfb][w])
 *
 *   where:
 *   scalefac_multiplier = (scalefac_scale + 1) / 2
Not simple, really
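
For illustration, here is a minimal floating-point sketch of that long-block formula (real decoders, libmad included, use fixed point plus lookup tables for the ^(4/3) and 2^x terms; the parameter names just mirror the variables in the comment above):
Code:
/* Floating-point sketch of the long-block requantization formula.
 * Not decoder code - just the quoted math, spelled out. */
#include <math.h>

double requantize_long(int is,             /* Huffman-decoded value is[i] */
                       int global_gain,
                       int scalefac_scale,
                       int scalefac_l,     /* scalefac_l[sfb]             */
                       int preflag,
                       int pretab)         /* pretab[sfb]                 */
{
    /* scalefac_multiplier is 0.5 or 1 depending on scalefac_scale */
    double scalefac_multiplier = (scalefac_scale + 1) / 2.0;

    double magnitude = pow(fabs((double)is), 4.0 / 3.0);
    double gain      = pow(2.0, 0.25 * (global_gain - 210));
    double scaling   = pow(2.0, -(scalefac_multiplier *
                                  (scalefac_l + preflag * pretab)));
    double xr = magnitude * gain * scaling;

    return (is < 0) ? -xr : xr;            /* reapply sign(is[i]) */
}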

Quote:
Originally Posted by Thorham
After searching the web for a while, it began to dawn on me that you have two choices: 1. Go through the trouble of learning how layer 3 really works, including the math. 2. Go through the trouble of understanding someone else's source code. Personally I'm not a math guy, as you know, so I would definitely go for option two. Of course, you probably came to the same conclusion.
I did. Definitely option 2.

Quote:
Originally Posted by Thorham
I can certainly have a go at it. However, the sound output is just about the last thing that should be optimized, because of the low bandwidth requirements CD quality sound has: only about 176 KB per second in raw format. Although I really don't mind having a go at it (and could actually enjoy doing so), the CPU intensive parts (read: the hard parts) are where the real profit is. But you knew that, didn't you?
That part plays the same role as the ham rendering does compared to the jpeg decoding proper, so it's not useless to check.

Remember that we can't play that 16-bit 44.1 kHz data directly; we have to downsample it first and prepare it for 14-bit output. My code does this in 5:3 instead of the usual 2:1, giving 26460 Hz instead of 22050 Hz (better quality). But, of course, this takes some time.
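
In outline, the idea is something like this (just a rough C sketch, not my actual asm: the 5:3 step here is a plain drop-sample version with no interpolation, and the split is the usual two-channel trick of high byte at volume 64 plus the next six bits at volume 1):
Code:
/* Sketch only: 5:3 decimation (44100 * 3/5 = 26460 Hz) followed by a
 * 14-bit split across two 8-bit channels. A real resampler would
 * interpolate instead of dropping samples, and hi/lo would live in
 * chip RAM. Returns the number of output samples written; hi and lo
 * must each hold at least 3 * (in_len / 5) bytes. */
#include <stdint.h>

long downsample_5_3_split14(const int16_t *in, long in_len,
                            int8_t *hi, int8_t *lo)
{
    static const int pick[3] = { 0, 2, 4 };  /* keep 3 of every 5 samples */
    long o = 0;

    for (long i = 0; i + 4 < in_len; i += 5) {
        for (int k = 0; k < 3; k++, o++) {
            int16_t s = in[i + pick[k]];
            hi[o] = (int8_t)(s >> 8);           /* top 8 bits -> play at volume 64 */
            lo[o] = (int8_t)((s >> 2) & 0x3F);  /* bits 7..2  -> play at volume 1  */
        }
    }
    return o;
}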

When I use mpega I'm often at 95% CPU use (when there aren't gaps in the replay!), so it's worth removing whatever we can.

This code must write to chip memory, and there are nasty divides in it. You surely know these things aren't fast.
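
About the divides, the classic workaround (a generic example only, not necessarily applicable here) is to replace a division by a known constant with a multiply by a scaled reciprocal plus a shift, since DIVU/DIVS cost far more cycles than MULU on the 680x0:
Code:
/* Generic example: x / 5 for any 16-bit x, using one multiply and one
 * shift instead of a hardware divide. 52429 / 2^18 is just above 1/5,
 * and the error is small enough that the result matches exact integer
 * division for every x in 0..65535. */
#include <stdint.h>

static uint16_t div5(uint16_t x)
{
    return (uint16_t)(((uint32_t)x * 52429u) >> 18);
}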