English Amiga Board


Old 03 December 2008, 14:06   #81
hit
Registered User
 
Join Date: Jun 2008
Location: planet earth
Posts: 1,115
Quote:
Originally Posted by RichAplin View Post
[B]
Short answer:
Yes.
I went through all the texts you wrote, but I'll take the short answer. Simply great progress!

Edit: while googling for Atmel specs, I found a little site (sort of a hobbyhorse showroom only?!) which has some development boards based on Atmels. Worth a look, at least to get an idea of what's possible to build upon them.
( http://www.siphec.com/ )

Last edited by hit; 03 December 2008 at 14:14. Reason: forgot the url
hit is offline  
Old 03 December 2008, 15:17   #82
gizmomelb
Registered User
 
Join Date: Sep 2005
Location: melbourne
Age: 55
Posts: 541
richaplin - thanks for taking the time to write long, techie answers to things - I love reading these types of things even if I don't fully understand it
gizmomelb is offline  
Old 03 December 2008, 20:20   #83
RichAplin
Registered User
 
Join Date: Oct 2008
Location: san francisco/usa
Posts: 176
Quote:
Originally Posted by gizmomelb View Post
richaplin - thanks for taking the time to write long, techie answers to things - I love reading these types of things even if I don't fully understand it
Well we're still in the honeymoon period here; lots of geeky answers. At some point I'll start saying things like "...is 11024 bits because the pixies told me", and "deactivating the disk write line early is important because life just isn't fair" and then you'll know I'm hopelessly out of my depth.

;-)
RichAplin is offline  
Old 04 December 2008, 01:49   #84
RichAplin
Registered User
 
Join Date: Oct 2008
Location: san francisco/usa
Posts: 176
[pointless side idea]
I also saw that thing on the SPS site about the Psygnosis format that really does use a better encoding system than MFM, which is clever. I bet they had a hard time duplicating it! Most of the duplicators I met were rough, hard-drinking, swarthy men not particularly enamored of geeks inventing stuff to make their lives harder, lose them money by screwing up their yields, and cut into their valuable Friday night drinking time.

I was thinking we should have a fun competition for Cyclone20 owners to see who can design a disk format that gets the most usable data on a DD disk. To make it more fun, we can say that the disk only has to read back on the original drive, i.e. the algorithm must work for everyone on their own hardware but the disks don't have to be interchangeable. This is a silly game, but somehow I think we could get huge increases because we have much better disk reading technology now.

Aaanyway..

I see only one unfixable problem with writing any disk perfectly (reading a disk 'perfectly', i.e. heavily oversampled, is not a problem):

Guessing the track gap:
Trashing half a dozen bits when you turn off the write gate appears to be unavoidable, although I am about to do some experiments with writing so we'll see. The original Cyclone software was mostly just an algorithm to work out where the track gap should be.
Even if we can time-stretch the track bits to fit one revolution of the disk exactly (and this is clearly not hard with dithering), the instantaneous speed wobble of the drive and the erase head turn-off time will, I expect, mean we unavoidably junk a few bits at the 'seam'. We'll see for sure, I'll go play with it, but clearly this is a fundamental limitation of floppy drives. We'd need one that could turn off the write and erase heads fantastically quickly, and even then microscopic rotational speed variations will probably hose us.
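To put a rough number on that speed-wobble limit, here's a back-of-envelope sketch in Python. All the figures (300 RPM, 2us bitcells, 0.1% wobble) are illustrative assumptions, not measurements from anyone's drive:

```python
# Back-of-envelope: how many bitcells a given speed wobble costs at the seam.
# Assumed figures: 300 RPM drive, 2 us bitcells (DD Amiga-style MFM).

def seam_error_bits(rpm=300.0, wobble_pct=0.1, bitcell_us=2.0):
    rev_us = 60e6 / rpm                  # one revolution in us (200,000 at 300 RPM)
    timing_error_us = rev_us * wobble_pct / 100.0
    return timing_error_us / bitcell_us  # bitcells of mismatch where the write wraps

# A 0.1% instantaneous speed variation misaligns the seam by ~100 bitcells:
print(seam_error_bits())  # 100.0
```

Even a wobble an order of magnitude smaller still junks a handful of bits, which is why the trashed region has to be steered into the gap.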

Oh hey... mind you...actually... I just had a thought on a really complex trick that might really help alleviate that speed variation thing.. but I'll save it till later. ;-)

Let's just say that it is probably technically impossible to write a track that has no bit errors on it at all (unless you randomly get lucky and your trashed pattern matches what was 'underneath' it) - an error at the point where you stop writing is inevitable. Naturally you normally arrange for this error to fall in the 'gap'.
This fact alone nixes the idea of a copier that can copy anything without any knowledge of the format, of course.
I can verify this easily with the setup I have so I should probably go do that, but you know how I like to ramble...

AGC Noise
There is a second, more minor problem, which is a strategy to detect AGC-noise (one form of what people call "weak bits"; the other being PLL sync loss) from a completely unmagnetised section of disk, and come up with a way of generating a similar-reading pattern onto a disk that has already been formatted(!); i.e. ideally we'd be able to 'unformat' sections of the track. Playing with that will be great fun. ;-)

I see IFW and crew have support for tracks containing noise in IPF; although you don't have the "guess the track gap" problem because that's an artifact of an imperfect disk writing process.

BTW everyone's being super helpful on this. Thanks! ;-)
RichAplin is offline  
Old 04 December 2008, 12:32   #85
IFW
Moderator
 
IFW's Avatar
 
Join Date: Jan 2003
Location: ...
Age: 52
Posts: 1,838
As for gaps; there are disks where the protection will fail if copying starts from the index pulse, and the track is filled with seeming rubbish to make guessing the correct position practically impossible without actually seeing the code that reads it.
IFW is offline  
Old 04 December 2008, 13:36   #86
philpem
 
Posts: n/a
Well if this thread isn't a lesson for me to stop being so lazy, I don't know what is!

About two years ago, I hashed out the first iteration of something very similar to this -- a CPLD, a RAM chip and a disc drive interface, which could (theoretically) be used to read and write any format of floppy disc. The HDL code passed the test routines, and the syncword detector worked pretty well in hardware as well (I flashed it into a Xilinx XC9572XL CPLD, wired up a floppy drive and watched it on a logic analyser -- the LA traces are online at http://blog.philpem.me.uk/?p=129). The whole project stopped dead when I realised a 288-macrocell CPLD wasn't big enough to hold all the logic.

At one point, I was in contact with a few of the SPS bods about what would actually be required, hardware-wise, and why the Catweasel wasn't suitable. Much swotting up later and I came to the conclusion that my hardware wouldn't manage it either (the CPLD was too slow) and shelved the project.

Main reason I wanted to do mine was because the CW was effectively a closed platform. I wanted to go the other way -- a completely open hardware platform. Make the hardware, and if it doesn't do what you want, change the HDL and firmware as you need to.

I'm working on the software side of things as part of a university project -- at this point I'm aiming for something that can read and write FDI files to/from discs, and maybe master discs from a data file and a format specification ("an AmigaDOS disc has 11 blocks, 80 tracks, 512 bytes per block, headers look like this...").

As a result of the software work, I'm having to finish off the hardware -- I'm using an Altera Cyclone2 FPGA (on an Altera Cyclone2 Development Board, aka Terasic DE1), a PIC18F4550 (for the USB interface) and a bunch of 74LS07s (FDD buffer/interface circuitry). At the moment it works on the testbench, but not on the hardware. Yet.

As far as data storage goes, it works like the Catweasel. A counter measures the time between flux transitions. The difference is that the clock rate on my box's counter is about 20MHz (with 40MHz as an option). That's 50ns bitcell accuracy (same as a Trace) with 25ns as a firmware option. Timing accuracy is currently 8-bit, but can be tweaked higher. Each sample is 16 bits, LSByte is the timing value, MSbyte is a status byte (counter overflow, index, and a few others I can't remember right now)
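That 16-bit sample format (timing in the LSByte, status flags in the MSByte) unpacks into plain flux intervals easily. A minimal Python sketch; the flag bit positions are hypothetical, since the post doesn't give them:

```python
# Decode a stream of 16-bit samples into flux-transition intervals in counter
# ticks. Per the post: low byte = timing value, high byte = status flags.
# The bit positions of the flags below are assumptions for illustration.

OVERFLOW_BIT = 0x01  # assumed: counter wrapped with no transition seen
INDEX_BIT    = 0x02  # assumed: index pulse seen during this interval

def decode_samples(samples):
    intervals = []
    pending = 0                    # ticks accumulated across counter overflows
    for s in samples:
        timing = s & 0xFF
        status = (s >> 8) & 0xFF
        if status & OVERFLOW_BIT:
            pending += 256         # a full counter period elapsed, keep counting
        else:
            intervals.append(pending + timing)
            pending = 0
    return intervals

# At 20 MHz (50 ns ticks) a 4 us MFM interval is 80 ticks; a long interval
# spanning one overflow comes out as 256 + timing:
print(decode_samples([0x0050, 0x0100, 0x0030]))  # [80, 304]
```

The overflow flag is what lets an 8-bit timing value cover arbitrarily long gaps (unformatted regions, for instance) without widening every sample.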

Writing back is done with a finite state machine / microsequencer. Effectively it's a special-purpose computer -- you can make it wait for an index pulse, a track-mark (a hard-sector track mark, that is, two index pulses within x milliseconds) or a number of consecutive index pulses. That makes it a lot easier to write to hard-sectored discs.

I posted a lot of the development notes to the Classiccmp mailing list -- www.classiccmp.org. If I can help you out with your project, just yell. I don't believe in hoarding information.
 
Old 04 December 2008, 20:17   #87
RichAplin
Registered User
 
Join Date: Oct 2008
Location: san francisco/usa
Posts: 176
Nice! Yes I see a number of people have done work on home-made disk controllers over the years; it's basically just a high-speed 1-bit sampler as you point out.

I've been fortunate enough that my flash of interest in the project roughly coincides with the availability of extremely cheap fast microcontrollers with built-in everything so I'm just doing the whole thing in software as you see.

Other controllers
What's nice is that with all of these parallel solutions (yours, mine, catweasel, etc), they basically all do the same thing, so the majority of the work (the PC software) can be made compatible with a variety of controller designs.
Our baseline spec is clearly ~50ns resolution sampling and playback with accurate index measurement and.. erm.. that's it.
I reckon we're dealing with 50-90k flux transitions per track for DD disks (depending on encoding), and everyone's happy with 8-bit-with-overflow-flag for the sampling.

Various designs can meet that requirement, so to make them all work from one PC app is probably not rocket science. Clearly we'll have all designed different protocols for talking to our controller boards, but life would be no fun otherwise.

Generic raw disk toolbox - "Mr. Floppy"
What I'm sort of personally steering towards at the moment is a C# Windows app that's a kind of 'super toolbox' for floppies - lets you zoom in on the tiniest timing detail, see graphs of density, cell histograms, speed wobble, formatting, etc. This is mostly to scratch my own itch because I feel like it, and to provide some sample code for people to hack apart and do whatever with.
C# is basically awesome for this kind of thing, it's like Lego for Windows. I appreciate there may be some throwing up of hands in horror at such a concept, but hey, you can always fork/rewrite in whatever language floats your boat. Personally I've found nothing better for writing desktop apps in half an hour with no dicking about, and hey, I'm not making anyone do anything here. If it dies on the vine because people aren't interested in writing C#, that's fine, someone will pull out the important bits and convert them to C or whatever soon enough.

Istvan and team have done an amazing amount of work understanding and decoding disk formats and I see no reason to repeat it of course. One of the things clearly agreed is that everyone is interested in adding support for our hardware to their existing code.

It would be great if more of the SPS code was available to the general public (so we can have a kind of open-source MAME for floppies) but I know as well as anyone there's much work and hassle involved in doing that. Also I'm not sure anyone can be bothered to convert C/C++ to C# just to fit in with my desktop app, so as mentioned above my C# plaything's lifespan may be limited and someone else may just put my functionality in their app instead. It's all good.

BTW as for scripting languages for disk formats; I've written so many frickin interpreters in my life I long since lost count, and can't be arsed to do another one. For "Mr. Floppy" I might use Lua. We could have a couple of text entry boxes (one for encode, one for decode) where you can paste a Lua script, e.g. encodes ADFs to MFM, whatever floats people's boat, and hit "Run" and it'll read or write your track as desired. Sounds like a nice bit of duct-taped fun. I just googled "c# lua" and the top result contains the phrase "so easy that it's almost embarrassing".


I actually GOT MY ARM BOARD TODAY... but it's the one I ordered for a different project. This one has an 800x600 LCD, a gig of RAM, and can emulate the whole Amiga, not just the floppy drive - but there's no fun in that. ;-) Back to the Atmel!

Last edited by RichAplin; 04 December 2008 at 21:56.
RichAplin is offline  
Old 04 December 2008, 22:50   #88
mr.vince
Cheesy crust
 
mr.vince's Avatar
 
Join Date: Nov 2008
Location: Hawk's Creek
Age: 48
Posts: 1,383
As long as this works and as long as this is open source... anyone can program their own hardware in whatever they want. Once the methods on how to do it are there, we're all fine. A good story can be told in any language.

If anyone wants to rewrite it in assembler - well, be my guest. As long as you find it easy to debug and maintain...

Did I mention that my board left the Ukraine today? I wonder if I will be getting it before the holidays...

Best,
Chris
mr.vince is offline  
Old 05 December 2008, 19:02   #89
a500l0ver
 
Posts: n/a
I'm assuming you guys have seen

http://www.techtravels.org/amiga/amigablog/

right?

Parallax SX28 microcontroller at 50MHz
32K of FRAM
USB to TTL converter (3Mbps capable, but running at 2Mbps)

External USB Amiga floppy drive controller. The application on the PC side is a Java app. Creates .ADFs from floppies using a standard PC drive.

I have used two different read methods

Original: a dual-interrupt ISR that triggers on either a 2us timer or a low-going pulse edge. Store timer overflows as '0's, store edges as '1's.

New: the time between edges defines the data. Wait for an edge, start a timer, wait until the next edge, subtract. If the time between edges is around 4us, store '10'; around 6us, store '100'; around 8us, store '1000'.

(Someone else mentioned doing oversampling: you can do this, but keep in mind you really need to sample at least every 50ns or so. While the bitcell is a full 2us wide, the actual drive pulse is about 250ns, and you'd want at least a few samples of that pulse. 50ns samples generate a huge amount of data over 203ms, the approximate time you need to sample to get a whole track. That's 4 million samples * sample size = a decent amount of memory.)
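The "new" method is easy to sketch in Python. The classification thresholds (5us and 7us, midway between the nominal Amiga DD intervals) are illustrative choices, not values from the project:

```python
# Classify the time between flux edges into MFM runs, per the "new" read
# method: ~4 us -> '10', ~6 us -> '100', ~8 us -> '1000'. The window
# boundaries here are assumed midpoints, not the project's actual values.

def interval_to_mfm(us):
    if us < 5.0:
        return "10"    # nominal 4 us: one zero between ones
    elif us < 7.0:
        return "100"   # nominal 6 us
    else:
        return "1000"  # nominal 8 us

def decode_track(intervals_us):
    return "".join(interval_to_mfm(t) for t in intervals_us)

# Real intervals wobble around the nominal values (drive speed, write
# precompensation), which is exactly why windows rather than exact values:
print(decode_track([4.1, 5.9, 8.2, 3.8]))  # 10100100010
```

The same three-way decision is what a drive's PLL does in hardware; doing it in software just means the windows are yours to tune.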

entire track is read into FRAM, then transmitted to PC at 2mbps via USB.

The project works fine for the most part --- I'm trying to make it more robust now, including adding advanced error correction based on intelligent (based on Hamming distance to known good bytes) brute forcing of bad bytes. I'm also trying to determine how exactly old floppies fail, and see if I can incorporate that into the routine.

Everything is open source.

I'll have to go back and reread this full thread -- but it's nice to see other people working on similar projects!

Thanks
 
Old 05 December 2008, 20:12   #90
RichAplin
Registered User
 
Join Date: Oct 2008
Location: san francisco/usa
Posts: 176
Hiya!
Yes I saw your pages, nice work on that. The UART thing isn't the way to go (I have an FTDI uart-usb 'working' at 2mbps but the PC side overruns when you really pump 2mbits into it), so yes I'm tapping my fingers waiting for my ARM CPU with real USB to come along.

You mention the sampling / data (and hence RAM) issue and it's a good point so let's lay that one to rest properly:

To quote Mr Gates
Nobody will ever need more than 640k of Ram

In fact, 256K would be ample, even with lots of buffering going on, but hey. I think 64K will be fine in a pinch.

Somebody please correct me if I'm wrong, but I believe that:

Oversampling the disk data line and generating a lot of data is not of any great use to anyone. Measuring the length of the '1' data pulse from the disk drive is also not useful. All that you need to do is measure the timings from positive edge to positive edge (in practice the actual signal is inverted, so falling to falling) with as much precision as you possibly can, and zero jitter. Handily, most microcontrollers have capture inputs designed for exactly that.

If you _don't_ have hardware capture then yes, you have to run the CPU in a tizzy reading the line as fast as you can (what SPS has been doing with their Amiga) and the best way to do this in software with a high sampling rate and low jitter is to have a loop that generates a mountain of samples as fast as it possibly can, and then sort through them later pulling out the signal.
All this work and data is still not as good as simply using a hardware capture pin with a timer running at CLK. ;-)

The reason you only care about the +ve edges and not the duration of the 1 bits is because the disk drive is making them up anyway! The drive electronics contains an amplifier, pulse shaper, etc; the '1' output of the drive is just a digital tick indicating there was a flux transition on the disk; the duration of the tick is not interesting as it's just some time period defined by the drive controller chip manufacturer. On most datasheets it's 200-1000ns. As long as you see it happen you're fine.

So, all in all, generating lots of sample data is pointless. The most data we'll ever need about a track is just the exact time between transitions and the timing of the index mark. We're all using around 50ns resolution, so time samples from a double density disk fit easily in a byte (most people have an overflow bit also).
The number of transitions varies on a number of factors but is generally 50-100k for a double density disk.

Beyond that, reading any disk is all about decoding the samples back into real data according to whatever the original format was. I did AmigaDOS in about half an hour, and others like Atari ST (anything WD1772-based) etc are equally straightforward.
IFW has quite a treasure trove of disk format decoders by the sound of it.

Writing a disk is exactly the inverse process of course, but still only the timing edges are important.

Rich.

Last edited by RichAplin; 05 December 2008 at 20:18.
RichAplin is offline  
Old 05 December 2008, 20:51   #91
mr.vince
Cheesy crust
 
mr.vince's Avatar
 
Join Date: Nov 2008
Location: Hawk's Creek
Age: 48
Posts: 1,383
I assume you are right.

The only thing that just came to my mind (Claus explained this to me nearly two decades ago, so please be gentle): weak bits were done - as far as I remember - not by changing the strength of the magnetization (resulting in different reads each time a track was read). Instead, a continuous run of identical signals (not sure if "0" or "1") was written, cheating the drive electronics, which, due to a lack of proper syncing, started giving back erroneous results.

If the above is right (which I am not sure of, because I haven't seen such a disk in years - maybe I should take a look in Beermon using WinUAE... well might try that later this weekend), will this in any way affect your readout by checking the flux transition only?

I say no, because in the end, the transition is the only thing you were using on real silicon, too. So as long as the timing regarding the flux transitions is fine, I see no showstopper.

Comments, anyone?


BTW: We have not sorted out how to write back an unformatted track. Had some thoughts on this...

An unformatted track should be equally balanced regarding the orientation of the magnetic particles on the surface. To regain this state while writing to magnetic tape (on a cassette recorder, on a 2" studio machine, whatever), a very high noise pattern is applied to "shake" the particles and leave them resting in random orientation. I wonder if it will help to turn on a random generator, writing the signal at the highest possible rate we can go. This is well below the normal bitcell width and should therefore, when done several times, erase any content that was there before.

I still have not figured out how to "lower the volume". On an unformatted track, the particles are not only pointing north and south, they have no orientation at all. Therefore, real noise should be much more "silent", which forces the drive to turn up the volume (AGC) while reading. If I am right, the whole process of setting up the correct drive voltage is up to the drive, so I assume there is nothing we can do about the current we are using to induce the magnetic field in the head. I only see a chance to alter it by changing the drive electronics, and I am very sure that modern drives will not let you, simply because this voltage is controlled by some kind of AGC circuit.

I remember that the drives used for writing long tracks were NEC FD-1035 ones, because they did not control the speed of the drive afterwards (and so could be slowed down to make room for a higher density).
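The random-generator idea can be sketched as a stream of write intervals drawn from a random source, all shorter than any legal bitcell. A Python sketch; the 0.5-1.5us interval range is a made-up figure, the real floor would be whatever the head electronics can manage:

```python
# Sketch of the "shake the particles" idea: generate flux-reversal intervals
# from a random source at a rate well below the normal 2 us bitcell width.
# The 0.5-1.5 us range is an assumption, not a measured head limit.
import random

def noise_intervals(total_us, min_us=0.5, max_us=1.5, seed=None):
    rng = random.Random(seed)
    out, t = [], 0.0
    while t < total_us:
        d = rng.uniform(min_us, max_us)  # random interval, shorter than any legal cell
        out.append(d)
        t += d
    return out

ivals = noise_intervals(200_000, seed=1)    # one ~200 ms revolution's worth
print(all(i < 2.0 for i in ivals))          # True: every interval beats the 2 us bitcell
```

Whether that actually reads back like AGC noise rather than a very fast formatted track is exactly the open question - the write current, as noted, is out of our hands.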

Any thoughts on this, too?

Best,
Chris
mr.vince is offline  
Old 05 December 2008, 20:56   #92
RichAplin
Registered User
 
Join Date: Oct 2008
Location: san francisco/usa
Posts: 176
By the way, I got track writing working and, as expected, it's right on the button.
Just test data at the moment, but it writes and reads back perfectly, and you can dial in any bitcell size you want. Nice.

Also just successfully wrote (and verified) a test track with 1.37us bitcells. Heh that's some serious long track action!

So MFM is giving us 1 actual usable data bit per 4us. That's hardly any fun at all.

Amusingly, we can now also invent all sorts of things; another format that appears to work, simply because we have such high precision, is encoding data as increments of cell time; for example you store a '0' as a 1.4us cell, a '1' as a 1.8us cell, etc. If you go up to 7, your longest cell is say 10us. If your track is all '7's, you're storing 200,000/10 = 20,000 cells * 3 bits = 60,000 bits, about 7.5k per track.
Best case, you're storing all '0's: 200,000/1.4 gives about 143,000 cells * 3 bits, around 53k per track.
The average would of course be all over the place between those numbers.
This is, admittedly, an absolutely stupid idea for many reasons, is a pointless exercise, won't read on any normal machine, and "will never work" in practice, but it makes me laugh. Fortunately now I can actually go code it up and see how it works. ;-)

Anyone fancy writing a read/write filing system for a disk with variable capacity sectors?
Clearly a read-only disk is very possible though. I assume that's somewhat along the lines of what psygnosis did.

Oh the fun of it all... I'll put up some pics of my 145,807-bit long track, about 145% the length of a normal one. Not remotely useful, but it made me laugh.
RichAplin is offline  
Old 05 December 2008, 20:56   #93
a500l0ver
 
Posts: n/a
Quote:
Originally Posted by RichAplin View Post
Hiya!
Yes I saw your pages, nice work on that. The UART thing isn't the way to go (I have an FTDI uart-usb 'working' at 2mbps but the PC side overruns when you really pump 2mbits into it)
The FTDI does have a flow control pin, and this can help things. If the PC is backing up, it drops DTR, which you can sense on the uC and handle appropriately.

This worked well at first, and I'm not sure why all of a sudden I'm having troubles. This is why I'm implementing a smarter xfer protocol now.

Quote:

All that you need to do is measure the timings from positive edge to positive edge (in practice the actual signal is inverted, so falling to falling) with as much precision as you possibly can, and zero jitter.
And actually, you don't need much precision. You just need to know approximately how much time is between the pulses. Because of pre-write compensation, fluctuating drive speeds, etc., the time between pulses isn't exactly 4, 6, or 8us. With my new method, I just time the distances between pulses, and then write the real MFM to the memory.

Someone had mentioned using tokens like 00=10, 01=100, 10=1000, and so on to save space. You can do this (I actually tried this too), but it turned out to be a huge PITA. I like the fact that I store and transfer actual raw MFM that is properly byte-sync'd. I sync to $4489 during the transfer process. There is enough BS decoding that has to happen later without adding yet another decoding step.

Quote:

The reason you only care about the +ve edges and not the duration of the 1 bits is because the disk drive is making them up anyway!
Right. You just want to know when a pulse transition occurred. The actual pulse width is not important.

Quote:

So, all in all, generating lots of sample data is pointless. The most data we'll ever need about a track is just this exact time between these transitions and the timing of the index mark.
I've found that the index mark really doesn't matter. Or I should say isn't useful. Because the index mark doesn't tell you anything except that you're passing over a particular part of the disk. Remember the sectors are not written in order from 0-10 starting at the index mark. So all I do is count bits, and I end at 112,440. You need to record at least 104,440 bits, I record an extra few bits. You need (1088 bytes * 12)-1 bytes to deal with the situation where you started reading in the middle of the sector (which is always the case).

Quote:

The number of transitions varies on a number of factors but is generally 50-100k for a double density disk.
I should run some averages on the raw data. I'd say the "10" is the most common of the groupings. Just remember that the Amiga's splitting of odd and even bits across halves of a sector makes it hard to visualize the original data from the raw data.

Because of the way MFM encodes, there is structure to the raw data:

http://www.techtravels.org/amiga/amigablog/?p=62

That might be interesting for you to know. This really has great implications for error detection and correction.

http://www.techtravels.org/amiga/amigablog/?p=191
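That structure is just the two MFM coding rules: a valid stream never has two 1s adjacent, and never more than three 0s in a row. A bit run that breaks either rule is a read error, which is what makes targeted brute-forcing feasible. A minimal Python checker sketching the idea:

```python
# Flag positions in a raw MFM bit string that violate the MFM coding rules:
# no two adjacent 1s, and at most three consecutive 0s. Violations mark
# candidate spots for error correction / brute-force repair.

def mfm_violations(bits):
    bad = []
    run0 = 0
    prev = "0"
    for i, b in enumerate(bits):
        if b == "1":
            if prev == "1":
                bad.append(i)      # adjacent 1s: illegal in MFM
            run0 = 0
        else:
            run0 += 1
            if run0 == 4:
                bad.append(i)      # a fourth consecutive 0: illegal in MFM
        prev = b
    return bad

print(mfm_violations("10100100010"))  # clean run -> []
print(mfm_violations("1011000010"))   # -> [3, 7]
```

(The $4489 sync word passes these bit-level rules too; its specialness is a deliberately wrong clock bit for the underlying data, not an illegal bit pattern.)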

Thanks
 
Old 05 December 2008, 21:07   #94
mr.vince
Cheesy crust
 
mr.vince's Avatar
 
Join Date: Nov 2008
Location: Hawk's Creek
Age: 48
Posts: 1,383
But only as long as we rely on the disk being ok, right? I assume that copy protections can use illegal combinations?
mr.vince is offline  
Old 05 December 2008, 21:17   #95
RichAplin
Registered User
 
Join Date: Oct 2008
Location: san francisco/usa
Posts: 176
Right, if you want to just send decoded MFM to the PC, you can do it readily with a UART and minimal buffering.

I got the Atmel sending decoded MFM to the PC and then decoding into sector data no problem, as the data rate is only 0.5Mbps (the FTDI runs fine at 1Mbps), so no need for RTS/CTS so far. I just used a lookup table to convert cell timings into MFM bits, which gives you the ability to program your own PLL windows. Equally, writing regular MFM tracks of any flavour works fine without flow control, so getting read-write of straight AmigaDOS disks worked pretty much out-of-the-box on an Atmega128 at 16MHz.
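The lookup-table trick is worth spelling out: precompute a 256-entry table mapping an 8-bit cell timing straight to an MFM run, with the window boundaries as parameters. A Python sketch; the boundary tick values (5us and 7us at 50ns/tick) are illustrative, not the firmware's actual windows:

```python
# Build a 256-entry lookup table from 8-bit cell timings (in 50 ns ticks)
# to MFM runs. The window boundaries are the "programmable PLL windows";
# the defaults here (100 = 5 us, 140 = 7 us) are assumed values.

def build_pll_table(short_hi=100, medium_hi=140):
    # Nominal Amiga DD intervals at 50 ns/tick: 4 us = 80, 6 us = 120, 8 us = 160.
    table = []
    for ticks in range(256):
        if ticks < short_hi:
            table.append("10")
        elif ticks < medium_hi:
            table.append("100")
        else:
            table.append("1000")
    return table

table = build_pll_table()
print(table[80], table[120], table[160])  # 10 100 1000
```

On the microcontroller the same table turns the per-interval decision into a single indexed load, and retuning for another format is just rebuilding the table with different boundaries.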

This project is all about copy-protected disks; we're after exact timings of the entire track; we're looking to reproduce tracks exactly as on the original disk, and that includes catering to a whole bag of different protection systems. These use all sorts of tricks like variable bitcell timings and so on.
Hence we want to have a series of cell timings that, when added together, equal exactly the total rotation time from index to index. All the numbers should add up correctly, and our ability to copy disks is only limited by minute speed wobble and the erase head turn-off time on the drive.

This requires somewhere around 2Mbps data rates (less if you use 7N1 instead of 8N1 serial format, which works) and/or a decent amount of RAM to buffer, depending on latency, how much you compress it, etc. Ultimately I figured using a UART for PC comms isn't the best path; we could use a $15 Atmega pirate TV card (see previous posts) if we were OK with an external FTDI as well, but it's very marginal at those speeds and clearly a CPU with real USB is a better bet, is easier to wire up, etc.
The idea is to make a universal floppy controller with one chip and about 15 wires that anyone can make at home (much like the original Cyclone); obviously there's many more complex things you can build that do much the same thing (e.g. big expensive Trace machine ;-) )

Also I propose that our device shoot frickin' laser beams; research indicates no others have dared take their FDC designs to this level.

Yay,
Rich.

Last edited by RichAplin; 05 December 2008 at 22:07.
RichAplin is offline  
Old 05 December 2008, 21:47   #96
philpem
 
Posts: n/a
Quote:
Originally Posted by RichAplin View Post
This basically requires a little over 2Mbps data rates (less if you use 7N1 instead of 8N1 serial format, which works) and/or a decent amount of RAM to buffer, depending on latency. Ultimately I figured using a UART for PC comms isn't the best path; we could use a $15 Atmega pirate TV card (see previous posts) if we were OK with an external FTDI as well, but it's very marginal at those speeds and clearly a CPU with real USB is a better bet.
Which is pretty much why I've been using a PIC18F4550. USB is a host-centric protocol. That basically means the PC does *everything* -- the device can't just say to the PC, "Hello, I've got some data for you", it has to wait for the PC to poll the device.

All RS232 controllers fire an interrupt when data arrives. On top of that, the 16550 UART used in most PCs has a 16-entry FIFO buffer...

But anyway, back to USB for a bit. A USB device can have up to 32 endpoints, 16 inputs and 16 outputs, and each can be one of three different types -- Bulk, Interrupt or Isochronous. You can also have Control endpoints, but I'll ignore these for now.

Bulk endpoints are your bog-standard read/write type endpoints. They're not considered timing-critical, and are used for transferring blocks of data. The PC initiates all transfers. Data is guaranteed to arrive at its destination, even if the PC or device has to send the data a few times to get it through. Bandwidth and latency aren't guaranteed. At all.

Interrupt endpoints are similar, but the PC polls them every so often (usually between 10 and 100 milliseconds, but it isn't guaranteed) to see if there's some data waiting. So the USB device has to buffer everything that arrives between it waving its little "I have data!" flag and the PC spotting the flag and grabbing the data. You typically see these in mice, keyboards and other USB human-interface devices.

Isochronous endpoints transfer a fixed amount of data every X amount of time. These are typically used for audio/video and other streaming media devices -- they're the only transfer type that has a guaranteed bandwidth allocation, but there's no error correction or retrying. The idea is that the user probably won't notice a dropped packet here and there, but they're sure to notice the fact that (say) the video is out of sync with the sound! Better to lose a mangled packet and keep A/V sync than to re-send the packet, lose sync and have to take action to recover A/V synch.
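The buffering consequence of interrupt-style polling is easy to put in numbers: the device has to hold everything that arrives between polls. A Python sketch with illustrative figures (the 2Mbps flux stream is from this thread; the poll intervals are the typical-but-not-guaranteed range mentioned above):

```python
# How much the device must buffer between USB polls, given a continuous
# flux-data stream. Figures are illustrative: 2 Mbit/s stream, poll
# intervals in the 10-100 ms range typical of interrupt endpoints.

def buffer_needed_bytes(stream_mbps=2.0, poll_ms=10.0):
    bits = stream_mbps * 1e6 * (poll_ms / 1e3)  # bits arriving between polls
    return int(bits / 8)

print(buffer_needed_bytes())               # 2500 bytes at a 10 ms poll
print(buffer_needed_bytes(poll_ms=100.0))  # 25000 bytes at the slow end
```

So even the slow end of interrupt polling fits comfortably in the 64K of buffer RAM discussed earlier; the risk is only when polls are delayed beyond that.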

Quote:
Originally Posted by RichAplin View Post
Also I propose that our device shoot frickin' laser beams; research indicates no others have dared take their FDC designs to this level.
I was going to do that, but Doctor Evil beat me to it. Although he did it with sharks, not disc analysers...
 
Old 05 December 2008, 21:50   #97
RichAplin
Registered User
 
Join Date: Oct 2008
Location: san francisco/usa
Posts: 176
Quote:
Originally Posted by mr.vince View Post
...Instead a continuous amount of identical signals (not sure if "0" or "1") was written, cheating on the drive electronics, which due to a lack of proper syncing started giving back erroneous results.

...
BTW: We have not sorted out how to write back an unformatted track. Had some thoughts on this...
Good news on both of those.

Weak bits are as we discussed either AGC noise or the PLL losing lock (what Claus was referring to was the latter). Both are no problem.

Unformatted tracks also appear to be not an issue; I just turned on the write gate and let it spin for a few revolutions and didn't write any bits and the histogram of the track when I read it back looked just like AGC noise. (not scientific)

Rich.
RichAplin is offline  
Old 05 December 2008, 21:55   #98
philpem
 
Posts: n/a
Quote:
Originally Posted by RichAplin View Post
Unformatted tracks also appear to be not an issue; I just turned on the write gate and let it spin for a few revolutions and didn't write any bits and the histogram of the track when I read it back looked just like AGC noise. (not scientific)
That makes sense.

What you've effectively done is write a track with no flux transitions -- the whole track is magnetised the same way. On the read pass, the drive couldn't read anything back, so it kept increasing the gain on the amplifier. After a certain point, all it'll return is random(ish) noise.
 
Old 05 December 2008, 22:04   #99
RichAplin
Registered User
 
Join Date: Oct 2008
Location: san francisco/usa
Posts: 176
Quote:
Originally Posted by philpem View Post
Which is pretty much why I've been using a PIC18F4550. USB is a host-centric protocol. That basically means the PC does *everything* -- the device can't just say to the PC, "Hello, I've got some data for you", it has to wait for the PC to poll the device.

All RS232 controllers fire an interrupt when data arrives. On top of that, the 16550 UART used in most PCs has a 16-entry FIFO buffer...
Right, but the UART inside most PCs' multi-I/O chips doesn't even have register settings for that kind of data rate (I think?) -- and if it did, I'd suspect the level shifters and EMI filtering on the signal inputs would knacker us at that kind of speed.

More to the point, most modern laptops don't have a serial or parallel port at all, so most people would be using a USB-serial converter anyway -- in real terms you've gained nothing. ;-)

Yes, with USB being a polled protocol we still have a fair amount of buffering to do; there is the possibility of buffer overflow, but it's rather remote: with 64KB of RAM we can buffer nearly the whole track (depending on various factors). If a track read fails due to overflow we just re-read, and a track write doesn't even have to start until we have about 80% of the track data. At that point we need to ensure we get the rest of it in less than about 180ms, which I'm expecting to be fine. We can also compress the data of course, since the ARM's fast enough.
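To sanity-check that with made-up numbers (a DD Amiga track is ~500 kbit/s of raw MFM over one ~200 ms revolution; full-speed USB bulk manages on the order of 1 MB/s in practice -- both figures are assumptions for illustration):

```python
# Rough sanity check of the "start writing at 80% buffered" scheme above.
# All figures are back-of-the-envelope assumptions, not measurements.

TRACK_BITS = 500_000 * 0.2          # raw MFM bits in one 200 ms revolution
track_bytes = TRACK_BITS / 8        # ~12,500 bytes per track
remaining = 0.2 * track_bytes       # the 20% not yet buffered at write start
usb_rate = 1_000_000                # assumed practical bytes/s at full speed

time_needed_ms = remaining / usb_rate * 1000
print(round(track_bytes), round(remaining), round(time_needed_ms, 1))
# The last ~2.5 KB arrives in a few ms -- comfortably inside the ~180 ms window.
assert time_needed_ms < 180
```

Even if the effective USB throughput were ten times worse, the margin holds, which is why I'm not worried about write underruns.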

We're only running USB 2.0 'full speed' (12Mbps), but I don't expect that to be an issue given our reasonable buffer size.

The ARM talking USB with 64KB of RAM is roughly equivalent to using an FTDI UART-USB chip with a 64KB buffer -- and FTDIs appear to come with a lot less than that.

It'll be interesting to see whether there are problems with USB overruns, but I'm guessing probably not.
Rich

Last edited by RichAplin; 05 December 2008 at 22:11.
RichAplin is offline  
Old 05 December 2008, 22:09   #100
RichAplin
Registered User
 
Join Date: Oct 2008
Location: san francisco/usa
Posts: 176
Quote:
Originally Posted by philpem View Post
That makes sense.

What you've effectively done is write a track with no flux transitions -- the whole track is magnetised the same way.
To be wildly pedantic, if I recall my physics classes correctly, the magnetic domains are just randomly aligned by the high-frequency erase signal; but the end result is the read head seeing 'no flux'.

My $15 pirate TV cards (ATmega128) arrived today, with a handy assortment of extra crystals (incl. 24MHz).
Thing is, you have to add a USB-UART in most cases (or at least a level shifter if you have a PC with a serial port -- and, as discussed, 1-2Mbps is a bit fast for most regular PCs, I'm guessing).

You could just solder an SD card connector on there and have a disk-to-disk copier or IPF-image playback device. Buuut I still love my lil' ARM chip with its sexy PLL and stuff. When it frickin' gets here, that is.

Last edited by RichAplin; 05 December 2008 at 23:24.
RichAplin is offline  
 

