English Amiga Board

English Amiga Board (https://eab.abime.net/index.php)
-   support.Hardware (https://eab.abime.net/forumdisplay.php?f=20)
-   -   How to clone and modify an RDB onto a different drive? (https://eab.abime.net/showthread.php?t=73576)

fgh 04 May 2014 02:27

How to clone and modify an RDB onto a different drive?
 
Good evening, Wizards of amiga OS!

I have an 8GB SD card that I want to clone onto a 16GB card.
If I'm not mistaken, writing the backup image of the 8GB card onto the first half of the 16GB card should work, except the RDB drive size would be 8GB.
Is there a way to change the RDB drive size on the new card without losing the partitions?
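(For anyone doing this from a PC rather than on the Amiga itself: the image round-trip could be sketched roughly like this with dd on a Linux box. The device names are hypothetical placeholders, so verify yours with lsblk before running anything.)

```shell
# Sketch only -- device names below are hypothetical; check with lsblk first!
clone_card() {
    # $1 = source (device or image file), $2 = destination
    dd if="$1" of="$2" bs=4M conv=fsync 2>/dev/null
}
# Dump the 8GB card to an image, then write it to the start of the 16GB card
# (the RDB on the new card will still claim only 8GB until it is fixed):
# clone_card /dev/sdX amiga8gb.img
# clone_card amiga8gb.img /dev/sdY
```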

If there is no clever way to do it, would it work if I started over with a new RDB with same cylinder size and filesystem, and simply added the partitions at the same positions?

(I know I could avoid all this by copying the files to the new card instead of writing back the image, but I'd be happy not to, if possible.)

By the way, is there a way to increase a partition's size without losing its data?

Thanks!

Skylight 04 May 2014 02:35

Why do you want to clone a CF card onto a bigger one?

It's much easier and even faster to prepare the new CF card with HDToolBox and then copy the contents from the old CF card to the new one (with DirOpus)!

EDIT: I'm doing this in WinUAE with a CF card reader.

fgh 04 May 2014 02:46

Thanks for your input Skylight. I've done it by copying files before.
I disagree though - if there is a simple way to edit the new RDB this would be 10x quicker.
I'm also simply curious if it is possible.

Skylight 04 May 2014 02:54

When I'm doing it in WinUAE it's fast enough, even with several GB of data.

And I can't remember a tool that would let you change the drive geometry or even the partition data in the RDB.

And doing it with a disk editor would be too difficult and much slower than just copying everything.

fgh 04 May 2014 03:04

I agree a disk editor is too difficult.
Does anyone else know of a tool that lets you edit the RDB in such a way?

Jope 04 May 2014 07:13

Try just increasing the last cylinder value with hdtoolbox in the drive definitions until you reach the new card's size. Then make a new partition after the first one. Don't touch the other values. :-)
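The arithmetic for the new cylinder count is simple once you know the cylinder size. A sketch with made-up numbers (the geometry and the card capacity here are assumptions for illustration, not values from this thread):

```shell
# Assumed geometry: 16 heads x 63 sectors of 512-byte blocks per cylinder.
# The card capacity is a hypothetical figure for a "16GB" card.
# Note: if cylinders are numbered from 0, the last cylinder is the count - 1.
awk 'BEGIN {
    cyl_bytes  = 16 * 63 * 512        # bytes per cylinder = 516096
    card_bytes = 15931539456          # hypothetical card capacity in bytes
    printf "cylinders: %d\n", int(card_bytes / cyl_bytes)
}'
```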

delshay 04 May 2014 07:34

I did damage the RDB on one of my cards and it completely stopped working (a CF card), but because I had a second identical card, I just used the hard drive set-up software on the working card. Before I pressed save, I quickly removed the working card, inserted the faulty card and pressed save.

Card is still working to this very day.

NOTE: this works on my SCSI-PCMCIA device, but I'm not sure if it will work on other similar devices.

Vot 04 May 2014 08:26

I'm tempted to write something. I just put a FastATA in my machine. Of course it won't boot with the old CF card setup.

I was tempted to see if I could make a tool to modify the RDB for the new geometry used by the FastATA.


Although as stated.. Just as easy to repartition and copy file system contents back...

robinsonb5 04 May 2014 10:42

Block-by-block copying onto flash devices isn't a great idea because it can mess up the wear-levelling (long story short: the drive's much happier with a quarter of the blocks written to 10 times than all the blocks written to once.)

Since you're only talking about block-by-block copying to half the device it won't matter in your case, but in general it's better to do file-by-file copying than block-copying.

Vot 04 May 2014 12:14

As mentioned in another post, you can use ddpt, which does write sparing, so it only writes differences. Of course, the first time you write it will probably write the whole card.

http://sg.danny.cz/sg/ddpt.html#mozTocId600965

Look at the write sparing section
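The idea behind write sparing is simple: read each destination block and skip the write when it already matches the source. A rough illustration in shell (the ddpt invocation in the comment is my reading of its docs, and the device name is hypothetical; treat both as assumptions):

```shell
# What "write sparing" does, in miniature: compare each block of source and
# destination, and rewrite only the blocks that differ. ddpt does this in one
# pass; as far as I can tell from its docs the flag is oflag=sparing, e.g.:
#   ddpt if=amiga8gb.img of=/dev/sdY bs=512 oflag=sparing
# Below, the same idea spelled out slowly with dd + cmp:
spare_copy() {
    # $1 = source image, $2 = destination, $3 = block size in bytes
    size=$(wc -c < "$1")
    blocks=$(( (size + $3 - 1) / $3 ))
    i=0
    while [ "$i" -lt "$blocks" ]; do
        dd if="$1" bs="$3" skip="$i" count=1 2>/dev/null > /tmp/sp_src_blk
        dd if="$2" bs="$3" skip="$i" count=1 2>/dev/null > /tmp/sp_dst_blk
        if ! cmp -s /tmp/sp_src_blk /tmp/sp_dst_blk; then
            # Block differs: rewrite just this one block in place
            dd if="$1" of="$2" bs="$3" skip="$i" seek="$i" count=1 \
               conv=notrunc 2>/dev/null
        fi
        i=$((i + 1))
    done
}
```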

fgh 04 May 2014 15:57

Thanks guys!

Jope: If I edit the drive definitions in HDToolBox, I have to re-enter the partitions.
As far as I understand that would work though. It would be similar to setting up the empty 16GB RDB with the same cylinder size, filesystem and partitions, saving the RDB to a file, writing the 8GB image and then restoring the 16GB RDB.
That's still a bit of work though. I'm a little surprised there is no software to edit the active RDB, considering how much hacking, modifying and tinkering there has always been with the Amiga.

Delshay: Good job! Guess you could also save the RDB of the other card to a file and write it to the damaged card.

Vot: Let me know if you write something :) If there were lots of writes to the same card, write sparing would indeed be nice.

Robinson: Thanks. I'm aware of the issues with wear levelling. I always quick format, and keep an area at the end of the card unpartitioned to give the card's wear levelling unused blocks to play with.

fgh 01 November 2016 12:31

Answering my old question here, but for future reference: Thomas' fixhddsize does exactly what I was looking for, expanding the RDB geometry to the full size.
It will not expand any partitions, just the size available in HDToolBox, so you have to add a partition afterwards yourself.

(The program was created with a different problem in mind, where adding a large IDE harddrive on an unpatched system would lead HDToolBox to only detect 7.87 GiB (the CHS maximum).)
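For the curious, the 7.87 GiB figure falls out of the old ATA CHS addressing limit (16383 cylinders x 16 heads x 63 sectors of 512 bytes):

```shell
# Maximum capacity addressable via classic ATA CHS
awk 'BEGIN {
    bytes = 16383 * 16 * 63 * 512
    printf "%.0f bytes = %.2f GiB\n", bytes, bytes / (1024 * 1024 * 1024)
}'
```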


When upgrading to a bigger CF/SD card, the size usually doubles, so block-by-block writing shouldn't have that much of an effect on lifespan.
If you want to maximize its lifespan, copy files instead though.

demolition 01 November 2016 13:49

When copying a full card from one to another file-by-file, it could be a good idea to first create an empty image the size of the destination and mount that image in WinUAE to copy all the files to.

Then you can use dd/ddpt/WinImage to write that image to the final card. It will minimize the number of writes being done to the card. I guess the file system table will be updated quite often when writing lots of small files which increases the amount of data being written quite significantly. Lots of buffers should help in that regard though.

idrougge 02 November 2016 01:14

Writing an image means writing the maximum number of possible writes to the card.

demolition 02 November 2016 06:25

Quote:

Originally Posted by idrougge (Post 1120076)
Writing an image means writing the maximum number of possible writes to the card.

Filling up a card by copying files one by one will do a lot more writes, since the file system will be updated after every single file. Caching will reduce this a little, but in such a case I'd much rather work with an image and then write the image back to the card using ddpt. Whatever bits need to be changed on the card are then only written once, so this is the optimal way of updating a card.

robinsonb5 02 November 2016 13:36

Quote:

Originally Posted by demolition (Post 1120083)
Whatever bits need to be changed on the card are then only written once, so this is the optimal way of updating a card.

That's certainly the way to minimise writes to the flash device, and the best way to treat a card that doesn't perform any kind of wear levelling. If the card *does* support wear-levelling, however, then it's not such a good strategy.

A brand new flash device that supports wear levelling has all blocks marked as unused. As soon as a block is written to, it's off-limits for wear-levelling until it's written again, at which point it's swapped for the "freshest" block from the unused block pool.

If you do a file-by-file copy of a filesystem that's only 25% full, you will have written once to, say, 24% of the blocks, many times to 1% of the blocks (and wear-levelling will have spread those out), and 75% are still available for future wear levelling.

If you do a block-by-block copy of the same filesystem, you'll have written to 100% of the blocks only once, and have none at all (except for however many spare blocks the device has) available for future wear levelling.

You may well have done fewer writes to the device in the block-by-block scenario, but the device is in a healthier state after the file-by-file copy.

(Solving this problem is why SSDs have the TRIM command, by the way.)

demolition 02 November 2016 13:39

Quote:

Originally Posted by robinsonb5 (Post 1120125)
If you do a file-by-file copy of a filesystem that's only 25% full, you will have written once to, say, 24% of the blocks, many times to 1% of the blocks (and wear-levelling will have spread those out), and 75% are still available for future wear levelling.

If you do a block-by-block copy of the same filesystem, you'll have written to 100% of the blocks only once, and have none at all (except for however many spare blocks the device has) available for future wear levelling.

This is why one should use ddpt instead of dd.

robinsonb5 02 November 2016 14:03

Quote:

Originally Posted by demolition (Post 1120126)
This is why one should use ddpt instead of dd.

Agreed - ddpt with write sparing is the optimal method.

