
Welcome to the Apollo Forum

This forum is for people interested in the APOLLO CPU.
Please read the forum usage manual.
Please visit our Apollo-Discord Server for support.



Performance and Benchmark Results!

The Known AMIGA IDE / FLASH Problem

Gunnar von Boehn
(Apollo Team Member)
Posts 6207
03 Jan 2018 12:17


AMIGA OS has an IDE / FLASH wear-out problem.

The IDE driver that comes with AMIGA OS 3.x and OS 4.x
splits write commands into individual micro writes of 512 Bytes each.

This means that if the file system does a write of e.g. 64 KB,
the IDE driver will split this into 128 write commands of 512 Bytes each.

Modern flash drives do not internally use blocks of 512 Byte size.
Those drives have internally bigger blocks of e.g. 4 KB or 8 KB.
They emulate 512 Byte writes by internally loading a big block, modifying it, and writing it back.

So in theory, every 512 Byte write command results internally in:
  LOAD of 4 KB block / ERASE of block / COMBINE of data / WRITE back of 4 KB block.

It's obvious that AMIGA OS is unnecessarily wearing out flash drives.
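The arithmetic above can be sketched in a few lines of Python. This is an illustrative back-of-the-envelope model, not Amiga code; the 4 KB page size is the example value from the post, and `micro_write_cost` is a hypothetical helper name.

```python
# Illustrative arithmetic (not driver code): cost of a 64 KB write when
# scsi.device splits it into 512 byte micro writes on a 4 KB-page flash drive.
SECTOR = 512          # size of each micro write issued by the IDE driver
PAGE = 4 * 1024       # assumed internal flash page size (post's example)

def micro_write_cost(total_bytes, sector=SECTOR, page=PAGE):
    """Return (ide_commands, bytes_physically_written) for sector-sized writes."""
    commands = total_bytes // sector
    # Each sub-page write forces a read-modify-write of one whole page.
    physical = commands * page
    return commands, physical

commands, physical = micro_write_cost(64 * 1024)
print(commands)                 # 128 write commands for one 64 KB file write
print(physical // (64 * 1024))  # write amplification factor: 8
```

With an 8 KB page size the amplification factor doubles to 16, which is why the chunk size the driver uses matters so much.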


Chris H

Posts 65
03 Jan 2018 12:52


So it does not matter which block size you use to prepare a drive / partition? Do you have a solution to address this problem?


Gunnar von Boehn
(Apollo Team Member)
Posts 6207
03 Jan 2018 12:59


Chris Holzapfel wrote:

So it does not matter which block size you use to prepare a drive / partition?

The block size is honored by the file system - one layer above.
But the IDE driver (the AMIGA OS scsi.device), one layer below, will split the access into myriads of 512 Byte writes.

Chris Holzapfel wrote:

Do you have a solution to address this problem?

Yes, IDE does allow bigger transfers - commands for this are available in the IDE/ATAPI spec - they just need to be used.
The AMIGA scsi.device needs to be updated for this.

The bonus is that the VAMPIRE FastIDE supports a transfer rate of
~26 MB/sec - but this is not reached yet with 512 Byte micro accesses.
So if the driver did not split the access into micro accesses, the speed of the AMIGA IDE could also get much faster.
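A toy model makes the speed argument concrete. The ~26 MB/s figure is from the post; the 100 microsecond per-command overhead is an invented assumption purely for illustration, and `effective_rate` is a hypothetical helper.

```python
# Toy throughput model (illustrative numbers, not measurements): fixed
# per-command overhead dominates when each transfer moves only 512 bytes.
BUS_RATE = 26e6        # ~26 MB/s peak claimed for VAMPIRE FastIDE
CMD_OVERHEAD = 100e-6  # ASSUMED 100 us of driver/interrupt cost per command

def effective_rate(chunk_bytes):
    """Effective MB/s when data is moved in chunk_bytes-sized commands."""
    per_cmd_time = CMD_OVERHEAD + chunk_bytes / BUS_RATE
    return chunk_bytes / per_cmd_time / 1e6

print(round(effective_rate(512), 1))       # tiny chunks: a few MB/s at best
print(round(effective_rate(64 * 1024), 1)) # big chunks approach the bus rate
```

Whatever the real overhead number is, the shape of the curve is the same: issuing one large transfer instead of 128 micro transfers recovers most of the interface's raw speed.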


Chris Dennett

Posts 67
03 Jan 2018 13:17


Why not just change the machine code of the driver, the same way SetPatch works: look at which version has been made resident and then change the block-splitting value? (Or, because the buffer may be preallocated on the stack, would lots of other memory addresses need budging up in memory?)


Roman S.

Posts 149
03 Jan 2018 16:09


Gunnar, did you try to contact Thomas Richter and Olaf Barthel about this issue? They are working on an AmigaOS update; AFAIK it will also cover the scsi.device.
 
[edit1] Does this problem also happen with unofficial scsi.device patches? There are several available:
- SpeedyIDE patch (see BlizKick package) - if you build own Kickstart using Remus, you can apply it easily
- scsi.device 43.45 by Chris Hodges
- scsi.device 43.47 by Cosmos
- scsi.device 46.1 by DonAdan
 
[edit2] Why not focus on the Vampire internal SD card reader and leave the IDE port just for compatibility with older storage? IDE is basically... dead. Morally, ethic'lly, spiritually, physically, positively, absolutely, undeniably and reliably... well, dead :)


Stefan Niestegge

Posts 33
03 Jan 2018 22:42


I think IDE is not that dead... You can get an IDE/SATA adapter and use a standard HDD or SSD. Those have caches which prevent unnecessary wear of the flash memory. They also have wear leveling, so they will last much longer than a CF card.


Mo Retro

Posts 241
03 Jan 2018 23:31


Stefan Niestegge wrote:

I think IDE is not that dead... You can get an IDE/SATA adapter and use a standard HDD or SSD. Those have caches which prevent unnecessary wear of the flash memory. They also have wear leveling, so they will last much longer than a CF card.

CF cards have wear leveling because they have an onboard controller (IDE, SATA or PCI-Express), but SD cards do not. So SD is prone to wear!


Thierry Atheist

Posts 644
04 Jan 2018 00:57


Mo Retro wrote:
CF cards have wear leveling because they have an onboard controller (IDE, SATA or PCI-Express), but SD cards do not. So SD is prone to wear!

My favourite memory card format is Compact Flash.

The thing that I really like about "regular sized" SD cards, though, is the write protect switch. That is a really important thing to have available.


Sean Sk

Posts 488
04 Jan 2018 02:58


John William, your post is irrelevant to this discussion. If you want to discuss emulation vs the real thing, please start another thread. I'm not as excited about the idea of my equipment wearing out as you obviously are, so yes, I'd like to be as immune as possible. I'd like it to last as long as it can - hence why Gunnar raised this issue in the first place. Also, as has been mentioned, CF cards aren't as prone to wearing out as SD cards, as they have built-in wear leveling.


Gunnar von Boehn
(Apollo Team Member)
Posts 6207
04 Jan 2018 03:04


sean sk wrote:

I'd like it to last as long as it can,

I fully agree with you.

If there is something we can do to increase the lifetime of our drives, then we should do it.
And if, as a side effect, we even get a lot more speed - perfect!



Simo Koivukoski
(Apollo Team Member)
Posts 601
04 Jan 2018 04:40


How does this work at the hardware level? Most memory cards report an erase block size of 4 MB (4,194,304 bytes). Writing anything smaller than this always refreshes a full 4 MB on the memory card.

Memory card readers can report their preferred erase block size directly in Linux. Just type:
  cat /sys/block/mmcblk0/device/preferred_erase_size

"Since the IDE interface is a 16 bit bus this requires a 15 bit count
within the IDE software on the master drive. The typical drive uses a 16 bit
counter for fairly obvious reasons. Hence there is the 131071 byte transfer
limit. 130560 bytes or $1fe00 is the next smaller full block size."

IDE drives will corrupt data if you ask for more than 130560 bytes in one transfer.
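The quoted numbers can be sanity-checked in a few lines of Python (this is just arithmetic on the figures given above, not driver code): $1FFFF = 131071 is the stated transfer ceiling, and rounding down to whole 512 byte sectors yields the $1FE00 value.

```python
# Checking the MaxTransfer figures quoted above.
SECTOR = 512
limit = 0x1FFFF                # 131071: the stated byte-count transfer limit
max_transfer = (limit // SECTOR) * SECTOR  # round down to whole sectors

print(max_transfer)            # 130560
print(hex(max_transfer))       # 0x1fe00
print(max_transfer // SECTOR)  # 255 sectors per command
```

So the "next smaller full block size" is 255 sectors of 512 bytes, one sector short of the 256-sector/128 KB boundary.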

This is adjusted from the HDToolBox File System settings, and I think PFS3 AIO overrides this value and always uses $1fe00 no matter what you type in there.

So the file system will divide writes into parts anyway, regardless of scsi.device?


Andy Hearn

Posts 374
04 Jan 2018 11:38


I thought the Amiga would have been easy on flash-based drives. I figured it was "write light".

Only when it was actually writing did it worry me (but not too much). I.e., to write a 4 KB block in 512 B increments would potentially be 8 block read/erase/write-back cycles. I hoped that decent flash cards had enough buffer to stop that happening, even if there was no wear leveling on CF cards to spread the block writes across multiple blocks.

But it's not a journaled file system, so it's not punishing the disk with constant write cycles/indexing etc. So I thought we'd be fine.

I have lost one Kingston 4 GB and one "Speedy" no-name-brand 4 GB CF so far in my CF adventures in Amigaland. Due, I thought, to just generic failure - I didn't suspect any flash death problems!

Given power/space/heat/noise, I'll never run spinning-rust drives again in my miggies. Those are purely relegated to NAS. IDE is enough to plug in a CF card that's had ApolloOS written to it, and that'll do me. I dumped all my optical media onto NAS, so I don't need CD drives.

However, I do still have an old IBM 512 MB 2.5" drive as a "get out of jail free" card.


Gunnar von Boehn
(Apollo Team Member)
Posts 6207
04 Jan 2018 12:02


Simo Koivukoski wrote:

  How does this work at the hardware level? Most memory cards report an erase block size of 4 MB (4,194,304 bytes). Writing anything smaller than this always refreshes a full 4 MB on the memory card.
 

 
Two values are important:
A) Page size: this is the smallest physical write size.
Typical page sizes are 4 KB or 8 KB.

B) Erase block size: pages are always erased together, in erase-block-sized groups.

If your file system does writes smaller than the page size, then the drive will need to emulate the smaller writes by doing them in read-modify-write fashion.

So if you have a page size of e.g. 4 KB and the device driver also does all writes in aligned 4 KB chunks - then all is perfect.

If your device driver writes in chunks smaller than the page size - like AMIGA OS does - then you have write amplification, with the risk of lowered lifetime.
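Both size and alignment matter here. A short Python sketch, assuming the 4 KB page size from the post (`pages_touched` is a hypothetical helper, not a real API), shows how many physical pages a single write forces the drive to read-modify-write:

```python
# Illustrative check: how many flash pages one write touches, as a
# function of chunk size and alignment. Page size is an assumption.
PAGE = 4 * 1024  # assumed internal page size

def pages_touched(offset, length, page=PAGE):
    """Number of physical pages a write of `length` bytes at byte `offset` spans."""
    first = offset // page
    last = (offset + length - 1) // page
    return last - first + 1

print(pages_touched(0, 4096))    # aligned 4 KB write: 1 page, no amplification
print(pages_touched(512, 4096))  # misaligned 4 KB write: 2 pages rewritten
print(pages_touched(0, 512))     # a 512 B write still costs a whole page
```

This is why "page size and aligned page-sized writes" is the perfect case: every write maps onto exactly one physical page program.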

 


Gunnar von Boehn
(Apollo Team Member)
Posts 6207
04 Jan 2018 12:34


Regarding IDE/ATA/ATAPI/SATA

From the "instruction set" and protocol perspective, these devices are the same and compatible.
This means you can use all these devices on the AMIGA with the AMIGA OS IDE driver.

Different devices use different physical connectors.
Many adapters on the market allow using all these devices in your Amiga.

Now let's go back to the topic of the thread.

The topic of the thread is a "software limitation" in the AMIGA OS IDE device driver.
This software limitation can be fixed by a programmer.
With a small software update, people will get
* improved performance
* reduced write amplification = extended device lifetime

No new hardware needs to be bought for this.



Simo Koivukoski
(Apollo Team Member)
Posts 601
05 Jan 2018 07:33


Toni shared some of his experiences on EAB:

I remembered (was reminded by seeing something somewhere..) one important CF-related feature that can make a CF appear slower than normal hard drives. I noticed this when I was working on aca500plus scsi.device optimizations.

For some reason CF cards either don't support PIO multiple transfer mode at all or only up to max 4. (At least I haven't seen larger values)

Multiple transfer mode: number of 512 byte blocks that can be transferred in single pass, without generating interrupt and/or need to wait for data request ready.

Normal A600/A1200/A4000 scsi.device uses multiple transfer up to 16 if drive supports it. (16*512=max 8k can be transferred in single pass without waiting or interrupts)

A low multiple transfer value (or zero) means the device driver needs to Wait() for the interrupt handler to Signal() the task after every 512 bytes (if multiple transfer is zero; or every 2048 bytes if it is 4). This can cause very high overhead, especially with CPUs that have no caches or a very small instruction cache.

I handled this in the aca500plus by disabling IDE interrupts during the transfer, and when the driver was going to wait for an interrupt, I chose to simply poll the drive status register until BSY was inactive and DRQ was active. This increased the transfer rate (if I remember correctly) by almost 1 MB/s in fast 68000 mode.

Interrupt waiting probably also causes a slowdown even when using non-SSD modern drives, because they have huge buffers (vs. typical Amiga file sizes), are internally much faster than the Amiga, and the device waits for the first interrupt anyway (send read command, wait for interrupt, transfer all data).

NOTE: this has nothing to do with max transfer.


EXTERNAL LINK
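The overhead Toni describes can be quantified as a simple count. A Python sketch using the sector counts from his post (`waits_needed` is a hypothetical helper for illustration):

```python
import math

# Illustrative count (not driver code): interrupt waits per transfer as a
# function of the PIO "multiple" sector count from Toni's post.
SECTOR = 512

def waits_needed(total_bytes, multiple):
    """Waits for a transfer, with `multiple` sectors moved per pass
    (multiple=0 means one wait per single 512 byte sector)."""
    per_pass = SECTOR * max(multiple, 1)
    return math.ceil(total_bytes / per_pass)

print(waits_needed(64 * 1024, 0))   # 128 waits: one per 512 byte sector
print(waits_needed(64 * 1024, 4))   # 32 waits: typical CF card limit
print(waits_needed(64 * 1024, 16))  # 8 waits: stock A1200/A4000 scsi.device
```

Each wait is a full Wait()/Signal() round trip through the interrupt handler, which is why a CF card capped at multiple=4 loses so much time on a slow 68000 compared to a drive supporting multiple=16.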


Mister Cartoonmonkey

Posts 57
17 Jan 2018 06:11


It would be so cool if some sort of built-in file system support with this fix arrived in the Vampire A1200.

posts 16