Thursday, January 27, 2022

Diagnosing the DREM Floppy Emulator to work on the Convergent Technologies AWS!

This is the diagnosis dialogue for getting the DREM MFM Hard Drive and Floppy Drive emulator to emulate a floppy drive (only) on all of our Convergent Technologies AWS Turbo machines.
For every test in the examples below, we start with a floppy disk image containing all 00 bytes.  See the linked example BlankAWSfloppy-80.dsk

All .dsk files can be opened and examined in any hex editor of your choice.  
We like HxD.

First pass means the AWS writes hex byte value 5A across the entire floppy disk, from tracks 0 to 76.  There is no read-after-write, so any errors are not found.  This is basically the AWS's version of "Erase".
So a floppy disk image which has had only a first pass will consist entirely of 5A bytes (5A translates to the ASCII character "Z") from track 0 to 76 (byte 0 to byte 630747 in the file), leaving the original 00-value bytes for tracks 77-79 (bytes 630748 to 655359).
Your old DREM software version performs this part perfectly.   For this example, see the linked file DREM-Old-Floppy-1st-pass.dsk
Using your latest DREM software version for this same procedure, there are clearly errors (or drop-outs) where entire tracks are not written; about 20% of them are missing, leaving the original 00-byte contents of the file in those sections.  For this example, see the linked file DREM-New-Floppy-1st-pass.dsk
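
Rather than eyeballing the whole file in HxD, a few lines of Python can find the drop-outs.  This is a minimal sketch, assuming 8192 bytes per track (the 655360-byte image divided by 80 tracks); the filename is the example file linked above.

# Sketch: scan a first-pass .dsk image for drop-outs, i.e. tracks that
# should be solid 0x5A but still contain the blank image's 0x00 bytes.
TRACK_SIZE = 8192            # assumption: 655360-byte image / 80 tracks
TRACKS_WRITTEN = 77          # first pass covers tracks 0 to 76

with open("DREM-New-Floppy-1st-pass.dsk", "rb") as f:
    image = f.read()

for track in range(TRACKS_WRITTEN):
    data = image[track * TRACK_SIZE:(track + 1) * TRACK_SIZE]
    if data != b"\x5A" * TRACK_SIZE:
        zeros = data.count(0)
        print(f"track {track:2d}: drop-out ({zeros} of {TRACK_SIZE} bytes still 00)")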
Second pass means surface check.  After the first pass is complete, the AWS returns to track 0 and writes a distinctive pattern of hex data to a track, then immediately reads that track back and compares.
If what is read on that track doesn't match what was written, the AWS pauses and tries again 3 or 4 times.  If it still does not match, the AWS produces an error saying either that there is a bad spot on the floppy, or that there is a floppy controller error.  But the cause is the same: it's reading something different than it wrote, which means the write is failing.

On your old DREM software version, this surface check (second pass) works perfectly from tracks 0-42.  Only at track 43 does the AWS begin reporting these errors, on nearly every track.  For an example of this, see the linked file DREM-Old-Floppy-2nd-pass.dsk  Note that there are distinctive, long yet repeating (consistent and full) groups of written data from tracks 0-42 (bytes 0 to 352767); after that the writes become hit-or-miss until the entire process exceeds the error-retry limit, leaving the 5A-value bytes for the rest of the file, as seen in the DREM-Old-Floppy-1st-pass.dsk file.
On your new DREM software version, this surface check (second pass) fails immediately on track 0 instead of track 43, so you see the hit-or-miss surface-check pattern for only the first track or so.  For an example of this, see the linked file DREM-New-Floppy-2nd-pass.dsk
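
The same idea extends to the second-pass images: classify each track by what the surface check left behind.  Again a rough sketch, under the same 8192-bytes-per-track assumption; "surface-check pattern" here just means anything that is neither all 00 nor all 5A.

TRACK_SIZE = 8192  # assumption, as above

def classify(data: bytes) -> str:
    if data == b"\x00" * TRACK_SIZE:
        return "blank (never written)"
    if data == b"\x5A" * TRACK_SIZE:
        return "5A only (first pass only, no surface check)"
    return "surface-check pattern"

with open("DREM-Old-Floppy-2nd-pass.dsk", "rb") as f:
    image = f.read()

for track in range(len(image) // TRACK_SIZE):
    data = image[track * TRACK_SIZE:(track + 1) * TRACK_SIZE]
    print(f"track {track:2d}: {classify(data)}")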
After thinking about this myself, I think the most logical way to tackle the problem is to add all of your latest diagnostics to the old version of the DREM, and then see what write timing from the AWS differs between tracks 42 and 43 on the second pass (surface check).  Then we can address the original problem and hopefully solve it, knowing that your old version has already solved any problem found before that point.

---------------------------------------------------------------------

2022-01-28 Update:

> See attached resulting defaulta.dsk file.  You'll see the first drop-out at
> byte 65536, which is track 7.

This is Track 8 Side 0; the corresponding log is:

06:19:36.15 A: MFM Read  TR:08 SD:0 L1-:07 L2->L1
06:19:36.42 A: MFM Write TR:08 SD:0 L1->L2 SEC:
IDLE / MOTOR OFF
06:19:36.43 A: MFM Seek  TR:08 > 08 SD:0 > 0
06:19:36.45 A: MFM Seek  TR:08 > 08 SD:0 > 1
06:19:36.82 A: MFM Write TR:08 SD:1 L1->L2 SEC:
Fri 28 Jan 2022 06:19:37 ERROR: track_write_nt: Invalid RLL bitstream
FMT-> 14 15 16 01 02 03 04 05 06 07 08 09 10 11 12 13
SEC#:16 FIRST SEC ID:1 INTERLEAVE:1 SIDE SKEW:14
SYNC:12 GAP1:48 GAP2:24 GAP3:49 GAP BYTE:0x4E INDEX AM:YES SYNC:0 GAP4a:81

Other errors are similar. The problem is the same: the AWS does not care about the disk state; it just uses hard-coded time intervals for all operations. By track 8 the accumulated timing error is enough for DREM to miss one write operation.
Basically it is the same problem that is prominent in the long debug output.
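
[Blog note: a toy model of this open-loop drift, with invented numbers, since the actual AWS intervals aren't documented here. A 300 RPM floppy revolves every 200 ms; if the AWS hard-codes its schedule against an interval that is even slightly off, the error accumulates track by track.]

ASSUMED_MS = 200.00  # hypothetical interval hard-coded by the AWS
ACTUAL_MS = 200.05   # hypothetical actual time per revolution

drift = 0.0
for track in range(9):
    drift += ACTUAL_MS - ASSUMED_MS
    print(f"after track {track}: accumulated drift {drift:.2f} ms")
# Once the drift exceeds the write-window tolerance, a write is missed.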

In earlier experiments we established that the AWS disregards the ready signal, so the remaining control signals are index and read data. I'll check it against the old version and let you know my findings this afternoon.

---------------------------------------------------------------------

2022-02-01 Update:

π™…π™žπ™’ π˜Ώπ™§π™šπ™¬ — Yesterday at 10:57 PM
Ok... a couple of things... your assumption on the bitcell timing being longer on the upper tracks is wrong.  The bitcells are shorter.  In fact, it decodes perfectly using the Amiga's 880K timing.
...
Starting at track 45, head 1 is where actual data is being stored.  You can see the flux is something 'real' instead of generic blank sectors. 

π™…π™žπ™’ π˜Ώπ™§π™šπ™¬ — Yesterday at 10:59 PM
That alone can throw off HxC unless you widen the PLL window.

Forgotten Machines — Yesterday at 10:59 PM
How do I widen the PLL window?

π™…π™žπ™’ π˜Ώπ™§π™šπ™¬ — Yesterday at 11:10 PM
πŸ™‚
Still some oddities, but I am pretty sure with some real tweaking you can get it perfect.  I just changed the PLL window to 25% and set the PLL max error to 1000ns

---------------------------------------------------------

From: Forgotten Machines
Date: Tue, Feb 1, 2022 at 7:22 AM

Oleksandr, 
....
Most importantly, Jim Drew, creator of the SuperCard Pro (and author of the .SCP standard low-level image format) did an analysis of the .scp image I captured called CT-AWS Real 3.5' Floppy Format Full Success.scp

He said:
π™…π™žπ™’ π˜Ώπ™§π™šπ™¬ — Yesterday at 10:57 PM
Ok... a couple of things... your assumption on the bitcell timing being longer on the upper tracks is wrong.  The bitcells are shorter.  In fact, it decodes perfectly using the Amiga's 880K timing.

Well, Oleksandr, it occurs to me that you've designed the DREM to accommodate all Amiga formats already...so is this really just as simple as adapting the TIMING from Amiga's 880K disk format, so that the Convergent AWS will work as flawlessly on a floppy from tracks 43-76 as it does from tracks 0-42?  I do sincerely hope it could actually be that simple.

Further, Jim said:  "I just changed the PLL window to 25% and set the PLL max error to 1000ns.  [That cleared up pretty much all timing errors found by HxC]"


From: Oleksandr Kapitanenko via RT <info@drem.info>
Date: Tue, Feb 1, 2022 at 11:38 AM

AJ,

Thank you, no need to read it on the PC now. 1000 ns = 1 µs.
DREM's PLL has been designed to handle up to 0.25T (theoretical) and about 0.21T (practical), where T is the bit cell length, which is 2000 ns for 250k MFM.
0.21T is about 420 ns, which in turn is much better than the typical 250 ns of most controllers. In fact, I know of only one machine, the Russian Agat computer, which uses 2-stage precompensation of 250 and 370 ns.

Now I need to think about how to handle this new and rare case. Direct changes to the PLL are not favorable. There is some logic for M2FM encoding; it is a mixed FM+MFM mode with some half-bit-cell transitions when switching encoding.
I have no means to verify it in my lab :(

Also, at the moment I do not have an understanding of how to implement a 1000 ns window on the PLL. The PLL works by centering pulses in the detection window, which is 2000 ns for 250k MFM. A 1000 ns phase shift brings the pulse right to the edge of the window, making it impossible to decide which window the pulse actually belongs to...
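
[Blog note: a quick arithmetic sketch to make these numbers concrete, using only the figures Oleksandr gives above.]

T = 2000  # ns: bit cell / detection window for 250k MFM

print(f"theoretical PLL tolerance 0.25T = {0.25 * T:.0f} ns")  # 500 ns
print(f"practical PLL tolerance   0.21T = {0.21 * T:.0f} ns")  # 420 ns
print("typical controller precomp shift = 250 ns")

# A 1000 ns shift is exactly T/2: the pulse lands on the boundary
# between two adjacent 2000 ns windows, so the PLL cannot decide
# which window it belongs to.
print(f"1000 ns == T/2: {1000 == T // 2}")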

Anyway, I'm taking timeout to research the problem. I'll be back soon.

--
Kind regards,
Oleksandr Kapitanenko     PortaOne, Inc.
www.portaone.com



From: Jim Drew
On Tue, Feb 1, 2022 at 3:23 PM 

I didn't try other values with the HxC controls.  It may be that 500ns works.  I am not sure how far off the bitcell timing is for the AWS format.  I know that it decodes perfectly when I use the Amiga's timing with my SuperCard Pro editor/analyzer software.

Converting flux to MFM is very simple if you break it down to the basics.  You have 3 different bitcell ranges of 4us/6us/8us for the ISO specification.  The Amiga (and apparently the AWS format) uses a slightly different timing so more data can fit (at least that is why the Amiga was done this way).  So, the Amiga uses roughly a 3.8us/5.8us/7.8us bitcell range to get 880K on a floppy as opposed to the normal 720K.  AWS (at least with tracks 43 to 77) is similar.  I did not look at the lower tracks.

You just need to associate a bitcell time with one of the 3 different standard times in order to convert that to a bit-packed stream.  A very simple method is to just go +/- 1us above and below the expected bitcell times, i.e.:

Anything 3.5us to 4.5us is considered 4us
Anything 5.5us to 6.5us is considered 6us
Anything 7.5us to 8.5us is considered 8us

You can extend this range to nearly reach the edge of each boundary if you want to take it further. I have found over the last decade that this method works well for recovering data where magnetic flux has shifted due to the span of time.

Once you have the bitcell in one of the 3 standard times, you can then associate that with a bit-packed stream value of 10, 100, 1000 (4us/6us/8us respectively).
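
[Blog note: Jim's recipe, sketched in a few lines of Python. The windows and bit patterns are the ones he lists; the function name and example intervals are ours.]

BUCKETS = [
    (3.5, 4.5, "10"),    # ~4us bitcell
    (5.5, 6.5, "100"),   # ~6us bitcell
    (7.5, 8.5, "1000"),  # ~8us bitcell
]

def flux_to_mfm(intervals_us):
    """Classify flux intervals into standard bitcells, emit bit-packed MFM."""
    bits = []
    for t in intervals_us:
        for low, high, pattern in BUCKETS:
            if low <= t <= high:
                bits.append(pattern)
                break
        else:
            bits.append("?")  # outside every window: flag it, don't guess
    return "".join(bits)

# Amiga-like timing (3.8/5.8/7.8us) still lands inside the windows:
print(flux_to_mfm([3.8, 5.8, 3.9, 7.8]))  # -> 10100101000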

Jim Drew, CBMSTUFF.COM (SuperCard Pro creator)



From: Forgotten Machines
Date: Tue, Feb 1, 2022 at 3:13 PM

Thank you, Oleksandr, much appreciated! I fully understand.

Meanwhile, I have only one question:
But indeed, your DREM does support Amiga 880k format timing, correct?  Perhaps you could briefly explain how you see the difference between the Amiga 880k floppy timing and the AWS floppy timing?

Thank you.

Best always,
AJ



From: Oleksandr Kapitanenko via RT <info@drem.info>
Date: Tue, Feb 1, 2022 at 3:56 PM

AJ,
 Amiga - AWS - whatever = 250k MFM
The only problem is AWS precomp. My guess is you need a nice OLD real FDD in order to
have reliable functionality. The FDD will introduce a -200ns shift, and then 800ns will be good for reliable PLL detection in a window less than 1000ns.

Modern 3"5 FDD's will have virtually no phase shift, this may cause intermittent errors on AWS reading.

The DREM is receiving the signal directly from the AWS, so this is presenting an unusual challenge.

--
Kind regards,
Oleksandr Kapitanenko     PortaOne, Inc.
www.portaone.com



From: Jim Drew
Date: Tue, Feb 1, 2022 at 5:22 PM

AJ,
>   Amiga - AWS - whatever = 250k MFM

Not really.  The data rate is set by the bitcell timing.  The Amiga does not use 250K MFM.


> The only problem is AWS precomp. My guess is you need a nice OLD real FDD in order to
> have reliable functionality. The FDD will introduce a -200ns shift, and then 800ns will be good for reliable PLL detection in a window less than 1000ns.

Precomp is ONLY used for writing, and let me let you in on a little secret - it's not necessary.
The Amiga has write precomp on tracks 40 to 79, and if you turn it off (which nearly every disk copier
does) there is zero difference.  SuperCard Pro doesn't support precomp for writing either.

> Modern 3"5 FDD's will have virtually no phase shift, this may cause intermittent errors on AWS reading.

There is absolutely no difference between the signals on a 5.25" and 3.5" disk drive in terms of
flux reversals to output pulse.


> The DREM is receiving the signal directly from the AWS, so this is presenting an unusual challenge.

I am not sure why there is a problem.  If this device supports Amigas it has to work with the AWS.

-Jim




