Feasibility Study for Potential fP Software
John Esak
john at valar.com
Sat Sep 16 15:13:26 PDT 2006
<Keith W's post> <--- What he said... :-) :-)
John
> -----Original Message-----
> From: filepro-list-bounces+john=valar.com at lists.celestial.com
> [mailto:filepro-list-bounces+john=valar.com at lists.celestial.com]On
> Behalf Of Keith Weatherhead
> Sent: Saturday, September 16, 2006 12:32 PM
> To: filePro Mailing List
> Subject: Re: Feasibility Study for Potential fP Software
>
>
> Fairlight wrote:
>
> > Hi Everyone,
> >
> > I'm trying to do some market research for a potential
> ***filePro-related***
> > product. It can't get much more filePro-related, so I'm not considering
> > this off-topic. I would appreciate a few moments of anyone's time who's
> > willing to participate.
> [most of the question deleted]
> > I appreciate your time, thanks!
> >
> > Bests,
> >
> > Mark Luljak
> > Fairlight Consulting
>
> Mark,
>
> This would be a great project, however I think you will find it
> mostly non-profitable, at least at the beginning and you will really
> have to decide that you are doing it for the "bragging rights" of
> saying you did it, before really looking for any "true" compensation.
>
> That said, here is my take based on 10 years of mainframe experience
> in a distributed environment that touched five continents on a
> private network of more than a quarter million users, where we
> could control things that cannot be controlled on the INet.
>
> First, you would need to build the transaction logging system.
> To date, as far as I know (through 5.0.14), there is no @delete
> trigger. Having one would help greatly.
>
> Second, there is not enough granularity in the @time return. You
> will need hundredths if not thousandths of a second to resolve
> transaction contention issues and establish the order of
> application. I had started doing what I am laying out below and ran
> into issues. I had requested this time granularity, but I am not
> aware of it ever being implemented.
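>
> As a point of reference -- Python here purely for illustration, and
> every name in it is mine -- a minimal sketch of the kind of
> hhmmss### stamp (### taken as milliseconds) that the log entries
> below will call for:
>
>     from datetime import datetime, timezone
>
>     def log_stamp() -> str:
>         """Return an hhmmss### stamp, where ### is milliseconds."""
>         now = datetime.now(timezone.utc)
>         return now.strftime("%H%M%S") + f"{now.microsecond // 1000:03d}"
>
>     print(log_stamp())   # e.g. 151326042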
>
> You will need to develop a few transaction codes that your system
> will use, and decide which has priority if two carry the exact same
> timestamp. Simple codes such as:
> N = New Record
> C = Change
> D = Delete
>
> ClusterID (Host Code)
> ApplicationID (Allowing more than one Application per Host)
> The User Name (Regardless of platform)
> Station Address (tty, or IP address)
> Date (yyyy/mm/dd)
> Time (hhmmss###)
> Reserved Space for additional controls
> FileName
> Qualifier
> Entire Record's Data w/ 20 byte header
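>
> Packed as fixed-width fields, that control header might look
> something like the rough Python sketch below; every width in it is
> my own assumption, not a spec:
>
>     # Hypothetical fixed-width transaction-log header; the widths
>     # are illustrative assumptions only.
>     FIELDS = [
>         ("txn_code",    1),   # N / C / D
>         ("cluster_id",  4),   # host code
>         ("app_id",      4),   # application within the host
>         ("user_name",  16),   # regardless of platform
>         ("station",    15),   # tty or IP address
>         ("date",        8),   # yyyymmdd
>         ("time",        9),   # hhmmss###
>         ("reserved",    8),   # space for additional controls
>         ("file_name",  16),
>         ("qualifier",   8),
>     ]
>
>     def pack_header(values: dict) -> bytes:
>         """Pack the control fields that precede the record data."""
>         return b"".join(
>             values.get(name, "").encode("ascii").ljust(width)[:width]
>             for name, width in FIELDS
>         )
>
>     hdr = pack_header({"txn_code": "N", "user_name": "keith",
>                        "date": "20060916", "time": "123456789",
>                        "file_name": "FILEA"})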
>
> You will have to impose, in your transaction system, a maximum
> record size equal to the current record limit minus the combined
> size of these control fields (with the exception of the Data
> Record). Technically, you will also want to preserve the current
> 20-byte control area as part of the Data Record, outside of your
> transaction control fields, as you will need your fields to control
> the transaction logs and make it possible to cross platforms without
> loss of either your control information or FP's control information.
>
> Now, from my experience, there are some things that just have to
> work and be usable before really embarking on clustering.
>
> Perfect a Transaction Logging and Recovery System ! ! !
>
> You should be able to do the following.
>
> 1. Put out a test database.
> 2. Back up the test database.
> 3. Process a series of transactions: adds, updates, deletes, etc.
> 4. Backup the updated database.
> 5. Restore the backup from step 2.
> 6. Use the Transaction Log from Step #3 with your Recovery tool.
> 7. Backup this recovered database.
> 8. Choose a method of putting the databases from steps #4 and #7
> side-by-side and doing a binary bit-level comparison (see the
> sketch below).
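>
> For step #8, hashing the two copies is usually enough for the
> bit-level check. A small Python sketch (the paths are placeholders):
>
>     import hashlib
>     from pathlib import Path
>
>     def digest(path: Path) -> str:
>         """SHA-256 over the raw bytes: a bit-level fingerprint."""
>         h = hashlib.sha256()
>         with path.open("rb") as f:
>             for chunk in iter(lambda: f.read(65536), b""):
>                 h.update(chunk)
>         return h.hexdigest()
>
>     # Database from step #4 versus the recovered copy from step #7.
>     if digest(Path("step4/datafile")) == digest(Path("step7/datafile")):
>         print("bit-identical: the logging/recovery pair passes")
>     else:
>         print("DIVERGED: find out why before going any further")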
>
> Now, if this passes, add to the recovery tools the ability to
> extract or remove updates from specific users or processes. Why?
> If an errant process was run, then with a transaction recovery tool
> you should be able to go to the most current backup, restore the
> database, and reapply all transaction logs from the backup to the
> point of desired recovery, EXCEPT for the errant transactions, and
> essentially remove the effects of a bad process.
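>
> Sketched in Python, that selective reapply is just a filter in front
> of the same replay loop (the entry fields follow the header sketch
> above; the apply callback is assumed):
>
>     def selective_replay(entries, apply, skip_users=()):
>         """Reapply logged transactions to a restored backup, dropping
>         the errant users' entries so their effects never come back."""
>         for entry in entries:
>             if entry["user_name"] in skip_users:
>                 continue          # the errant transactions
>             apply(entry)          # same N/C/D apply as a full recovery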
>
> At this point you will have a product that is sellable and worth a
> lot to a large-scale FP site that cannot afford big outages and has
> to meet other data-integrity requirements that FP alone does not
> truly meet today.
>
> ****
>
> Phase II
>
> Now, in a multi-hosted environment, you will need a way to sync
> times and time deltas in order to handle either recoveries OR data
> synchs. This is where GMT offsets and a differential time field may
> need to be added to the transaction logs.
>
> You could issue a transaction sequence ID; it would have to be
> alphanumeric to give a range great enough not to run out within too
> short a time window. If you were going to try to meet an update
> sync of less than, say, 5 minutes, I think I would do the following.
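>
> Before those steps, here is one rough way to mint such an ID --
> base-36 over a UTC millisecond clock plus a per-process counter; the
> layout is my assumption, nothing more:
>
>     import itertools, string, time
>
>     B36 = string.digits + string.ascii_uppercase    # 0-9, A-Z
>     _seq = itertools.count()
>
>     def base36(n: int, width: int) -> str:
>         out = ""
>         while n:
>             n, r = divmod(n, 36)
>             out = B36[r] + out
>         return out.rjust(width, "0")
>
>     def txn_id(host: str) -> str:
>         """host + UTC millis + counter: sortable, and 36^9 ms is far
>         too large a range to exhaust in any realistic time window."""
>         millis = time.time_ns() // 1_000_000
>         return host + base36(millis, 9) + base36(next(_seq) % 36**3, 3)
>
>     print(txn_id("A1"))   # e.g. A1KF8Q2M3X7000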
>
> Have the application program write the transaction log entries with
> two spare fields, to allow another process to flag whether it has
> queued each entry to be sent to the other hosts (local or remote).
>
> I would have a process that traverses the transaction log (keeping
> track of the point that it has currently processed through), looking
> for another entry to have been recorded.
>
> Have it record in its own tracking file the transactionID and which
> hosts it has successfully transferred the transaction to.
>
> Once it has been acknowledged by the other hosts (all that the
> transaction had to be synced with), it should mark in the official
> transaction log that the transaction has been completed (that 2nd
> spare field from above) and move an archive copy of the
> transactionID to a history file. This information would be needed
> in the event of a recovery that had to remove transactions and sync
> the other hosts in the process.
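>
> The traversal and acknowledgement bookkeeping, sketched with
> placeholder interfaces (log, tracking, and send are all assumed
> objects, not real filePro pieces):
>
>     def propagate(log, tracking, hosts, send):
>         """Walk the log from the last point processed, ship each entry
>         to every peer host, and mark it completed once all have acked."""
>         for entry in log.entries_after(tracking.last_seen):
>             for host in hosts:
>                 if host not in tracking.acked_hosts(entry.txn_id):
>                     if send(host, entry):               # True = host acked
>                         tracking.record_ack(entry.txn_id, host)
>             if tracking.acked_hosts(entry.txn_id) == set(hosts):
>                 log.mark_completed(entry.txn_id)   # the 2nd spare field
>                 tracking.archive(entry.txn_id)     # history, for recoveries
>             tracking.last_seen = entry.txn_id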
>
> I would have a process listening on each host for sync records from
> other hosts, and posting sync records (from the current host) to the
> other hosts. You would then only need the database application tool
> to be running on the incoming side, reading transactions from an
> EXTERNAL transaction log and applying them to the local copy of the
> databases.
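>
> The receiving side can stay small. A sketch that assumes entries
> arrive one per line as JSON over TCP (the transport and the port are
> my choices, not a given):
>
>     import json, socketserver
>
>     class SyncHandler(socketserver.StreamRequestHandler):
>         def handle(self):
>             # Append each incoming sync record to the EXTERNAL
>             # transaction log; the local database updater reads and
>             # applies the entries from there.
>             with open("external.log", "a") as ext:
>                 for line in self.rfile:
>                     entry = json.loads(line)    # validate before logging
>                     ext.write(json.dumps(entry) + "\n")
>
>     with socketserver.TCPServer(("", 9099), SyncHandler) as server:
>         server.serve_forever()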
>
> Because a record can be added on one host and then deleted from a
> different host, this is where the time differentials would be very
> tricky. Also, record numbers would be totally worthless, as Record
> 50 in FILEA could be being created, at the exact same instant, by
> two different users, on different hosts, for different purposes. As
> such, every record MUST have a unique-indexed key, period. One
> could not afford to have the database updater (you should now see
> that the recovery tool, created in phase one, is this process) doing
> scans of the database to find a record to update or delete.
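>
> That is why the updater's inner loop should go straight through the
> unique index. A toy sketch, with a dict standing in for that index:
>
>     index = {}   # unique key -> record data (stand-in for a real index)
>
>     def apply_entry(entry):
>         """Apply one logged N/C/D by unique key, never by record number."""
>         key = entry["unique_key"]
>         if entry["txn_code"] in ("N", "C"):
>             index[key] = entry["data"]    # insert or overwrite, no scan
>         elif entry["txn_code"] == "D":
>             index.pop(key, None)          # tolerate an already-gone record
>
>     apply_entry({"txn_code": "N", "unique_key": "INV-1001", "data": "..."})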
>
> Finally, you would need the ability to do the following with the
> transaction logs: from the (implementation or backup) point, for a
> given file and a given (record number or field key), produce a log
> of all pertinent tracking information, such as transaction ID, date,
> time, and userID. If you need specific data, you would go back to
> the appropriate transaction logs, if still in retention (another
> issue entirely), and using the recovery tool, either recover or dump
> (to file or printer) the contents you are trying to examine.
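>
> As a sketch, that tracking report is a straight filter over the
> retained logs (field names as in the header sketch above):
>
>     def tracking_report(entries, file_name, key):
>         """Print all pertinent tracking info for one record, from the
>         chosen (implementation or backup) point forward."""
>         for e in entries:
>             if e["file_name"] == file_name and e["unique_key"] == key:
>                 print(e["txn_id"], e["date"], e["time"], e["user_name"])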
>
> At this point you have abilities that rival major databases in the
> Oracle and DB2 realms, yet with a lightweight character-based
> interface.
>
> I would be happy to continue this discussion, as it is a neat project.
>
> If you could get FP-Tech to add an @delete and higher time
> granularity, your job would be much easier than it will be as of
> today.
>
> Best of Luck !
>
> Keith
> --
> - - - - - - - - - - - - - - - - - - - - - - - - - - - -
> Keith F. Weatherhead keithw at ddltd.com
>
> Discus Data, LTD Voice: (815) 237-8467
> 3465 S Carbon Hill Rd Fax: (815) 237-8641
> Braceville, IL 60407 Nat'l_Pager: (815) 768-8098
> - - - - - - - - - - - - - - - - - - - - - - - - - - - -
>
> _______________________________________________
> Filepro-list mailing list
> Filepro-list at lists.celestial.com
> http://mailman.celestial.com/mailman/listinfo/filepro-list