That's a good trick! Thanks!<br>Tyler<br><br><div class="gmail_quote">On Tue, Feb 2, 2010 at 7:36 AM, Richard Kreiss <span dir="ltr"><<a href="mailto:rkreiss@verizon.net">rkreiss@verizon.net</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">Tyler,<br>
<br>
My procedure for getting to an available record is simple.<br>
<br>
<br>
@once ◄ If: '******************************************************<br>
Then: '* get file name to import<br>
144 ------- - - - - - - - - - - - - - - - -<br>
◄ If: rn = ""<br>
Then: rn(8,.0)="1"<br>
145 ------- - - - - - - - - - - - - - - - -<br>
loop_rn◄ If:<br>
Then: lookup - r=rn -npw<br>
146 ------- - - - - - - - - - - - - - - - -<br>
◄ If: locked(-)<br>
Then: rn=rn+"1";GOTO loop_rn<br>
147 ------- - - - - - - - - - - - - - - - -<br>
◄ If:<br>
Then: END<br>
<br>
This process will walk through the file and get the first available unlocked record. Note that the file I use for this process contains one field and all records are always blank. The only way a record in this file would be locked is when someone else is running a process that uses this file.<br>
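For readers outside filePro, the same walk-until-unlocked idea can be sketched in Python using advisory byte-range locks. This is only an illustration, not filePro code: the scratch-file name, record size, and record count below are invented, and filePro's own record locking works differently under the hood.

```python
# Sketch: scan fixed-size "records" in a scratch file until one can be
# locked, mirroring the rn=rn+"1"; GOTO loop_rn walk above.
import fcntl
import os

RECORD_SIZE = 64          # hypothetical fixed record length
SCRATCH = "scratch.dat"   # hypothetical scratch file of always-blank records

def grab_free_record(path=SCRATCH, nrecords=180):
    fd = os.open(path, os.O_RDWR | os.O_CREAT)
    os.ftruncate(fd, RECORD_SIZE * nrecords)
    for rn in range(nrecords):
        try:
            # try a non-blocking exclusive lock on this record's byte range
            fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB,
                        RECORD_SIZE, RECORD_SIZE * rn)
            return fd, rn   # caller holds the lock until fd is closed
        except OSError:
            continue        # record locked by another process; try the next
    os.close(fd)
    raise RuntimeError("no free record available")
```

As in the filePro version, a record here is only ever "busy" while some other process holds it, so contention resolves itself by walking to the next record.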
<br>
I will use this file for report/clerk applications where clerk is used to get selection data based on the report being run.<br>
<br>
Since @once runs before any record is selected, you could run your processing from @once and not worry about locking a record. I do this for some imports, as all that is happening is data being read in and then posted to a new file.<br>
<br>
<br>
Richard Kreiss<br>
GCC Consulting<br>
<a href="mailto:rkreiss@gccconsulting.net">rkreiss@gccconsulting.net</a><br>
<div class="im"><br>
<br>
<br>
<br>
> -----Original Message-----<br>
> From: filepro-list-bounces+rkreiss=<a href="http://verizon.net" target="_blank">verizon.net</a>@<a href="http://lists.celestial.com" target="_blank">lists.celestial.com</a> [mailto:<a href="mailto:filepro-">filepro-</a><br>
> list-bounces+rkreiss=<a href="http://verizon.net" target="_blank">verizon.net</a>@<a href="http://lists.celestial.com" target="_blank">lists.celestial.com</a>] On Behalf Of John Esak<br>
> Sent: Tuesday, February 02, 2010 8:38 AM<br>
> To: 'Tyler Style'<br>
> Cc: <a href="mailto:filepro-list@lists.celestial.com">filepro-list@lists.celestial.com</a><br>
</div><div><div></div><div class="h5">> Subject: RE: rreport gagging on lockfile<br>
><br>
> Well, let's see. If you were calling the clerk or report from SYSTEM, then<br>
> yes the lockfile would obstruct things first... But if you are already in<br>
> clerk, it won't, and if you are in report (using the -u option) it won't.<br>
> That's why I was looking for the exact way you launch this stuff. Maybe you<br>
> put it in the last note... I'm not sure. But in any case, assuming the<br>
> debugger does come up... It will be on the automatic table if there is<br>
> one... And the -z table otherwise. If you put the debug on command at the<br>
> top of the prc which does the system call, then you should be able to step a<br>
> line at a time to the system call... At which point you can check the<br>
> lockfile (before and after) the system command and see what happens.<br>
><br>
> Incidentally, how do you do a lookup to a "random" record to keep it from<br>
> "hogging" the file. Is it possible you are getting the record you are<br>
> standing on... Which actually is possible to get in "write" mode because<br>
> filePro knows you are the user doing the lookup. This might be causing the<br>
> hassle. I'm curious if you actually use RAND or what? The way I normally<br>
> do what I think you're doing is do a lookup free, write that record... Grab<br>
> the record number, then do a lookup - to that record number. When the<br>
> process is done, delete that record. It has always been the cleanest way<br>
> for me.<br>
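As a loose illustration of the pattern John describes (create a brand-new scratch record so nothing can collide, hold it for the duration of the job, then delete it), here is a Python sketch. The class name is invented and this is not filePro code; a freshly created temp file stands in for the "lookup free" record.

```python
# Sketch: "lookup free, grab the record number, hold it, delete it when done"
import os
import tempfile

class ScratchRecord:
    def __enter__(self):
        # "lookup free": creating a brand-new file can never contend
        # with another process for an existing record
        self.fd, self.path = tempfile.mkstemp(prefix="job-")
        return self.path      # stands in for the new record's number
    def __exit__(self, *exc):
        os.close(self.fd)
        os.remove(self.path)  # "delete that record" once the process is done

with ScratchRecord() as rec:
    pass  # do the real work while holding the scratch record
```

The appeal of this approach, as in the email, is that each job owns a record no other job ever touches, so there is nothing to walk past or retry.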
><br>
> Also, let me re-trace back to the previous paragraph. You mention having<br>
> the file open with report on one screen and then clerk locks on another. The<br>
> report on the first screen does have the -u on the command line, right?<br>
> Otherwise, the clerk should rightfully be locked out.<br>
><br>
> John<br>
><br>
><br>
><br>
> > -----Original Message-----<br>
> > From: Tyler Style [mailto:<a href="mailto:tyler.style@gmail.com">tyler.style@gmail.com</a>]<br>
> > Sent: Tuesday, February 02, 2010 8:17 AM<br>
> > To: <a href="mailto:john@valar.com">john@valar.com</a><br>
> > Cc: <a href="mailto:filepro-list@lists.celestial.com">filepro-list@lists.celestial.com</a><br>
> > Subject: Re: rreport gagging on lockfile<br>
> ><br>
> > Yup, use the debugger all the time. But not sure exactly<br>
> > what I would<br>
> > be looking for while debugging? As far as I can tell, the<br>
> > error blocks<br>
> > all access to rclerk and rreport, so the debugger would likely never<br>
> > even start. I'll be giving it a whirl, tho.<br>
> ><br>
> > John Esak wrote:<br>
> > > Only thing I can suggest at this point is run the process with the<br>
> > > interactive debugger. Completely clear the lockfile before<br>
> > starting. (I mean<br>
> > > erase it). Then step through each critical point until you<br>
> > can see exactly<br>
> > > what is causing the hang.<br>
> > ><br>
> > > Are you familiar with the debugger?<br>
> > ><br>
> > > John<br>
> > ><br>
> > ><br>
> > ><br>
> > >> -----Original Message-----<br>
> > >> From: Tyler Style [mailto:<a href="mailto:tyler.style@gmail.com">tyler.style@gmail.com</a>]<br>
> > >> Sent: Monday, February 01, 2010 11:23 PM<br>
> > >> To: <a href="mailto:john@valar.com">john@valar.com</a><br>
> > >> Cc: <a href="mailto:filepro-list@lists.celestial.com">filepro-list@lists.celestial.com</a><br>
> > >> Subject: Re: rreport gagging on lockfile<br>
> > >><br>
> > >><br>
> > >><br>
> > >> John Esak wrote:<br>
> > >> > 1. Okay, be more specific. You say you are using the lockinfo<br>
> > >> script. So, you can see exactly which record is being locked<br>
> > >> by exactly<br>
> > >> which binary. What does it show? Record 1 by dclerk, or<br>
> > record 1 by<br>
> > >> dreport.... exactly what does lockinfo show.... by any<br>
> > chance are you<br>
> > >> locking record 0? Not something you could do specifically,<br>
> > >> but filePro<br>
> > >> does this from time to time.<br>
> > >> While I have the error message from rreport on one terminal<br>
> > >> and the same<br>
> > >> error message from rclerk on another, lockinfo will produce<br>
> > >> "There are<br>
> > >> NO locks on the "log_operations" key file."<br>
> > >><br>
> > >> While every call to rreport starts off with -sr 1, there is a<br>
> > >> lookup -<br>
> > >> in the processing that moves it to a random record (between 1<br>
> > >> and 180)<br>
> > >> as the first command to keep it from hogging the file.<br>
> > Records 1-180<br>
> > >> all exist.<br>
> > >><br>
> > >> > 2. It's always easier when people say this has worked for<br>
> > >> years. So,<br>
> > >> it must be something new added to the soup. Have you removed<br>
> > >> an index,<br>
> > >> grown a field and not changed the size an index pointing<br>
> > to it. Gone<br>
> > >> past some imposed time barrier? Used up too many<br>
> > licenses? Exceeded<br>
> > >> some quota in some parameter? Added groups or changed<br>
> > >> permissions? Run<br>
> > >> a fixmog (fix permissions)? Has a binary failed like dclerk<br>
> > >> and you've<br>
> > >> replaced it with a different copy? Has the -u flag any<br>
> > >> impact on your<br>
> > >> scenario? I'm assuming a lot because you haven't<br>
> > >> specifically shown how<br>
> > >> you are doing things? Is this happening from a system call?<br>
> > >><br>
> > >> Absolutely nothing has been done to change the file or the<br>
> > >> processing for a<br>
> > >> couple years. The only thing that has happened to the file<br>
> > >> is that it<br>
> > >> has grown larger over time.<br>
> > >> There is definitely no time limit imposed in the processing;<br>
> > >> I don't see<br>
> > >> how that would produce a lock issue, anyway?<br>
> > >> We have way more licenses than we can use after cutting 70%<br>
> > >> of our staff<br>
> > >> last year :P<br>
> > >> Exceeding a quota in a parameter would mean something had<br>
> > >> changed with<br>
> > >> the file or processing, and nothing has.<br>
> > >> We haven't changed groups or permissions in years either -<br>
> > >> the current<br>
> > >> setup is pretty static.<br>
> > >> Fixmog (our version is called 'correct') hasn't been executed<br>
> > >> in months<br>
> > >> according to the log it keeps.<br>
> > >> No binaries have been swapped in or out (we'd like to tho!<br>
> > >> still haven't<br>
> > >> got 5.6 to pass all our tests on our test box unfortunately)<br>
> > >> -u shouldn't make any diff; it's not used and if we needed to<br>
> > >> use it I<br>
> > >> am certain the need would have shown up sometime prior to this.<br>
> > >><br>
> > >> A typical use would be to add this to the end of a bash<br>
> > >> script to record<br>
> > >> that a script had completed running:<br>
> > >> ARGPM="file=none;processing=none;qualifier=hh;script=importship;user=$LOGNAME;note=none;status=COMPLETED"<br>
> > >> /appl/fp/rreport log_operations -fp log_it -sr 1 -r $ARGPM -h "Logging"<br>
> > >><br>
> > >> Most of the actual processing just parses @PM, looks up a<br>
> > >> free record,<br>
> > >> and puts data in the correct fields.<br>
> > >><br>
> > >> No other processing anywhere ever looks up the file; it is<br>
> > strictly a<br>
> > >> log, nothing more, and the only processing that touches it<br>
> > (log_it)<br>
> > >> is always run either via a script or a SYSTEM command.<br>
> > >><br>
> > >> Things we tried to see if they would help:<br>
> > >> * file had 600,000 records going back 4yrs, so we copied<br>
> > the data to<br>
> > >> another qualifier, deleted the original qualifier, and<br>
> > copied back the<br>
> > >> most recent 10,000 entries to see if it was just a size issue.<br>
> > >> * rebuilt all the indices.<br>
> > >> * rebooted the OS.<br>
> > >><br>
> > >> This logging hasn't been added to any new processing or<br>
> > scripts for<br>
> > >> several months.<br>
> > >><br>
> > >> ><br>
> > >> > I agree that the code would not seem to be important<br>
> > since it has<br>
> > >> worked... before, so again, it seems like the environment<br>
> > has changed<br>
> > >> somehow. Maybe if we saw the whole setup, relevant code<br>
> > and all we<br>
> > >> could give more suggestions. Oh, I just thought of one... is it<br>
> > >> possible you are looking up to a particular record, say<br>
> > >> record 1... and<br>
> > >> that record is not there anymore?<br>
> > >><br>
> > >> All the records being looked up to exist. The environment<br>
> > is pretty<br>
> > >> static - our needs have been pretty clearly defined by<br>
> > this point and<br>
> > >> new systems are almost always implemented on our Debian boxes<br>
> > >> as SCO is<br>
> > >> so limiting and so badly supported.<br>
> > >><br>
> > >> Thanks for the ideas! Hopefully my answers might light up a<br>
> > >> bulb over<br>
> > >> someone's head...<br>
> > >><br>
> > >> Tyler<br>
> > >><br>
> > >><br>
> > ><br>
> > ><br>
> > ><br>
> ><br>
><br>
</div></div>> _______________________________________________<br>
> Filepro-list mailing list<br>
> <a href="mailto:Filepro-list@lists.celestial.com">Filepro-list@lists.celestial.com</a><br>
> <a href="http://mailman.celestial.com/mailman/listinfo/filepro-list" target="_blank">http://mailman.celestial.com/mailman/listinfo/filepro-list</a><br>
<br>
<br>
</blockquote></div><br>