user command

Brian K. White brian at aljex.com
Tue Sep 19 13:28:45 PDT 2006


----- Original Message ----- 
From: "Fairlight" <fairlite at fairlite.com>
To: "filePro Mailing List" <filepro-list at lists.celestial.com>
Sent: Tuesday, September 19, 2006 2:33 PM
Subject: Re: user command


> Yo, homey, in case you don' be listenin', Richard D. Williams done said:
>> Linux, FP 5.14
>>
>> I am trying to get a unique number by executing this script called 
>> uniqueid:
>>
>> temp_file="/tmp/$$"
>> echo $temp_file
>>
>> This is my result from the command line:
>> # uniqueid
>> /tmp/23124
>> #
>>
>> My filepro user command syntax is:
>>
>> uniq = /usr/local/bin/uniqueid
>> ha=uniq
>> msgbox ha
>>
>> This is my result from filepro @key:
>> (the graphics did not cut and paste very well, but you get the idea.)
>>
>> Any suggestions as to what I am doing wrong?
>>
>> Richard D. Williams
>
> Blank field, from the look of it.
>
> The first line should be:
>
> user uniq = /usr/local/bin/uniqueid

Correct.
This is not a very good thing to do, though.
It's OK efficiency-wise, but it's prone to problems.

If this is used in clerk processing that might not exit quickly, like input 
processing that stays running all day as users move from record to record or 
add new records, it will produce lots of zombie background processes that 
can fill up the process table and make the whole server actually fail to fork 
new processes.

One answer is to make the script loop, so there is always one uniqueid 
process per clerk process that uses it, but never more than one.
If it's used in a report you really REALLY want to make the script loop, 
with no "close uniq" statements anywhere. Even then it's delicate and prone 
to locking up if you try to read when the script is not writing, or vice 
versa.
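For example, roughly like this (just a sketch; it assumes the usual 
user-command arrangement where filePro writes a line to the script's stdin 
and reads the reply back from its stdout):
  #!/bin/sh
  # looping uniqueid - one copy stays alive per clerk process
  n=0
  while read junk; do      # block until filePro sends a request line
      n=$((n + 1))         # per-process counter, so repeated calls differ
      echo "$$.$n"         # this script's pid plus the counter
  done
When the clerk exits and closes the pipe, the read fails and the script 
exits with it.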

A more robust answer is to use system and a temp file to get your result. 
This is not as bad as it sounds: this kind of tiny create-write-read-destroy 
cycle happens fast enough that it all stays in the filesystem cache and 
rarely makes it to the physical disk.
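In shell terms the whole cycle is just this (a sketch; processing would 
issue the first command via system and then read the file back, and the 
file name here is only an example):
  tmp=/tmp/uid.$$                    # per-process temp file name
  /usr/local/bin/uniqueid > "$tmp"   # create + write (what system runs)
  id=`cat "$tmp"`                    # read the result back
  rm -f "$tmp"                       # destroy it again right away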

The best answer is probably not to use any external process at all just to 
get a unique value.
It would be nice if fp had pid and ppid functions, or a unique function like 
apache's mod_unique_id.
But at least there are rand() and @tm and @rn and @fi and @id and @qu, which 
can be combined to make pretty unique strings.
Also, I find it handy to put something in /etc/profile that sets a variable 
MYTTY.
(TTY and tty are built in to some shells, and sometimes can't be 
overwritten, or can be overwritten but revert to what the shell wants in any 
child processes; hence MYTTY, or maybe SESSID for session id.)
Then you can include getenv("MYTTY") in the string.
If your shell has a TTY variable built in (zsh sets one, for example):
  MYTTY=${TTY##*/}
If your shell does not (bash, for one, does not set it):
  MYTTY=`tty` MYTTY=${MYTTY##*/}
just like that, no semicolon, all on one line.
The ${...##*/} strips off everything up to and including the last "/".
For a typical ssh session on /dev/pts/NNN this leaves just "NNN", which 
isn't really quite good enough for uniqueness. Ideally I should remove all 
the slashes and the "dev" and keep the rest, which would be guaranteed 
unique against any other tty, but in my case I happen to know I want just 
the final part.
Of course, whichever way you acquire MYTTY, always export MYTTY afterward 
(or before; it doesn't matter).
Or you could do SESSID=$$ ; export SESSID

That would give you the parent pid of all processes that session runs, which 
would be unique among concurrent sessions but not unique between processes 
that are part of the same session: call tables and system commands would all 
have the same SESSID as their parents. The same is true for MYTTY, though. 
That's why I always use rand() as well.
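Putting the /etc/profile piece together, it comes out about like this (a 
sketch, assuming a Bourne-style login shell):
  # in /etc/profile - runs once per login session
  MYTTY=`tty`                # e.g. /dev/pts/3
  MYTTY=${MYTTY##*/}         # strip through the last "/" -> "3"
  SESSID=$$                  # pid of the login shell itself
  export MYTTY SESSID        # so getenv() can see them from filePro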

I have also found it necessary to maintain a heavily used variable COMPANY 
that all kinds of things hinge on. It's similar to PFQUAL or @qu, except 
that qualifiers can be and are used for all sorts of purposes besides the 
simple case of a separate company's or branch's data set across the board.  
So I often have getenv("COMPANY") in there too.

Even a long string of junk like this is better than user or system, for 
both efficiency and robustness:
tf = getenv("PFTMP") { "/" { @qu { "_" { @fi { "_" { @rn { "_" { @id { "_" { getenv("MYTTY") { "_" { doedit(@td,"yymd") { xlate(@tm,":","") { ".pcl"
or
tf = getenv("PFTMP") { "/" { @qu { "_" { @fi { "_" { @rn { "_" { @id { "_" { getenv("MYTTY") { "_" { rand() { ".pcl"

Note that for rand() you need to put x = rand("-1") in @once in every table 
that will use rand() or rand("") (older versions need the "").
Otherwise, every time your table runs, rand() comes out with the same value 
the first time, the same (different) value the next time, and so on.

Another advantage to this type of temp file name is that it's informative. 
When you are debugging, it's so much easier to find the source of a problem 
when you know exactly where and when and from whom the file came. Or by the 
stuff that's missing, like if there's nothing where PFTMP or @qu should be, 
etc.

Maybe that's a good call table library routine: something that emulates 
apache's mod_unique_id?
With processing we can make even more securely unique values by incrementing 
our own counter.
You'd just use it like this:
  declare extern UNIQ ; call "lib/uniq" ; blah = blah blah blah UNIQ blah...
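Presumably lib/uniq would do a protected lookup to a one-record control 
file, bump a counter field, and hand the new value back in UNIQ. The same 
shape in shell, just to show the idea (a sketch; the counter and lock file 
names are made up):
  #!/bin/sh
  # a counter that survives across processes, updated under a lock
  ctr=/usr/local/lib/uniq.counter
  until mkdir "$ctr.lock" 2>/dev/null; do sleep 1; done  # crude "protected lookup"
  n=`cat "$ctr" 2>/dev/null`
  n=$((${n:-0} + 1))         # increment our own counter
  echo "$n" > "$ctr"
  rmdir "$ctr.lock"          # release the lock
  echo "$n"                  # the unique value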

My systems all have lots of users and lots of separate companies (and 
companies with qualifiers for individual people in some cases), and we do a 
lot of temp files for web browser / email / print to pdf / print / fax / cgi 
scripts that run fp / external utils like mileage calculators / edi 
transactions / etc., so temp file handling has been something I've had to 
keep improving over time as we run into cases where pretty good turned out 
not to be good enough. PID and PPID and UNIQUE_ID a la apache would really 
be helpful built-ins.
So would time granularity that goes as fine as the underlying system does, 
as Mark and others have noted in the transaction-layer discussion, maybe 
with an additional index tacked on, such that if the underlying OS only goes 
to, say, .01 second, the improved @TM or time() or whatever would promise to 
add digits of its own and increment them itself if necessary.
So if you ran time() 5 times in the same .01 second, time() would return
hh:mm:ss.01000
hh:mm:ss.01001
hh:mm:ss.01002
hh:mm:ss.01003
hh:mm:ss.01004
Simply incrementing the final 3 digits, not really trying to estimate how 
much to increment to match real life.
We could do that ourselves with a call table too.
In that case, since @tm only goes to the full second, and since the call 
table would be system-wide, we'd want to add something like 4 or 5 extra 
digits of counter, because how many times can even a crappy old machine run 
a tiny call table like that in a second? Possibly thousands, even though it 
would probably have to do protected lookups to a control file. And a beefy 
machine running linux?
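A shell version of that promise might look like this (a sketch only; 
date(1) here only gives whole seconds, so the counter supplies all the 
extra digits):
  #!/bin/sh
  last="" seq=0
  stamp() {
      now=`date +%H:%M:%S`   # whatever granularity the OS gives us
      if [ "$now" = "$last" ]; then
          seq=$((seq + 1))   # same tick as the last call: bump the suffix
      else
          seq=0 last=$now
      fi
      printf '%s.%05d\n' "$now" "$seq"
  }
  stamp; stamp; stamp        # three calls in one second still all differ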

Brian K. White  --  brian at aljex.com  --  http://www.aljex.com/bkw/
+++++[>+++[>+++++>+++++++<<-]<-]>>+.>.+++++.+++++++.-.[>+<---]>++.
filePro  BBx    Linux  SCO  FreeBSD    #callahans  Satriani  Filk!
