file too large

Brian K. White brian at aljex.com
Thu Feb 3 14:29:07 PST 2011


There is no way to increase the 2G file size limit in the OS.

What you CAN do in filepro, if archiving isn't good enough (say you just 
plain need more live records than fit in 2G), is use filepro's extents 
feature.

You make a new keyx1, or for qualifiers keyx1qual; same for data and 
indexes: datax1 or datax1qual, indexx1.A or indexx1qual.A, etc...

Then fp will transparently use these new x1 files as part of the 
original filepro file. You don't have to do anything special in 
processing, the way you do for qualifiers; you just get to add and look 
up twice as many records, as if the file were twice as big.
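
On disk the extent segments just sit alongside the original ones. From 
memory, a listing for a hypothetical filepro file "orders" with one 
extent would look something like this (the path and file name are made 
up):

    $ ls /appl/filepro/orders
    key      data      index.A      # original segments, 2G cap each
    keyx1    datax1    indexx1.A    # first extent; x2, x3, ... add more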

You can add more extents, x2, x3, etc.; I don't know how far it goes.

Each individual file cannot exceed 2G, but the total can be n*2G, and in 
fp you can mostly forget the 2G limit and pretend you have a magic n*2G 
file.
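
A quick way to see how close a file is to the ceiling is to look for 
segment files within shouting distance of 2G, something like this (the 
path is made up, and the "c" byte suffix to -size is common but worth 
checking against your find(1) man page):

    # list any segment files over ~1.9G, i.e. nearing the 2G ceiling
    find /appl/filepro/orders -type f -size +1900000000c -exec ls -l {} \;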

There are some details about how to create and use extents that I'm 
forgetting, since we haven't had to use them since moving to Linux, but 
I do have some notes somewhere. I think I may have actually posted all 
the key points here some time in the past. I don't know how well it's 
covered in the normal documentation, but just knowing the capability 
exists, and the overall scheme, should let you figure out the rest 
easily enough with a little trial & error.

Remember, that's only if you can't archive for some reason. Archiving is simpler.

-- 
bkw


On 2/3/2011 5:00 PM, Jeff Harrison wrote:
>> From: "scooter6 at gmail.com" <scooter6 at gmail.com>
>> To: Jeff Harrison <jeffaharrison at yahoo.com>
>> Cc: filePro Mailing List <filepro-list at lists.celestial.com>
>> Sent: Thu, February 3, 2011 4:45:39 PM
>> Subject: Re: file too large
>>
>>   Well, there are over 8 million records in this file - so I was looking for a
>> bit quicker of a solution to get this resolved.
>>   I can't wait to get this system on a CentOS server - SCO is soooooo dang slow
>> haha
>>   I'm currently looking to export the records to a few csv files, then delete
>> key & data for this file and import only the most recent 6 months or so, and
>> then resolve the rest when I get back next week from vacation....
>>   Why do these things happen the day before I'm leaving town? haha
>>
>>   So, to clarify - is this a SCO OpenServer 5.0.5 issue, or is this a filePro
>> error? Meaning, if I upgrade tonight to 5.6.10 would this resolve the issue?
>>
>
> This is an OS limitation - I believe there is a way to expand the limit in the
> OS - perhaps a Unix Guru will speak up?
>
> Not sure why you would need to resort to CSV file import/export - just archive
> the recent 6 months that you need to a file with a duplicate map, then rename
> your original key and data to hldkey and hlddata or something like that, and
> then in the OS copy the key/data with the recent-only info back to the original
> location.  And then rebuild your indexes.
>
> If you want it to go faster, you can remove the "delete" when you archive - just
> remember that you will need to go back and remove those records later.
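
For concreteness, the OS-level part of that shuffle would look something 
like this (all paths and the "orders6mo" archive file name are made up; 
the archive itself and the index rebuild happen inside filepro):

    # rough sketch of the rename/copy step described above; paths and
    # the "orders6mo" archive name are hypothetical
    cd /appl/filepro/orders
    mv key  hldkey                        # set the full segments aside
    mv data hlddata
    cp /appl/filepro/orders6mo/key  key   # recent-only copies back in place
    cp /appl/filepro/orders6mo/data data
    # then rebuild the indexes from within filepro (e.g. dxmaint)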
>
> Jeff Harrison
> jeffaharrison at yahoo.com
> Author of JHImport and JHExport
>


