file too large
Jeff Harrison
jeffaharrison at yahoo.com
Thu Feb 3 14:00:36 PST 2011
From: "scooter6 at gmail.com" <scooter6 at gmail.com>
>
>To: Jeff Harrison <jeffaharrison at yahoo.com>
>Cc: filePro Mailing List <filepro-list at lists.celestial.com>
>Sent: Thu, February 3, 2011 4:45:39 PM
>Subject: Re: file too large
>
>
> Well, there are over 8 million records in this file - so I was looking for a
> bit quicker solution to get this resolved.
> I can't wait to get this system on a CentOS server - SCO is soooooo dang slow
> haha
> I'm currently looking to export the records to a few csv files, then delete
> key & data for this file and import only the most recent 6 months or so, and
> then resolve the rest when I get back next week from vacation....
> Why do these things happen the day before I'm leaving town? haha
>
> So, to clarify - is this a SCO OpenServer 5.0.5 issue, or is this a filePro
> error? Meaning, if I upgrade tonight to 5.6.10, would this resolve the issue?
This is an OS limitation, not a filePro error - I believe there is a way to
expand the limit in the OS - perhaps a Unix guru will speak up?
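For what it's worth, on OpenServer 5 the per-process file-size cap is usually
the ULIMIT kernel tunable (expressed in 512-byte blocks). A minimal sketch of
checking and raising it - treat the paths and tool names here as assumptions
to verify on your own box:

    # Sketch only - assumes SCO OpenServer 5's kernel-tuning layout.
    ulimit -f           # show the current per-process file-size limit (blocks)
    cd /etc/conf/cf.d
    ./configure         # interactively raise the ULIMIT tunable
    ./link_unix         # relink the kernel, then reboot to apply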
Not sure why you would need to resort to CSV export/import - just archive the
recent 6 months that you need to a file with a duplicate map, rename your
original key and data to hldkey and hlddata (or something like that), and then
in the OS copy the key/data with only the recent info back to the original
location. Then rebuild your indexes.
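At the OS level, that copy-back step might look something like this - the
/appl/filepro path and the file names "myfile" and "archive" are made-up
placeholders; substitute your own PFDATA directory and file names:

    # Sketch only - all paths and file names here are assumptions.
    cd /appl/filepro                  # wherever your PFDATA points
    mv myfile/key  myfile/hldkey      # set the full 8-million-record file aside
    mv myfile/data myfile/hlddata     # only if the file has extended data records
    cp archive/key  myfile/key        # copy the 6-month archive back into place
    cp archive/data myfile/data
    # then rebuild the indexes (e.g. with dxmaint)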
If you want it to go faster, you can remove the "delete" when you archive - just
remember that you will need to go back and remove those records later.
Jeff Harrison
jeffaharrison at yahoo.com
Author of JHImport and JHExport