Really big ramdisk: a bad idea?

Kevin O'Gorman kevin
Mon May 17 11:37:32 PDT 2004


On Thu, 12 Sep 2002, Net Llama! wrote:

> Before you go this route, are you sure that the bottleneck is memory?  If
> you look at the output from free, is it all consumed?  What about in top,
> how much of the memory is it using?

It stands to reason.  The CPU is 90% idle overall, so something other
than the CPU is blocking progress.  What could it be but disk access?
The easiest way to avoid disk I/O is not to do any, by doing the work in RAM.
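For what it's worth, here is a minimal sketch of the RAM-backed build,
assuming a modern kernel where /dev/shm is already mounted as tmpfs by
default (the directory names and the games.db filename are made up for
illustration):

```shell
# /dev/shm is a tmpfs on most 2.4+ kernels, so no mount or reboot is
# needed: build the database there, then copy the finished file to disk.
BUILD_DIR=$(mktemp -d /dev/shm/dbbuild.XXXXXX)

# ... run the gdbm build here with its output pointed at $BUILD_DIR ...
echo "stand-in for the built database" > "$BUILD_DIR/games.db"

cp "$BUILD_DIR/games.db" ./games.db   # persist the result to real disk
rm -rf "$BUILD_DIR"                   # tear down; the RAM is freed at once
```

An explicit `mount -t tmpfs -o size=512m tmpfs /mnt/dbbuild` works the
same way if you want a dedicated mount, though it needs root.  Either
way, the copy-back step is what lets the result survive a reboot.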

Am I missing something?

++ kevin

> 
> On Thu, 12 Sep 2002, Kevin O'Gorman wrote:
> 
> > I've got a database build that runs for 12 hours on my old machine, and
> > I have to do this more often than I like.  It's getting longer too.
> > The database is gdbm, there are about 2 million records of roughly
> > 100 bytes each (average; most are shorter, a few are longer).
> > While it runs, top(1) reports the process using less than 10% of
> > the CPU, so I assume the bulk of the time is in paging the database.
> > The current machine is maxed out at 256MB RAM, and the database is
> > itself about that size, so it's not all fitting in the Linux buffer
> > cache.
> >
> > I'm getting a machine online with 2GB DDR RAM, and I'm thinking of
> > performing this build in a really big RAMDISK.  Does this make sense?
> > First of all, can you build and tear down such a thing without needing
> > a reboot?  Second, do accesses to the ramdisk get put in the Linux
> > buffer cache, and would this cause a problem?  Can anyone see a
> > reason this shouldn't work and speed up the build roughly 10x?
> >
> > ++ kevin
> >
> >
> 
> 
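On the buffer-cache question quoted above: with tmpfs the file data
lives directly in the page cache, so there is no second buffered copy
the way there is with the old block-device ramdisk (/dev/ram0).  A
quick sketch of checking whether tmpfs is available, and how much RAM
the default mount is holding (output columns vary by distribution):

```shell
# tmpfs should appear in the kernel's filesystem list; if it does,
# a tmpfs mount can be created and torn down at runtime, no reboot needed.
grep tmpfs /proc/filesystems

# df shows how much RAM the default tmpfs mount currently holds.
df -h /dev/shm
```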

-- 
Kevin O'Gorman, PhD  (805) 650-6274  mailto:kevin at kosmanor.com
Permanent e-mail forwarder: mailto:Kevin.O'Gorman.64 at Alum.Dartmouth.org
Permanent e-mail forwarder  mailto:kogorman at umail.ucsb.edu
Web: http://kosmanor.com/~kevin/index.html


