On Mon, 2006-07-10 at 01:22 -0700, Cosmo Nova wrote:
> I am working on a digital video recorder. The system is Linux-based and
> there are 16 video sources. The video sources write data to the data disk
> synchronously. Once the disk fills up, a recycle mechanism removes the
> old video files and frees up space. As you can imagine, there will be a
> serious external fragmentation problem as time passes.
> I was told that jfs and xfs tackle the fragmentation problem much better
> than ext3, so I conducted a few benchmark tests and found that jfs does an
> excellent job. The findings are not strong enough to persuade my boss
> to change, and hence I've been reading the rationale and source code behind
> jfs. Here I summarize two major questions whose answers can help me explain jfs's
> behavior:
> 1. When I write the first byte or open a file, what will jfs do? My
> finding is that the 16 channels create files of around 32 MB in size. They
> grow, of course, but the majority have only ONE fragment (number of
> extents = 1)...
> According to your discussion with Peter, jfs allocates one page to a file
> at a time, and this allocation is locked under one allocation group. The
> page size according to jfs_filesys.h is 4096. You said the allocation would be
> allocated but not recorded (ABNR),
No, jfs doesn't use the ABNR blocks currently. This is something the
OS/2 implementation supported, but the Linux implementation will not
create ABNR extents.
> which raised two subquestions:
> 1a. Are those ABNR blocks stored temporarily in memory? With 16 files
> all growing toward 32 MB, that is a huge memory requirement. Is it
> really the case that everything is stored in memory and flushed to the
> disk at file close?? What is the jfs memory requirement then?
The blocks are allocated when the data is written. The data may be
stored in memory for a while, but can be written to disk at any time.
> 1b. Since only one file is allocated in one allocation group (AG), how
> many AGs are there on your disk when it's formatted (mkfs)?
There are typically somewhere between 65 and 128 AGs on a disk. The
minimum size of an allocation group is 8K blocks, or 32 MB, so smaller
volumes may contain fewer than that (possibly only 1).
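As a rough illustration of that geometry (this is a sketch, not the actual mkfs.jfs code): assuming 4096-byte blocks, a minimum AG size of 8192 blocks (32 MB), and an AG size that doubles as needed to keep the count under 128, the AG layout for a given volume size would look like this:

```python
# Hypothetical sketch of allocation-group sizing; the constants below are
# taken from the discussion above, not from the jfs source.
BLOCK_SIZE = 4096
MIN_AG_BLOCKS = 8192      # 32 MB minimum allocation group
MAX_AG_COUNT = 128

def ag_geometry(volume_bytes):
    """Return (ag_count, ag_blocks) for a volume of the given size."""
    total_blocks = volume_bytes // BLOCK_SIZE
    ag_blocks = MIN_AG_BLOCKS
    # Double the AG size until the AG count fits under the maximum.
    while total_blocks // ag_blocks > MAX_AG_COUNT:
        ag_blocks *= 2
    ag_count = max(1, (total_blocks + ag_blocks - 1) // ag_blocks)
    return ag_count, ag_blocks

print(ag_geometry(1 << 30))   # 1 GB volume  -> (32, 8192)
print(ag_geometry(1 << 40))   # 1 TB volume  -> (128, 2097152)
```

Note how a 32 MB volume ends up with a single AG, while large volumes saturate at 128 AGs with proportionally larger groups.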
> And is there an upper bound on the maximum number of files that can be
> open and written at the same moment in jfs??
No, the "locks" on the AG due to an open file being written to it are
not absolute. If a new allocation is needed and no free blocks are
available in an unlocked AG, it will find space in the locked AG, which
will probably lead to fragmentation of the files being created.
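That fallback policy can be sketched in a few lines. This is a toy model of the behavior described above, not jfs code; the function name and data shapes are invented for illustration:

```python
# Toy model of AG selection: prefer an allocation group that no other open
# file is writing to, but fall back to a busy ("locked") AG rather than
# fail the allocation -- which is where fragmentation can creep in.
def pick_ag(free_blocks, busy_ags, needed):
    """free_blocks: dict mapping AG number -> free block count.
    busy_ags: set of AGs currently claimed by other open files."""
    # First pass: only AGs not claimed by another writer.
    for ag, free in free_blocks.items():
        if ag not in busy_ags and free >= needed:
            return ag
    # Fallback: take space from a busy AG; two files interleaving writes
    # in the same AG will fragment each other.
    for ag, free in free_blocks.items():
        if free >= needed:
            return ag
    return None  # volume is full

# AG 0 is unclaimed but empty, so the allocator falls back to busy AG 1.
print(pick_ag({0: 0, 1: 100, 2: 500}, busy_ags={1, 2}, needed=50))  # -> 1
```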
> 2. jfs uses so-called extent-based allocation. How does jfs know the right
> size of extent to allocate to a fixed-size file?
The extent is initially small, and grows as long as new allocations are
contiguous with the already-allocated blocks.
> And to a growing file?
Whenever we're allocating space to the end of a file, the allocator
tries to use the blocks immediately after the last allocated block. As
long as these blocks (1 block typically) are free, the existing extent
is grown to include the new blocks.
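A minimal model of that append path, assuming a simplified free-block set rather than jfs's real block map, might look like this:

```python
# Toy model of extent growth on append: try the block immediately after the
# last extent's end; if it's free, extend that extent instead of starting a
# new one. Simplified illustration, not the jfs allocator.
def append_block(extents, free):
    """extents: list of [start, length] pairs; free: set of free blocks."""
    if extents:
        start, length = extents[-1]
        nxt = start + length
        if nxt in free:                # contiguous -> grow the last extent
            free.remove(nxt)
            extents[-1][1] += 1
            return
    # Otherwise start a new extent at the lowest free block (simplified).
    blk = min(free)
    free.remove(blk)
    extents.append([blk, 1])

free = set(range(100))
extents = []
for _ in range(10):                    # ten one-block appends
    append_block(extents, free)
print(extents)                         # -> [[0, 10]]: a single extent
```

As long as nothing else steals the adjacent blocks, every append lands in the same extent, which matches the single-extent files observed in the benchmarks.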
> The stats I found show that the majority of my files (<= 32 MB) are
> single-fragment files (number of extents = 1). I would really like to
> understand the "magic" of how it can be
It seems as if this "magic" works pretty well. :-) I originally came
up with this idea of "locking" the AGs to avoid fragmentation because it
was easier and quicker to implement than preallocation or delayed
allocation.
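The effect of that AG locking on your 16-writer workload can be shown with a small end-to-end simulation (again a toy illustration, not jfs code): two files appending in strict alternation never interleave on disk, because each is steered into its own AG.

```python
# Toy simulation: two files appending in turn, each pinned to its own AG,
# end up with one contiguous extent apiece despite interleaved writes.
AG_SIZE = 16

def append(layout, name, ag):
    """Allocate the next free block within the file's AG (linear scan)."""
    base = ag * AG_SIZE
    blk = next(b for b in range(base, base + AG_SIZE) if b not in layout)
    layout[blk] = name

def extent_count(layout, name):
    """Count contiguous runs of blocks belonging to one file."""
    blocks = sorted(b for b, n in layout.items() if n == name)
    return 1 + sum(1 for a, b in zip(blocks, blocks[1:]) if b != a + 1)

layout = {}
for _ in range(8):                 # interleaved appends, one block each
    append(layout, "file-A", ag=0)
    append(layout, "file-B", ag=1)

print(extent_count(layout, "file-A"), extent_count(layout, "file-B"))  # -> 1 1
```

If both files shared one AG, the same interleaving would produce eight one-block extents per file.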
> The findings and the rationale behind them will lead us to a filesystem
> change. I would be very grateful if anyone can help me. Thank you very
> much!
I hope my answers were helpful.
IBM Linux Technology Center