> Dear all.
> By default the Linux kernel uses a 1k FS block size.
> AFAIK, Squashfs is mainly used on embedded systems with a 4k page size
> and flash memory. And almost all VFS operations are made on whole pages.
> Increasing FS block size from 1k to 4k could speed up read accesses
> and reduce elevator overhead. It should reduce system boot-up time.
> I've run several tests, and they show up to a 7% reduction in boot time.
> Could you explain why 1k was chosen as the default FS block size?
This shows some confusion.
Linux I/O is conceptually split into two layers: the block cache and the
page cache. Buffers in the block cache are the size of the underlying
block device's sectors, which is normally 512 bytes. Page cache buffers
are normally 4K (the page size).
A filesystem is free to use any block size, which doesn't have to be
either the underlying block device sector size or the page size.
However, most often Linux filesystems have 4K block sizes matching the
page size.
Irrespective of the filesystem block size and the sector size, Linux
(the VFS layer) performs I/O on the filesystem in page-size units, i.e. 4K.
For a given 4K I/O request it is the responsibility of the filesystem
to map that to one of its (possibly different size) blocks, which in
turn it maps down to a series of sector size I/O requests to the
underlying block device.
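The chain of mappings described above can be sketched with some simple
arithmetic. This is only an illustration in Python, not kernel code; the
function name and the assumption of a contiguously laid-out file are mine,
since real filesystems resolve blocks through extent or block maps.

```python
PAGE_SIZE = 4096     # typical VFS I/O unit
SECTOR_SIZE = 512    # typical block device sector size

def map_request(offset, fs_block_size):
    """Map a page-aligned read at byte `offset` to a filesystem block
    and the range of device sectors backing that block.

    Hypothetical helper: assumes the file's blocks are contiguous on
    disk, which real filesystems do not guarantee.
    """
    fs_block = offset // fs_block_size          # which FS block holds the page
    block_start = fs_block * fs_block_size      # byte offset of that block
    first_sector = block_start // SECTOR_SIZE   # first sector of the block
    sector_count = fs_block_size // SECTOR_SIZE # sectors per FS block
    return fs_block, first_sector, sector_count
```

For example, a 4K read at offset 8192 on a filesystem with 4K blocks maps
to block 2, which spans eight 512-byte sectors starting at sector 16.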
Prior to version 2.0 Squashfs used 32K blocks, which meant for a given
4K VFS request it read 32K of data, and pushed the remaining 28K into
the page cache itself. Squashfs from 2.0 to 3.2 used 64K blocks, and
the latest 3.3 version uses 128K blocks by default.
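The 32K example above generalizes: one Squashfs block read satisfies the
requested page and fills the neighbouring pages of the same block into the
page cache. A small Python sketch of that arithmetic (illustrative only;
the function name is mine, not a kernel interface):

```python
PAGE_SIZE = 4096  # typical VFS request size

def pages_covered(squashfs_block_size, requested_page):
    """Return the file page indices satisfied by decompressing the one
    Squashfs block that contains `requested_page`. All pages other than
    the requested one are pushed into the page cache by Squashfs itself.
    """
    pages_per_block = squashfs_block_size // PAGE_SIZE
    first = (requested_page // pages_per_block) * pages_per_block
    return list(range(first, first + pages_per_block))
```

With 32K blocks, a request for page 5 decompresses data for pages 0-7
(the remaining 28K, i.e. seven pages, goes straight into the page cache);
with 128K blocks, a single block read covers 32 pages.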
So to directly answer your question: Squashfs doesn't use 1K blocks; it
already uses 128K blocks.