I will not address the memory management problem, if there is one here. I am addressing your method of determining a difference.
No offense, but it seems extremely inefficient, and the larger the file you are computing the md5sum on, the more inefficient it becomes.
Why not use the file system's "modified" timestamp, or, depending on the types of changes you are expecting, even just check for a change in file size? With either of these you wouldn't have to open the file at all, let alone slog through x
bytes to compute an MD5 hash. In terms of both resource usage and speed, either would in my opinion be a far better method than the one you currently use.
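As a minimal sketch of the timestamp idea, assuming GNU `stat` as shipped with Cygwin (the temp file and the `touch` here just stand in for your watched file and a real modification):

```shell
#!/bin/sh
# Detect a change by comparing modification times (stat -c %Y, seconds
# since the epoch) instead of re-hashing the whole file each pass.
FILE=$(mktemp)                  # stand-in for the file being monitored
before=$(stat -c %Y "$FILE")

sleep 1
touch "$FILE"                   # simulate the file being modified

after=$(stat -c %Y "$FILE")
if [ "$after" != "$before" ]; then
    echo "changed"              # here your real script would process the file
fi
rm -f "$FILE"
```

The same pattern works with `stat -c %s` (file size) if the changes you expect always alter the length; either way it's a single cheap syscall per check instead of reading the whole file.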
> Hi folks,
> I've written a shell script running under CygWin, the purpose of which is to
> monitor a file for changes. If the MD5 hash fails to match the previous hash, it
> will execute a command to process the file. I used a 1-second delay between
> checks of the hash. This works great for several hours, but then gives an "out
> of memory" error and actually brings Windows 7 to its knees.
> The script uses a loop within a loop; the outer loop is infinite by design, and
> the inner loop ends when it finds a non-matching hash and processes the file. It
> broke while running the inner loop, without the file having been modified at
> that point in time. The file was modified numerous times previously, triggering
> the code below the inner loop, but not around the time when the memory error
> occurred.
> I am just wondering why the loops here are consuming increasing amounts of
> memory over time? I'm assigning new MD5 values into existing variables over and
> over, not allocating new variables for each MD5 assignment. (Right??) Is 1
> second perhaps too short a delay... does the system need time to deallocate
> something between each iteration of the inner loop?