Package: agedu
Version: 20211129
Revision: 1
Description: Utility for tracking down wasted disk space
License: BSD
# Free to update, modify, and take over
Maintainer: Hanspeter Niederstrasser <nieder@users.sourceforge.net>
Source: https://www.chiark.greenend.org.uk/~sgtatham/%n/%n-%v.8cd63c5.tar.gz
Source-Checksum: SHA256(ceaee592ef21b8cbb254aa7e9c5d22cefab24535e137618a4d0af591eba8339f)
BuildDepends: <<
	cmake,
	fink-buildenv-modules
<<
CompileScript: <<
	#!/bin/sh -ev
	# fink-buildenv-cmake.sh (from fink-buildenv-modules) sets up the
	# Fink build environment and defines $FINK_CMAKE_ARGS.
	. %p/sbin/fink-buildenv-cmake.sh
	# Out-of-source CMake build in a scratch directory.
	mkdir finkbuild
	pushd finkbuild
		cmake $FINK_CMAKE_ARGS ..
		make -w
	popd
<<

InstallScript: <<
	cd finkbuild; make install DESTDIR=%d
<<
DocFiles: LICENCE TODO
Homepage: https://www.chiark.greenend.org.uk/~sgtatham/agedu/
DescDetail: <<
Suppose you're running low on disk space. You need to free some up, by
finding something that's a waste of space and deleting it (or moving it
to an archive medium). How do you find the right stuff to delete, that
saves you the maximum space at the cost of minimum inconvenience?

Unix provides the standard du utility, which scans your disk and tells
you which directories contain the largest amounts of data. That can help
you narrow your search to the things most worth deleting.

However, that only tells you what's big. What you really want to know is
what's too big. By itself, du won't let you distinguish between data
that's big because you're doing something that needs it to be big, and
data that's big because you unpacked it once and forgot about it.

Most Unix file systems, in their default mode, helpfully record when a
file was last accessed. Not just when it was written or modified, but
when it was even read. So if you generated a large amount of data years
ago, forgot to clean it up, and have never used it since, then it ought
in principle to be possible to use those last-access time stamps to tell
the difference between that and a large amount of data you're still
using regularly.

agedu is a program which does this. It does basically the same sort of
disk scan as du, but it also records the last-access times of everything
it scans. Then it builds an index that lets it efficiently generate
reports giving a summary of the results for each subdirectory, and then
it produces those reports on demand (see the usage notes below).
<<
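DescUsage: <<
A minimal session (the path is illustrative). First scan a directory
tree, which writes an index file (agedu.dat in the current directory
by default); later queries read that index without rescanning:

  agedu -s /home/fred         # scan /home/fred and build the index
  agedu -w                    # serve an interactive report over HTTP
                              # (prints the URL to visit)
  agedu -t /home/fred -a 2y   # du-style text report counting only
                              # data not accessed in two years

See the agedu man page for the full set of query modes and options.
<<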