
Small tips matter a lot!

We had a situation where 3 TB of database datafiles had to be deleted from the sapdata filesystems (sapdata1 through sapdata12).

There were 300 files of 10 GB each and 2 files of 2 GB each.

The traditional Linux rm command took about 1 minute per 10 GB file, so around 300 minutes, i.e. 5 hours, in total.

Even firing multiple rm processes in the background did not help much.

Necessity forced me to Google, and I hit a great link.

I learnt that Perl's truncate function can be used very efficiently in this scenario.

What does truncate do? It truncates a given file to a specified length (in bytes).

For example, truncate($fh, 10) leaves the file behind handle $fh 10 bytes long.

So to shrink a file by 10 bytes, you truncate it to its current size minus 10: a 50-byte file truncated to 40 bytes.
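As a minimal sketch (demo.dat is a hypothetical scratch file), shrinking a file by 10 bytes looks like this:

#!/usr/bin/perl -w
use strict;

# Shrink demo.dat by 10 bytes. Perl's truncate takes the NEW length,
# not the amount to cut, so we compute current size minus 10 first.
my $file = "demo.dat";
my $size = -s $file;
die "cannot stat $file\n" unless defined $size;
die "$file is smaller than 10 bytes\n" if $size < 10;

open(my $FH, "+<", $file) or die "cannot open $file: $!";
truncate($FH, $size - 10) or die "truncate failed: $!";
close($FH);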

I ran some tests, truncating in chunks of 10 MB, 100 MB, 1 MB and 5B, and found that 100 MB chunks took the least time.
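If you want to repeat that comparison, here is a rough benchmark sketch (the file is destroyed, so point it only at a scratch copy of a datafile):

#!/usr/bin/perl -w
use strict;
use Time::HiRes qw(gettimeofday tv_interval);

# Time how long shrinking a file to zero takes for a given chunk size.
# Usage: ./bench.pl <scratch-file> <chunk-bytes>
my ($file, $chunk) = @ARGV;

open(my $FH, "+<", $file) or die "open $file: $!";
my $size = -s $FH;

my $t0 = [gettimeofday];
while ($size > 0) {
    $size -= $chunk;
    $size = 0 if $size < 0;
    truncate($FH, $size) or die "truncate: $!";
}
close($FH);
printf "chunk of %d bytes: %.2f seconds\n", $chunk, tv_interval($t0);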

First, I built a list of the files to be deleted, e.g. dbfiles.lst.

Sample contents of dbfiles.lst are as below.
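For illustration only (hypothetical paths; real entries depend on your SID and sapdata layout), it might look like:

/oracle/SID/sapdata1/sr3_1/sr3.data1
/oracle/SID/sapdata1/sr3_2/sr3.data2
...
/oracle/SID/sapdata12/sr3_302/sr3.data302

One way to build such a list, assuming Oracle-style datafile names, is:

find /oracle/SID/sapdata* -type f -name "*.data*" > dbfiles.lst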

Then I wrote two scripts: a shell driver and a small Perl truncation script.


Script 1 is the shell driver. It makes up to 2048 passes over the file list; on each pass, every file still present is shrunk by one chunk in the background. Note that `du -b` prints "SIZE FILENAME", so $i2 expands to both arguments, and trunc.pl is a placeholder name for the Perl script below (the original name did not survive).

#!/bin/bash
for (( c=1; c <= 2048; c++ ))
do
    for i1 in `cat dbfiles.lst`
    do
        [ -f "$i1" ] || continue    # skip files that are already gone
        i2=`du -b $i1`              # yields "SIZE  FILENAME"
        ./trunc.pl $i2 &            # one background truncation per file
    done
    wait                            # finish this pass before the next
done



Script 2 is the Perl script (the placeholder trunc.pl above). It takes the size and file name as handed over from `du -b`, cuts one 100 MB chunk, and removes the file once it is empty.

#!/usr/bin/perl -w
use strict;

# Arguments come straight from `du -b`: current size in bytes, file name.
my ($size, $n1) = @ARGV;

# Target size: one 100 MB chunk less, clamped at zero.
my $s1 = $size - 100 * 1024 * 1024;
$s1 = 0 if $s1 < 0;

open(my $FILE, "+<", $n1) or die "cannot open $n1: $!";
truncate($FILE, $s1) or die "truncate failed on $n1: $!";
close($FILE);

unlink($n1) if $s1 == 0;   # remove the file once it is fully truncated


I executed the driver script in the background.

I monitored the deletion speed every 60 seconds.
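A simple sketch for that (the /oracle/SID/sapdata* pattern is an assumption matching the layout above; adjust it to your mount points):

while true
do
    df -h /oracle/SID/sapdata*   # watch the freed space grow
    sleep 60
done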

Hurray! The average deletion speed was around 1 TB per minute.

The whole 3 TB was gone in about 3 minutes, versus 5 hours with the traditional rm command. An unimaginable gain!

Hope this helps someone with a similar requirement.

Good luck!
