Joachim Rees

Using GNU tools to quickly check your input files – duplicate lines

I’m currently doing data migration in a specific problem domain, but I think what I’m sharing here can be applied very generically.

 

The checks you usually do on an input file can be supported (and automated) by small GNU Tools.

 

As an example, it was agreed that the input file will not contain duplicates – let’s check if it does.

Also, it’s always a good idea to know how many lines you are dealing with, so that is where we start:

 

1. Get the number of lines:

cat [filename] | wc -l

 

2. Get the number of unique lines:

cat [filename] | uniq | wc -l

 

(If the numbers are the same, there are no adjacent duplicates)
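For a quick illustration (test.txt is just a made-up sample name here), say the file contains:

111
22
111

Then cat test.txt | uniq | wc -l still reports 3, because the two 111 lines are not adjacent – sorting first (step 3) is what brings them together.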

 

3. Maybe we have duplicates spread over the file (-> so they are non-adjacent)? Let’s check:

sort [filename] | uniq | wc -l
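(Side note: sort -u [filename] | wc -l should give the same count in one step, since sort -u removes duplicate lines while sorting.)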

 

4. If we find duplicates, we want to give qualified feedback, like: what were the duplicate lines (-d) and how often do they appear in the file (-c):

 

sort [filename] | uniq -dc > duplicate_lines_please_check.txt
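With the small test.txt sample from above, the resulting file would contain something like this (the exact width of the count column may vary):

2 111

i.e. the line 111 appears 2 times in the input.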

 

Explanation:

uniq reads the input and writes it to standard output, collapsing adjacent duplicate lines into one.

-d will give (only) the duplicated lines – one line per group of duplicates

-D will give ALL the duplicate lines (every single occurrence)

-c also adds a count of how often each line appears.
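To make the difference between -d and -D concrete, again with the small test.txt sample from above:

sort test.txt | uniq -d
111

sort test.txt | uniq -D
111
111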

 

[Edit:

 

Another option that might be useful is

-u -> only output lines that are unique (not repeated) in the input file

 

-> I’ve assumed that if there are duplicate lines in the input file, one of them is a good one (wanted) and only its copies are bad ones (unwanted).

 

So if the file, for example, contains:

 

111

111

22

 

My output (from sort [filename] | uniq) would be

111

22

 

Another approach might be: if a line is duplicated at all, it’s bad for sure. In that case you would only want one line from the example:

22

 

In this case, the -u option is just what you want: it makes sure that only the lines that are unique in the input file (i.e. appear exactly once) are kept.
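So for this stricter check, the command would be something like this (the output file name is of course just a suggestion):

sort [filename] | uniq -u > strictly_unique_lines.txt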

]


      4 Comments
      Christian Drumm

      Hi Joachim,

      great blog. Lots of people in the SAP world don't know or don't use the standard Unix tools. It's definitely a good idea to raise awareness.

      One tool I'd add to the list of must-know Unix tools is sed (https://en.wikipedia.org/wiki/Sed). Especially when you need to manipulate large (migration) files, sed becomes very handy.

      Christian

      Joachim Rees (Blog Post Author)

      Thanks for your feedback, Christian!

      Maybe the next time you solve a problem with sed, I'll read your blog about it!? 😉

      best

      Joachim

      Joachim Rees (Blog Post Author)

      I wrote a follow-up, showcasing one small aspect of the awk tool:

      GNU Tools for checking input files: using awk to check for duplicate keys

      Joachim Rees (Blog Post Author)

      Those GNU tools, I still like them and the possibilities they offer when working with text files. 😉

      ...and I do like text files!