joachimrees1
Active Contributor


I’m currently doing data migration in a specific problem domain, but I think what I’m sharing here can be applied very generically.

 

The checks you usually do on an input file can be supported (and automated) by small GNU tools.

 

As an example, it was agreed that the input file must not contain duplicates - let's check whether it does.

Also, it's always a good idea to know how many lines you are dealing with, so that is where we start:

 

1. Get the number of lines:

cat [filename] | wc -l
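
For example, with a hypothetical input file named input.txt containing 1000 records:

cat input.txt | wc -l
1000

(Note that wc -l counts newline characters, so a final line without a trailing newline is not counted.)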

 

2. Get the number of unique lines:

cat [filename] | uniq | wc -l
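
A quick way to see that uniq alone only looks at adjacent lines (printf is used here just to make the demo self-contained):

printf '111\n22\n111\n' | uniq | wc -l
3

The duplicated 111 survives because its two copies are not next to each other.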

 

(If the numbers are the same, there are no adjacent duplicates)

 

3. Maybe we have duplicates spread over the file (i.e. non-adjacent ones)? Let's check:

sort [filename] | uniq | wc -l
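
The same demo as above, now with sort first:

printf '111\n22\n111\n' | sort | uniq | wc -l
2

sort brings the two copies of 111 together, so uniq can collapse them.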

 

4. If we find duplicates, we want to give qualified feedback: what were the duplicate lines (-d) and how often do they appear in the file (-c):

 

sort [filename] | uniq -dc > duplicate_lines_please_check.txt
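
With the hypothetical input.txt from the demos above (111, 22, 111), the report would contain something like this (GNU uniq left-pads the count):

cat duplicate_lines_please_check.txt
      2 111

i.e. the line 111 occurs 2 times.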

 

Explanation:

uniq writes the unique lines of the input file to standard output.

-d prints only the duplicated lines - one line per group of duplicates

-D prints ALL the duplicated lines (every occurrence)

-c also prefixes each line with a count of how often it appears.
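
To make the difference between -d and -D concrete (again a self-contained printf demo; note that -D is a GNU coreutils extension):

printf '111\n111\n22\n' | uniq -d
111

printf '111\n111\n22\n' | uniq -D
111
111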

 

[Edit:

 

Another option that might be useful is

-u -> only print lines that are unique in the input file (i.e. lines that appear exactly once)

 

-> Above, I've assumed that if there are duplicate lines in the input file, one of them is a good one (wanted) and only its duplicates are bad ones (unwanted).

 

So if the file, for example, contains:

 

111

111

22

 

My output (e.g. from sort [filename] | uniq) would be:

111

22

 

Another approach might be: if a line is duplicated at all, it's certainly bad, and you would want only one line from the example:

22

 

In this case, the -u option is just what you want: it makes sure that only the lines which are unique in the input file are considered.
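
A minimal sketch of that variant:

sort [filename] | uniq -u

With the example file above (111, 111, 22), this prints only:

22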

]
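
Putting it all together, here is a minimal sketch of a check script, assuming a POSIX shell and GNU coreutils (the script name and the report file name are just illustrative):

#!/bin/sh
# check_input.sh - run the duplicate checks from this post against one file.
f="$1"

echo "total lines:     $(wc -l < "$f")"
echo "adjacent-unique: $(uniq "$f" | wc -l)"
echo "unique overall:  $(sort "$f" | uniq | wc -l)"

# Write the qualified report: each duplicated line with its occurrence count.
sort "$f" | uniq -dc > duplicate_lines_please_check.txt
echo "duplicate report written to duplicate_lines_please_check.txt"

Usage would be, for example: sh check_input.sh input.txt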
