How to extract and manipulate CSV data from the command line.
Thanks a lot! Such a concise, "let's get started without talking long" video helps me a lot to learn the CLI efficiently!
I recommend the csv module in python.
It is really easy to use and handles fields with embedded newlines, quotes, and commas with flying colors!
Give it a shot some time, you won't be disappointed :).
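To make the recommendation concrete, here's a minimal sketch of calling the csv module from a shell pipeline; the sample line and the field index are made up for illustration:

```shell
# Parse a line whose second field has an embedded comma; the csv
# module gets it right where a naive comma split would break the field.
# (Sample data and field index are hypothetical.)
printf '%s\n' 'Smith, "Doe, Jane", jane@example.com' |
python3 -c '
import csv, sys
for row in csv.reader(sys.stdin, skipinitialspace=True):
    print(row[1])
'
# prints: Doe, Jane
```

`skipinitialspace=True` is needed here only because the sample has a space after each separator; without that space you can drop it.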
Most CSV files I get these days have quoted fields that contain commas inside the quotes.
You'd better be very sure of your CSV contents if you're going to use cut like this.
It'll fail on any CSV file whose quoted fields contain commas.
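A hypothetical one-line sample shows the failure mode: the second field is quoted and contains a comma, so cut's naive comma split lands in the middle of it.

```shell
# cut counts the comma inside the quotes as a separator,
# so "field 2" is only half a field, with a stray quote.
printf '%s\n' '1,"Last, First",x@example.com' | cut -d, -f2
# prints: "Last
```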
My original thinking was to use grep:
grep -i '[^,]*@[^,]*' stuff.csv ...
to extract the email but I decided on cut to simplify the "look" of it a bit for the video.
Of course your sample doesn't include any emails so that wouldn't work... so you'd have to adapt/change it to extract the right field.
A perhaps more robust method is to convert the CSV file to use tabs rather than commas. With tabs, cut should work most of the time. You might still have quoted tabs, but unless your CSV came from a database dump that seems unlikely.
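A sketch of that tab-conversion idea, using a real CSV parser for the conversion step so quoted commas survive (a plain `s/,/TAB/` substitution would reintroduce the original problem); the sample data is made up:

```shell
# Re-emit the CSV with tab delimiters via Python's csv module,
# then cut happily splits on the tabs.
printf '%s\n' 'a,"b,c",d' |
python3 -c '
import csv, sys
out = csv.writer(sys.stdout, delimiter="\t")
for row in csv.reader(sys.stdin):
    out.writerow(row)
' | cut -f2
# prints: b,c
```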
Mitch Frazier is an Associate Editor for Linux Journal.
That grep will identify the matching lines, but won't parse out the values. And yes, I deal with tons of huge CSV files that go in and out of databases, so it's important that I get my scripts right.
The only thing I've found that comes close to what I need is http://perlmeme.org/tutorials/parsing_csv.html -- so I'm in the process of rewriting that in to something bash can invoke from inside shell scripts.
Forgot the -o option:
grep -o -i '[^,]*@[^,]*' stuff.csv ...
that will show you only the part that matched.
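Here's what that looks like on a made-up sample line:

```shell
# -o prints only the text that matched the pattern, not the whole line,
# so the email field falls out on its own (sample data is hypothetical).
printf '%s\n' 'Jane,Doe,jane@example.com,NY' | grep -o -i '[^,]*@[^,]*'
# prints: jane@example.com
```

As the next reply points out, this presumes knowledge of the data layout rather than real CSV parsing.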
Sure, that'll extract an email address, but it doesn't fit any accepted definition of "parsing": it assumes you have prior knowledge of what's in the field, and it can't properly handle lines with more than one @.
You're handling strings of text whose formatting you can predict, but you are certainly not using techniques that are transferable to the general case of "I need to parse a CSV file."
You're 100% right: neither cut nor grep is a generalized solution for parsing CSV files, but then again I never said they were. As far as I know, I never used the word "parse"; that was your word.
What I meant to say was that grep is both a floor wax and a dessert topping.
GNU sort has a --unique (-u) option that sorts and checks for uniqueness in the same pass. So you can just use "sort -uf" or "sort --ignore-case --unique", skipping uniq altogether.
sort --ignore-case --unique
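For example, with three lines where two differ only in case, -f folds case for the comparison and -u then emits a single representative of the pair:

```shell
# Two case-variants of "apple" collapse to one line; banana stays.
# (Which spelling survives is up to sort, so don't depend on it.)
printf '%s\n' 'Apple' 'apple' 'banana' | sort --ignore-case --unique
# two lines out: one apple variant, then banana
```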
Yes it does, and that's why I always say one should re-RTFM once in a while to refresh one's memory and check for new features.
I might also add that one shouldn't jump to the conclusion that the -u option makes uniq obsolete: uniq has some other useful options that come in handy once in a while. So re-RTFM applies to uniq also.
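Two of those uniq options that sort -u can't replicate, on a made-up three-line input:

```shell
# -c prefixes each distinct line with its occurrence count;
# -d prints only the lines that occur more than once.
printf '%s\n' 'a' 'a' 'b' | sort | uniq -c    # counts per line
printf '%s\n' 'a' 'a' 'b' | sort | uniq -d
# the -d pipeline prints: a
```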
I'm glad somebody mentioned this; I found out about that a couple of months ago. Much better than having to pipe to yet another process :D