I, like most people, never realized I'd be dealing with large files.
Oh, I knew there would be some files with megabytes of data, but
I never suspected I'd be begging Perl to process hundreds of megabytes
of XML, nor that this week I'd be asking Python to process 6.4 gigabytes
of CSV into 6.5 gigabytes of XML[1].
As a few out-of-memory experiences will teach you, the trick for
dealing with large files is pretty easy: use code that treats
everything as a stream.
For inputs, read from disk in chunks. For outputs,
frequently write to disk and let system memory forge onward unburdened.
When reading and writing files yourself, this is easy enough to do correctly:
from __future__ import with_statement  # for python 2.5
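# a sketch of the chunked read/write pattern described above; the
# filenames and chunk size here are illustrative, not from my actual script
with open('data.in', 'rb') as in_file:
    with open('data.out', 'wb') as out_file:
        while True:
            chunk = in_file.read(1024 * 1024)  # read a megabyte at a time
            if not chunk:
                break
            out_file.write(chunk)  # write as you go, so memory stays flat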
Python has an excellent csv library, which can handle
large files right out of the box. Sort of.
>>> import csv
>>> r = csv.reader(open('doc.csv', 'rb'))
>>> for row in r:
...     print row
...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
_csv.Error: field larger than field limit (131072)
Staring at the module documentation[2], I couldn't find anything
of use. So I cracked open the csv.py file and confirmed what the _csv
in the error message suggests: the bulk of the module's code (and the
input parsing in particular) is implemented in C rather than Python.
After a while staring at that error, I began dreaming of
how I would create a stream pre-processor using StringIO,
but it didn't take too long to figure out I would need to recreate
my own version of csv in order to accomplish that.
So back to the blogs, one of which held the magic
grain of information I was looking for: csv.field_size_limit.
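The call looks something like this (the exact limit you pass is your call; sys.maxint is just the bluntest option):

import csv
import sys

# raise the per-field limit from the default of 131072 bytes
csv.field_size_limit(sys.maxint)

r = csv.reader(open('doc.csv', 'rb'))
for row in r:
    print row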
Yep. That's all there is to it. The sucker just works after that.
Well, almost. I did run into an issue with a NULL byte 1.5 gigs into
the data. Because the streaming code is written using C-based IO, the
NULL byte shorts out the reading of data in an abrupt and non-recoverable
manner. To get around this we need to pre-process the stream somehow,
which you could do in Python by wrapping the file with a custom class
that cleans each line before returning it, but I went with some
command line utilities for simplicity.
cat data.in | tr -d '\0' > data.out
After that, the 6.4 gig CSV file processed without any issues.
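If you'd rather stay in Python, a small generator wrapper along the lines of this sketch (not what I actually ran) does the same job, since csv.reader will accept any iterable of lines:

def strip_nulls(lines):
    # strip NULL bytes from each line before the csv module ever sees them
    for line in lines:
        yield line.replace('\0', '')

r = csv.reader(strip_nulls(open('data.in', 'rb')))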
Creating Large XML Files in Python
This part of the process, taking each row of CSV and converting
it into an XML element, went fairly smoothly thanks to the
xml.sax.saxutils.XMLGenerator class. The API for creating
elements isn't an example of simplicity, but it is--unlike many
of the more creative schemes--predictable, and has one killer
feature: it correctly writes output to a stream.
As I mentioned, the mechanism for creating elements was a bit
verbose, so I made a couple of wrapper functions to simplify things
(note that I am sending output to standard out, which lets me
simply print strings to the file I am generating, for example
creating the XML file's version declaration).
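My wrappers aren't worth reproducing in full, but their shape was roughly this sketch (the helper names and element handling are illustrative rather than verbatim):

import sys
from xml.sax.saxutils import XMLGenerator
from xml.sax.xmlreader import AttributesImpl

# the generator writes straight to the stream, so nothing piles up in memory
xml = XMLGenerator(sys.stdout, 'utf-8')
print '<?xml version="1.0" encoding="utf-8"?>'

def start(name, attrs=None):
    # startElement insists on an Attributes object, even an empty one
    xml.startElement(name, AttributesImpl(attrs or {}))

def element(name, content, attrs=None):
    # open an element, write its text, and close it again in one call
    start(name, attrs)
    xml.characters(content)
    xml.endElement(name)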
The one issue I did run into (in my data) was a scattering of pagebreak characters (^L, aka character 12, aka \x0c) that were tweaking the XML encoder, but you can strip them out in a variety of places, for example by rewriting the main loop along these lines:
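# a sketch using the strip_nulls and element helpers from the earlier snippets;
# the 'rows'/'row'/'field' element names are made up for illustration
start('rows')
for row in csv.reader(strip_nulls(open('data.in', 'rb'))):
    start('row')
    for field in row:
        element('field', field.replace('\x0c', ''))  # drop pagebreak characters
    xml.endElement('row')
xml.endElement('rows')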
Really, the XMLGenerator just worked, even when dealing
with quite a large file.
Although my script created a different mix of XML elements
than the above example, it wasn't any more complex, and had
fairly reasonable performance. Processing the 6.4 gig CSV
file into a 6.5 gig XML file took between 19 and 24 minutes,
which means it was able to read-process-write about five
megabytes per second.
In terms of raw speed, that isn't particularly epic, but when I performed
a similar operation (it was actually XML to XML rather than CSV to XML)
with Perl's XML::Twig, it took eight minutes to process a ~100
megabyte file, so I'm pretty pleased with the quality of the Python
standard library and how it handles large files.
The breadth and depth of the standard library really makes Python a joy to work with for these simple one-shot scripts.
If only it had Perl's easier-to-use regex syntax...
[1] This is a peculiarity of data that makes it different from media:
data files can--on a large enough system--grow essentially without limit.
Media files, on the other hand, can be extremely dense (a couple of gigs
for a high quality movie), but they conform to predictable limits.
If you are dealing with large files, you're probably
dealing with a company's logs from the last decade or
the entire dump of their MySQL database.
[2] I really want to like the new Python documentation.
I mean, it certainly looks much better, but I think it has made
it harder to actually find what I'm looking for.
I think they've hit the same stumbling block as the Django
documentation: the more you customize your documentation,
the greater the learning curve for using your documentation.
I think the big thing that gives me trouble is just the incompleteness
of the documentation. They certainly cover all the important and
frequently used components (along with helpful overviews and examples),
but the new docs often don't even mention the less important methods
and functions.
For the time being, I am throwing around a lot more dir() calls than I would like.