Lazy Method for Reading Big File in Python?


I have a very big file (4 GB), and when I try to read it my computer hangs. So I want to read it piece by piece, and after processing each piece, store the processed piece into another file and then read the next piece.

Is there any method to yield these pieces?

I would love to have a lazy method.

2/6/2009 9:25:01 AM

Accepted Answer

To write a lazy function, just use yield:

def read_in_chunks(file_object, chunk_size=1024):
    """Lazy function (generator) to read a file piece by piece.
    Default chunk size: 1k."""
    while True:
        data = file_object.read(chunk_size)
        if not data:
            break
        yield data

f = open('really_big_file.dat')
for piece in read_in_chunks(f):
    process_data(piece)

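The question also asks about writing each processed piece to another file. A minimal, runnable sketch of that loop, using a stand-in input file and a hypothetical processing step (here just upper-casing the bytes), might look like this:

```python
def read_in_chunks(file_object, chunk_size=1024):
    """Yield successive fixed-size chunks until EOF."""
    while True:
        data = file_object.read(chunk_size)
        if not data:
            break
        yield data

# create a small stand-in for 'really_big_file.dat' so the sketch runs
with open('really_big_file.dat', 'wb') as f:
    f.write(b'abc' * 1000)

# hypothetical processing step: upper-case each chunk before writing it out
with open('really_big_file.dat', 'rb') as src, \
     open('processed_file.dat', 'wb') as dst:
    for piece in read_in_chunks(src):
        dst.write(piece.upper())
```

Because each chunk is written out before the next one is read, memory use stays bounded by the chunk size regardless of how large the input file is.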
Another option would be to use iter and a helper function:

f = open('really_big_file.dat')
def read1k():
    return f.read(1024)

for piece in iter(read1k, ''):
    process_data(piece)

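In Python 3, opening the file in binary mode and pairing iter with functools.partial avoids the helper function entirely; note the sentinel is then b'' rather than ''. A self-contained sketch (using a small sample file so it runs as-is):

```python
from functools import partial

# create a small sample file so the sketch runs
with open('really_big_file.dat', 'wb') as f:
    f.write(b'x' * 2500)

# iter(callable, sentinel) calls f.read(1024) until it returns b'' at EOF
with open('really_big_file.dat', 'rb') as f:
    chunks = []
    for piece in iter(partial(f.read, 1024), b''):
        chunks.append(len(piece))

print(chunks)  # three reads: 1024, 1024, 452
```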
If the file is line-based, the file object is already a lazy generator of lines:

for line in open('really_big_file.dat'):
    process_data(line)
2/6/2009 9:30:56 AM

If your computer, OS, and Python are 64-bit, then you can use the mmap module to map the contents of the file into memory and access it with indices and slices. Here is an example from the documentation:

import mmap
with open("hello.txt", "r+b") as f:
    # memory-map the file, size 0 means whole file
    mm = mmap.mmap(f.fileno(), 0)
    # read content via standard file methods
    print(mm.readline())  # prints b"Hello Python!\n"
    # read content via slice notation
    print(mm[:5])  # prints b"Hello"
    # update content using slice notation;
    # note that new content must have same size
    mm[6:] = b" world!\n"
    # ... and read again using standard file methods
    mm.seek(0)
    print(mm.readline())  # prints b"Hello  world!\n"
    # close the map
    mm.close()

If your computer, OS, or Python is 32-bit, then mmap-ing large files can reserve large parts of your address space and starve your program of memory.
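A quick way to check whether the running interpreter is a 64-bit build (and thus has the address space for large maps) is to inspect sys.maxsize:

```python
import sys

# True on a 64-bit Python build, False on a 32-bit build
is_64bit = sys.maxsize > 2**32
print(is_64bit)
```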

Licensed under: CC-BY-SA with attribution
Not affiliated with: Stack Overflow