How can I use Python to transform MongoDB's bsondump into JSON?


So I have an enormous quantity of .bson files from a MongoDB dump. I am running bsondump on the command line and piping its output to Python as stdin. This successfully converts from BSON to 'JSON', but the result is a string, and seemingly not legal JSON.

For example an incoming line looks like this:

{ "_id" : ObjectId( "4d9b642b832a4c4fb2000000" ),
  "acted_at" : Date( 1302014955933 ),
  "created_at" : Date( 1302014955933 ),
  "updated_at" : Date( 1302014955933 ),
  "_platform_id" : 3,
  "guid" : 72106535190265857 }

Which I believe is Mongo Extended JSON.

When I read in such a line and do:

json_line = json.dumps(line)

I get:

"{ \"_id\" : ObjectId( \"4d9b642b832a4c4fb2000000\" ),
\"acted_at\" : Date( 1302014955933 ),
\"created_at\" : Date( 1302014955933 ),
\"updated_at\" : Date( 1302014955933 ),
\"_platform_id\" : 3,
\"guid\" : 72106535190265857 }\n"

Which is still <type 'str'>.
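(As an aside on why that happens: json.dumps serialises a Python object *to* JSON text, so handing it a str just produces an escaped JSON string literal; json.loads is the parsing direction. A quick illustration with plain valid JSON:)

```python
import json

line = '{ "a" : 1 }'

# dumps: Python object -> JSON text; a str input is simply re-quoted
dumped = json.dumps(line)
print(type(dumped))  # still a str, now with escaped quotes inside

# loads: JSON text -> Python object; this is the direction needed here
parsed = json.loads(line)
print(type(parsed))  # a dict
```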

I have also tried

json_line = json.dumps(line, default=json_util.default)

(see PyMongo's json_util module), which seems to output the same as dumps above. loads gives an error:

json_line = json.loads(line, object_hook=json_util.object_hook)
ValueError: No JSON object could be decoded

So, how can I transform the string of TenGen JSON into parseable JSON? (the end goal is to stream tab separated data to another database)

8/8/2012 5:59:29 PM

Accepted Answer

What you have is a dump in Mongo Extended JSON in TenGen mode (see here). Some possible ways to go:

  1. If you can dump again, use Strict output mode through the MongoDB REST API. That should give you real JSON instead of what you have now.

  2. Use bson from to read the BSON you already have into Python data structures and then do whatever processing you need on those (possibly outputting JSON).

  3. Use the MongoDB Python bindings to connect to the database to get the data into Python, and then do whatever processing you need. (If needed, you could set up a local MongoDB instance and import your dumped files into that.)

  4. Convert the Mongo Extended JSON from TenGen mode to Strict mode. You could develop a separate filter to do it (read from stdin, replace TenGen structures with Strict structures, and output the result on stdout) or you could do it as you process the input.

Here's an example using Python and regular expressions:

import json
import re

from bson import json_util

with open("data.tengenjson", "rb") as f:
    # read the entire input; in a real application,
    # you would want to read a chunk at a time
    bsondata = f.read().decode("utf-8")

    # convert the TenGen JSON to Strict JSON;
    # here, I just convert the ObjectId and Date structures,
    # but it's easy to extend to cover all the structures
    # listed in the Mongo Extended JSON documentation
    jsondata = re.sub(r'ObjectId\s*\(\s*"(\S+)"\s*\)',
                      r'{"$oid": "\1"}',
                      bsondata)
    jsondata = re.sub(r'Date\s*\(\s*(\S+)\s*\)',
                      r'{"$date": \1}',
                      jsondata)

    # now we can parse this as JSON, and use MongoDB's object_hook
    # function to get rich Python data structures inside a dictionary
    data = json.loads(jsondata, object_hook=json_util.object_hook)

    # just print the output for demonstration, along with the type
    print(data)
    print(type(data))

    # serialise back to JSON and print
    print(json_util.dumps(data))

Depending on your goal, one of these should be a reasonable starting point.
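Toward the question's stated end goal (streaming tab-separated data to another database): once each document is a Python dict, csv.DictWriter with a tab delimiter does the rest. The field names below are taken from the sample document in the question; adjust them for your real data:

```python
# Emit parsed documents as TSV; field names assumed from the sample document.
import csv

FIELDS = ["_id", "acted_at", "created_at", "updated_at", "_platform_id", "guid"]

def write_tsv(docs, out):
    """Write an iterable of dicts to `out` as tab-separated rows."""
    writer = csv.DictWriter(out, fieldnames=FIELDS, delimiter="\t",
                            extrasaction="ignore")  # drop unexpected keys
    writer.writeheader()
    for doc in docs:
        writer.writerow(doc)
```

Values are converted with str(), so ObjectId and datetime instances come out in their usual string forms.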

8/10/2012 8:56:50 AM

Loading an entire BSON document into Python memory is expensive.

If you want to stream it in rather than loading the whole file and parsing it all at once, you can try this library.

from bsonstream import KeyValueBSONInput
from sys import argv

for file in argv[1:]:
    f = open(file, 'rb')
    # remove fast_string_prematch if not needed
    stream = KeyValueBSONInput(fh=f, fast_string_prematch="something")
    for id, dict_data in stream:
        if id:
            ...  # process dict_data here

Licensed under: CC-BY-SA with attribution
Not affiliated with: Stack Overflow