This is a tiny wrapper around the `json` module in the Python standard library, allowing you to read a JSON file incrementally.
This can be useful, for instance, when the data arrives from a slow medium or stream and you want to start processing it before the transfer is complete.
It is an alternative to ijson (written when I did not know ijson existed but, in the end, more efficient).
No extensive tests were made (if you make some, let me know), but here are the results (in seconds) obtained when opening a local file containing 384650 objects and totalling 174 MB:
|Parser|Iteration 1|Iteration 2|
|---|---|---|
|standard (non-incremental) json|9.511|9.273|
|ijson (with yajl2 backend)|62.250|64.538|
|pure Python jsaone|421.641|421.821|
These results were obtained with the script `tests/json_load_test.py`.
Clearly, these numbers are affected by the speed of the CPU and of the medium/stream. In particular, since the test was run on a file from a local hard disk, the bottleneck was clearly the CPU, which puts incremental parsers (including jsaone) at a disadvantage. If the bottleneck is instead the medium/stream, jsaone should even outperform the standard `json` module, which only starts processing after the entire stream has been received.
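As a rough illustration of how such timings can be measured (a minimal sketch using only the standard library; the actual `tests/json_load_test.py` script may differ), one can generate a synthetic JSON document and time the non-incremental parser:

```python
import json
import time

# Build a synthetic JSON document: a list of many small objects.
# (The original benchmark used a real 174 MB file; this is only a sketch.)
data = [{"id": i, "value": i * 2} for i in range(100_000)]
text = json.dumps(data)

start = time.perf_counter()
parsed = json.loads(text)  # non-incremental: parses everything at once
elapsed = time.perf_counter() - start

print(f"parsed {len(parsed)} objects in {elapsed:.3f} s")
```

The same loop can then be repeated with an incremental parser on the same input to compare wall-clock times.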
Why the name? Because it sounds similar to “json”… but the Saône is a (large) stream.
To build the extension in place, run

```shell
python3 setup.py build_ext --inplace
```

(replace `python3` with `python` if you are using Python 2).
```python
import jsaone

with open('/path/to/my/file.json') as f:
    gen = jsaone.load(f)
    for key, val in gen:
        ...
```
You can browse the git repo online or clone it with

```shell
git clone git://pietrobattiston.it/jsaone
```
For bugs and enhancements, just write to me - firstname.lastname@example.org - ideally pointing to a git branch that fixes the issue or provides the enhancement.
jsaone should be able to parse any compliant JSON string… so if you find one on which it fails, please let me know!
Released under the GPL 3. Feel free to contact me if this is a problem for you (and GPL 2 is not).