Out of curiosity, what database are you using to store the data?
By default it writes metadata about the stream (title, description, etc.) to a file-based DB called nedb, and it appends the actual logged data to CSV files that are split into 500k chunks. When the user requests their logged data, all of the files are stitched back together, converted into the requested format (JSON, CSV, etc.), and streamed to the user's web client.
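Roughly, the stitch-and-stream step looks something like this (a simplified sketch, not our actual code; the logs/<id>/chunk-*.csv layout, route path, and port are all illustrative assumptions):

```typescript
import { createReadStream } from "node:fs";
import { readdir } from "node:fs/promises";
import { join } from "node:path";
import express from "express";

const app = express();

app.get("/streams/:id/data.csv", async (req, res) => {
  const dir = join("logs", req.params.id);
  // Chunk files are named so lexicographic order matches write order;
  // assume they hold data rows only (no per-file header to dedupe).
  const chunks = (await readdir(dir)).filter((f) => f.endsWith(".csv")).sort();

  res.setHeader("Content-Type", "text/csv");
  for (const file of chunks) {
    // Pipe each chunk into the response in sequence, so memory use stays
    // flat no matter how much data has been logged.
    await new Promise<void>((resolve, reject) => {
      const src = createReadStream(join(dir, file));
      src.on("error", reject);
      src.on("end", resolve);
      src.pipe(res, { end: false }); // keep the response open between chunks
    });
  }
  res.end();
});

app.listen(3000);
```

Because each chunk is piped rather than buffered, the server never has to hold a full log in memory, which is the whole point of splitting the files in the first place.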
For the production server, we are currently using MongoDB for metadata storage and the same CSV module for logged data storage.
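The metadata side of that swap is straightforward; here's a minimal sketch using the official Node driver (the connection string, db/collection names, and fields are illustrative assumptions):

```typescript
import { MongoClient } from "mongodb";

interface StreamMeta {
  _id: string; // stream id
  title: string;
  description: string;
  updatedAt: Date;
}

const client = new MongoClient("mongodb://localhost:27017");

async function saveStreamMetadata(id: string, title: string, description: string) {
  await client.connect(); // idempotent in driver v4+
  const streams = client.db("logger").collection<StreamMeta>("streams");
  // Upsert so re-registering a stream updates its metadata in place;
  // on insert, the _id comes from the filter.
  await streams.updateOne(
    { _id: id },
    { $set: { title, description, updatedAt: new Date() } },
    { upsert: true }
  );
}
```

Since nedb's API is modeled on MongoDB's, swapping the metadata store between the two is mostly a matter of configuration, which is why the logged-data path could stay on the same CSV module.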