I am testing changes to rsync and evaluating several file systems to run it on.
What is the recognized way that research papers in this area generate test data?
I was halfway through writing a script that would generate 500-3000 GB of data, taking into account compressible data, incompressible data, sparse files, big files, etc., roughly along the lines of the sketch below.
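For reference, this is a minimal sketch of the kind of generator I had in mind (Python 3, standard library only; the sizes, file names, and output directory are placeholders to be scaled up toward the real target):

#!/usr/bin/env python3
"""Rough sketch of a test-data generator for rsync/filesystem benchmarks."""
import os
import pathlib

CHUNK = 1024 * 1024  # write in 1 MiB chunks


def write_compressible(path: pathlib.Path, size: int) -> None:
    """Low-entropy data: a repeating text pattern compresses very well."""
    base = b"The quick brown fox jumps over the lazy dog. "
    pattern = (base * (CHUNK // len(base) + 1))[:CHUNK]
    with open(path, "wb") as f:
        remaining = size
        while remaining > 0:
            n = min(remaining, CHUNK)
            f.write(pattern[:n])
            remaining -= n


def write_incompressible(path: pathlib.Path, size: int) -> None:
    """High-entropy data from os.urandom barely compresses at all."""
    with open(path, "wb") as f:
        remaining = size
        while remaining > 0:
            n = min(remaining, CHUNK)
            f.write(os.urandom(n))
            remaining -= n


def write_sparse(path: pathlib.Path, size: int) -> None:
    """Sparse file: seek past a large hole and write one byte, so the
    apparent size is `size` but almost no blocks are allocated."""
    with open(path, "wb") as f:
        f.seek(size - 1)
        f.write(b"\0")


def main() -> None:
    root = pathlib.Path("testdata")  # placeholder output directory
    root.mkdir(exist_ok=True)

    mib = 1024 * 1024
    write_compressible(root / "compressible.dat", 64 * mib)
    write_incompressible(root / "incompressible.dat", 64 * mib)
    write_sparse(root / "sparse.dat", 1024 * mib)
    # "Big file": same generators, just a larger size target.
    write_incompressible(root / "big_random.dat", 512 * mib)


if __name__ == "__main__":
    main()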
But surely something like that must already exist.