Sort (Link Bibliography)

“Sort” links:

  1. DNM-archives

  2. “Notes on a New Philosophy of Empirical Science”⁠, Daniel Burfoot (2011-04-28):

    This book presents a methodology and philosophy of empirical science based on large scale lossless data compression. In this view a theory is scientific if it can be used to build a data compression program, and it is valuable if it can compress a standard benchmark database to a small size, taking into account the length of the compressor itself. This methodology therefore includes an Occam principle as well as a solution to the problem of demarcation. Because of the fundamental difficulty of lossless compression, this type of research must be empirical in nature: compression can only be achieved by discovering and characterizing empirical regularities in the data. Because of this, the philosophy provides a way to reformulate fields such as computer vision and computational linguistics as empirical sciences: the former by attempting to compress databases of natural images, the latter by attempting to compress large text databases. The book argues that the rigor and objectivity of the compression principle should set the stage for systematic progress in these fields. The argument is especially strong in the context of computer vision, which is plagued by chronic problems of evaluation.

    The book also considers the field of machine learning. Here the traditional approach requires that the models proposed to solve learning problems be extremely simple, in order to avoid overfitting. However, the world may contain intrinsically complex phenomena, which would require complex models to understand. The compression philosophy can justify complex models because of the large quantity of data being modeled (if the target database is 100 Gb, it is easy to justify a 10 Mb model). The complex models and abstractions learned on the basis of the raw data (images, language, etc.) can then be reused to solve any specific learning problem, such as face recognition or machine translation.

  3. ⁠, Saaru Lindestøkke (2021-03-15):

    I’m compressing ~1.3 GB folders each filled with 1440 JSON files and find that there’s a 15-fold difference between using the tar command on macOS or Raspbian 10 (Buster) and using Python’s built-in tarfile library…The output is:

    • zsh tar filesize: 23.7 MB
    • py tar filesize: 1.49 MB

    …The zsh archive uses an unknown order, and the Python archive orders the files by modification date. I am not sure if that matters…EDIT: Ok, I think I found the issue: BSD tar and GNU tar without any sort options put the files in the archive in an undefined order… I think the reason sorting has such an impact is as follows:

    My JSON files contain measurements from hundreds of sensors. Every minute I read out all sensors, but only a few of these sensors have a different value from minute to minute. By sorting the files by name (which begins with the creation time), two consecutive files differ in only a few characters. Apparently this is very favourable for compression efficiency.

    (A short Python sketch reproducing this ordering effect follows the list below.)

  4. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.478.96&rep=rep1&type=pdf

  5. http://linux.die.net/man/1/detox

  6. Silk-Road

  7. http://neoscientists.org/~tmueller/binsort/

  8. https://old.reddit.com/r/SilkRoad/comments/1yzndq/a_trip_down_memory_lane_vladimir_limetless_and/

  9. http://ck.kolivas.org/apps/lrzip/

  10. https://wiki.archlinux.org/index.php/Lrzip

  11. https://stackoverflow.com/questions/357560/sorting-multiple-keys-with-unix-sort

  12. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.473.7179&rep=rep1&type=pdf
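The tar-ordering observation excerpted in item 3 can be reproduced in a few lines with Python’s built-in tarfile module. The following is a minimal sketch rather than the questioner’s actual script: the `./data` directory, the `.json` filter, the helper name `tar_gz_size`, and the choice of gzip compression are all assumptions for illustration, and how much sorting helps depends on the compressor’s window size and on how similar adjacent files really are.

    import io
    import os
    import tarfile

    def tar_gz_size(paths):
        """Size in bytes of an in-memory .tar.gz containing `paths` in the given order."""
        buf = io.BytesIO()
        with tarfile.open(fileobj=buf, mode="w:gz") as tar:
            for p in paths:
                tar.add(p, arcname=os.path.basename(p))
        return len(buf.getvalue())

    data_dir = "./data"  # hypothetical directory of minute-by-minute sensor JSON files
    files = [os.path.join(data_dir, f) for f in os.listdir(data_dir) if f.endswith(".json")]

    # os.listdir() returns names in arbitrary order, much like BSD/GNU tar without a
    # sort option; sorting by name (which here begins with the creation time) places
    # near-identical files next to each other, so the compressor sees long runs of
    # repeated content.
    print("unsorted:", tar_gz_size(files))
    print("sorted:  ", tar_gz_size(sorted(files)))

On data like the sensor logs described in item 3, the sorted archive should come out markedly smaller, since consecutive members differ in only a few characters.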