Holy-wars (Link Bibliography)

“Holy-wars” links:

  1. #bitrot

  2. #bitcreep

  3. “Holy Wars” (The Jargon File), Eric S. Raymond (2003-12-29):

    holy wars: n.

    [from Usenet, but may predate it; common] flame wars over religious issues. The paper by Danny Cohen that popularized the terms big-endian and little-endian in connection with the LSB-first/MSB-first controversy was entitled On Holy Wars and a Plea for Peace.

    Great holy wars of the past have included ITS vs. Unix, Unix vs. VMS, BSD Unix vs. System V, C vs. Pascal, C vs. FORTRAN, etc. In the year 2003, popular favorites of the day are KDE vs. GNOME, vim vs. elvis, Linux vs. [Free|Net|Open]BSD. Hardy perennials include EMACS vs. vi, my personal computer vs. everyone else’s personal computer, ad nauseam. The characteristic that distinguishes holy wars from normal technical disputes is that in a holy war most of the participants spend their time trying to pass off personal value choices and cultural attachments as objective technical evaluations. This happens precisely because in a true holy war, the actual substantive differences between the sides are relatively minor. See also theology.

  4. 1981-cohen.pdf: “On Holy Wars and a Plea for Peace”, Daniel Cohen (1981-10-01; cs):

    Which bit should travel first? The bit from the big end or the bit from the little end? Can a war between Big Endians and Little Endians be avoided?

    This article was written in an attempt to stop a war. I hope it is not too late for peace to prevail again. Many believe that the central question of this war is, What is the proper byte order in messages? More specifically, the question is, Which bit should travel first-the bit from the little end of the word or the bit from the big end of the word? Followers of the former approach are called Little Endians, or Lilliputians; followers of the latter are called Big Endians, or Blefuscuians. I employ these Swiftian terms because this modern conflict is so reminiscent of the holy war described in Gulliver’s Travels.

    …To sum it all up, there are two camps, each with its own language. These languages are as compatible with each other as any Semitic and Latin languages. All Big Endians can talk only to each other. So can all the Little Endians, although there are some differences among the dialects used by different tribes. There is no middle ground—only one end can go first. As in all the religious wars of the past, power—not logic—will be the decisive factor. This is not the first holy war, and will probably not be the last. The “Reasonable, do it my way” approach does not work. Neither does the Esperanto approach of switching to yet another new language. Lest our communications world split along these lines, we should take note of a certain book (not mentioned in the references), which has an interesting story about a similar phenomenon: the Tower of Babel. Lilliput and Blefuscu will never come to terms of their own free will. We need some Gulliver between the two islands to force a unified communication regime on all of us.

    Of course, I hope that my way will be chosen, but it is not really critical. Agreement upon an order is more important than the order agreed upon.

    Shall we toss a coin?
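
    [A minimal Python sketch, not from Cohen’s paper, of the two byte orders being arbitrated and of what happens when one camp reads the other’s bytes without converting; the 32-bit value is an arbitrary example:]

    ```python
    import struct

    # The same 32-bit value, serialized under the two conventions.
    value = 0x0A0B0C0D

    big    = struct.pack(">I", value)   # "big end" first:    0a 0b 0c 0d
    little = struct.pack("<I", value)   # "little end" first: 0d 0c 0b 0a
    print(big.hex(), little.hex())      # -> 0a0b0c0d 0d0c0b0a

    # A little-endian host that reads big-endian ("network order") bytes
    # without converting them gets a different number entirely:
    misread = struct.unpack("<I", big)[0]
    print(hex(misread))                 # -> 0xd0c0b0a
    ```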

  5. “Open Source Migrates With Emotional Distress”, Armin Ronacher (2019-12-28):

    “Legacy code is bad and if you keep using it, it’s really your own fault.” There are many variations of the same thing floating around in Open Source communities and it always comes down to the same thing: at one point something is being declared old and it has to be replaced by something newer which is better. That better typically has some really good arguments on its side: we learned from our mistakes, it was wrong to begin with or something along the lines of it being impure or that it propagated bad ideas…Some communities as a whole for instance are suffering from this a whole lot. Every few years a library or the entire ecosystem of that community is thrown away and replaced by something new and support for the old one ends abruptly and arbitrarily. This has happened to the packaging ecosystem, the interpreter itself, modules in the standard library etc.

    …This largely works because the way open source communities are managing migrations is by cheating and the currency of payment is emotional distress. Since typically money is not involved (at least not in the sense that a user would pay for the product directly) there is no obvious monetary impact of people not migrating. So if you cause friction in the migration process it won’t hurt you as a library maintainer. If anything the churn of some users might actually be better in the long run because the ones that don’t migrate are likely also some of the ones that are the most annoying in the issue tracker…Since the migration causes a lot of emotional distress, the cheat is carried happily by the entire community…I have been a part of the Python 3 migration and I can tell you that it sucked out all my enjoyment of being a part of that community. No matter on which side you were during that migration I heard very little positive about that experience.

    …A big reason why this all happens in the first place is because as an Open Source maintainer the standard response which works against almost all forms of criticism is “I’m not paid for this and I no longer want to maintain the old version of X”. And in fact this is a pretty good argument because it’s both true, and very few projects actually are large enough that a fork by some third party would actually survive. Python for instance currently has a fork of 2.7 called Tauthon which got very little traction.

    There are projects which are clearly managing such forceful transitions, but I think what is often forgotten is that with that transition many people leave the community who do not want to participate in it or can’t…I honestly believe a lot of Open Source projects would have an easier time existing if they would acknowledge that these painful migrations are painful for everybody involved.

  6. https://archive.org/details/isbn_9780070340220/

  7. https://landing.google.com/sre/sre-book/chapters/service-level-objectives/

  8. “Language Models are Few-Shot Learners”, Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei (2020-05-28):

    Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions—something which current NLP systems still largely struggle to do.

    Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10× more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same time, we also identify some datasets where GPT-3’s few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora.

    Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general.

    …The precise architectural parameters for each model are chosen based on computational efficiency and load-balancing in the layout of models across GPU’s. Previous work suggests that validation loss is not strongly sensitive to these parameters within a reasonably broad range.
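
    [To make “tasks and few-shot demonstrations specified purely via text interaction with the model” concrete, a sketch of how a few-shot prompt is assembled; the word-unscrambling demonstrations are illustrative and no particular API client is assumed:]

    ```python
    # A few-shot prompt: task demonstrations are ordinary text, and the model is
    # asked to continue the pattern; no gradient updates or fine-tuning occur.
    demonstrations = [
        ("gaot", "goat"),
        ("lpepa", "apple"),
        ("dworl", "world"),
    ]

    prompt = "Unscramble the word.\n"
    for scrambled, answer in demonstrations:
        prompt += f"Input: {scrambled}\nOutput: {answer}\n"
    prompt += "Input: sehou\nOutput:"  # the model is expected to continue with " house"

    print(prompt)
    # This string would be sent to the model's text-completion endpoint; the three
    # "shots" above are the only task-specific supervision the model ever sees.
    ```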

  9. GPT-3#prompts-as-programming

  10. https://www.kernel.org/doc/Documentation/process/stable-api-nonsense.rst

  11. Backstop

  12. “Of a Happy Life”, Seneca (Bohn's Classical Library Edition of L. Annaeus Seneca, Minor Dialogues Together with the Dialog 'On Clemency') (1990):

    Let us seek for some blessing, which does not merely look fine, but is sound and good throughout alike, and most beautiful in the parts which are least seen…I follow nature, which is a point upon which every one of the Stoic philosophers are agreed: true wisdom consists in not departing from nature and in moulding our conduct according to her laws and model.

    A happy life, therefore, is one which is in accordance with its own nature, and cannot be brought about unless in the first place the mind be sound and remain so without interruption, and next, be bold and vigorous, enduring all things with most admirable courage, suited to the times in which it lives, careful of the body and its appurtenances, yet not troublesomely careful. It must also set due value upon all the things which adorn our lives, without over-estimating any one of them, and must be able to enjoy the bounty of Fortune without becoming her slave.

    You understand without my mentioning it that an unbroken calm and freedom ensue, when we have driven away all those things which either excite us or alarm us: for in the place of sensual pleasures and those slight perishable matters which are connected with the basest crimes, we thus gain an immense, unchangeable, equable joy, together with peace, calmness and greatness of mind, and kindliness: for all savageness is a sign of weakness.

  13. https://slatestarcodex.com/2016/05/02/be-nice-at-least-until-you-can-coordinate-meanness/

  14. Complement

  15. Bitcoin-is-Worse-is-Better

  16. Choosing-Software

  17. Resilient-Haskell-Software

  18. Google-shutdowns

  19. In-Defense-Of-Inclusionism

  20. Wikipedia-and-Other-Wikis

  21. Socks

  22. Timing

  23. https://vitalik.ca/general/2021/03/23/legitimacy.html

  24. “Keep Your Identity Small”, Paul Graham (2009-02):

    As a rule, any mention of religion on an online forum degenerates into a religious argument. Why? Why does this happen with religion and not with Javascript or baking or other topics people talk about on forums?

    …I think what religion and politics have in common is that they become part of people’s identity, and people can never have a fruitful argument about something that’s part of their identity. By definition they’re partisan.

    Which topics engage people’s identity depends on the people, not the topic. For example, a discussion about a battle that included citizens of one or more of the countries involved would probably degenerate into a political argument. But a discussion today about a battle that took place in the Bronze Age probably wouldn’t. No one would know what side to be on. So it’s not politics that’s the source of the trouble, but identity. When people say a discussion has degenerated into a religious war, what they really mean is that it has started to be driven mostly by people’s identities. [1: When that happens, it tends to happen fast, like a core going critical. The threshold for participating goes down to zero, which brings in more people. And they tend to say incendiary things, which draw more and angrier counterarguments.]

    …More generally, you can have a fruitful discussion about a topic only if it doesn’t engage the identities of any of the participants. What makes politics and religion such minefields is that they engage so many people’s identities. But you could in principle have a useful conversation about them with some people. And there are other topics that might seem harmless, like the relative merits of Ford and Chevy pickup trucks, that you couldn’t safely talk about with others.

    Most people reading this will already be fairly tolerant. But there is a step beyond thinking of yourself as x but tolerating y: not even to consider yourself an x. The more labels you have for yourself, the dumber they make you.

  25. “Roads and Bridges: The Unseen Labor Behind Our Digital Infrastructure”, Nadia Eghbal (2016-06-08):

    [Discussion of the economics of funding open source software: universally used & economically invaluable as a public good anyone can & does use, it is also essentially completely unfunded, leading to serious problems in long-term maintenance & improvement, exemplified by the Heartbleed bug—core cryptographic code run by almost every networked device on the planet could not fund more than a part-time developer.]

    Our modern society—everything from hospitals to stock markets to newspapers to social media—runs on software. But take a closer look, and you’ll find that the tools we use to build software are buckling under demand…Nearly all software today relies on free, public code (called “open source” code), written and maintained by communities of developers and other talent. Much like roads or bridges, which anyone can walk or drive on, open source code can be used by anyone—from companies to individuals—to build software. This type of code makes up the digital infrastructure of our society today. Just like physical infrastructure, digital infrastructure needs regular upkeep and maintenance. In the United States, over half of government spending on transportation and water infrastructure goes just to maintenance. But financial support for digital infrastructure is much harder to come by. Currently, any financial support usually comes through sponsorships, direct or indirect, from software companies. Maintaining open source code used to be more manageable. Following the personal computer revolution of the early 1980s, most commercial software was proprietary, not shared. Software tools were built and used internally by companies, and their products were licensed to customers. Many companies felt that open source code was too nascent and unreliable for commercial use. In their view, software was meant to be charged for, not given away for free. Today, everybody uses open source code, including Fortune 500 companies, government, major software companies and startups. Sharing, rather than building proprietary code, turned out to be cheaper, easier, and more efficient.

    This increased demand puts additional strain on those who maintain this infrastructure, yet because these communities are not highly visible, the rest of the world has been slow to notice. Most of us take opening a software application for granted, the way we take turning on the lights for granted. We don’t think about the human capital necessary to make that happen. In the face of unprecedented demand, the costs of not supporting our digital infrastructure are numerous. On the risk side, there are security breaches and interruptions in service, due to infrastructure maintainers not being able to provide adequate support. On the opportunity side, we need to maintain and improve these software tools in order to support today’s startup renaissance, which relies heavily on this infrastructure. Additionally, open source work builds developers’ portfolios and helps them get hired, but the talent pool is remarkably less diverse than in tech overall. Expanding the pool of contributors can positively affect who participates in the tech industry at large.

    No individual company or organization is incentivized to address the problem alone, because open source code is a public good. In order to support our digital infrastructure, we must find ways to work together. Current examples of efforts to support digital infrastructure include the Linux Foundation’s Core Infrastructure Initiative and Mozilla’s Open Source Support (MOSS) program, as well as numerous software companies in various capacities. Sustaining our digital infrastructure is a new topic for many, and the challenges are not well understood. In addition, infrastructure projects are distributed across many people and organizations, defying common governance models. Many infrastructure projects have no legal entity at all. Any support strategy needs to accept and work with the decentralized, community-centric qualities of open source code. Increasing awareness of the problem, making it easier for institutions to contribute time and money, expanding the pool of open source contributors, and developing best practices and policies across infrastructure projects will all go a long way in building a healthy and sustainable ecosystem.

  26. https://www.amazon.com/gp/product/0578675862/

  27. https://medium.com/@steve.yegge/dear-google-cloud-your-deprecation-policy-is-killing-you-ee7525dc05dc

  28. Jacques Mattheij (2020-08-26):

    …The benefits are obvious: fast turnaround time between spotting a problem and getting it to the customer, very low cost of distribution and last but definitely not least: automatic updates are now a thing…And that’s exactly the downside: your software will be more than happy to install a broken, changed, reduced, functionally no longer equivalent, spyware, malware, data loss inducing or outright dangerous piece of software right over the top of the one that you were using happily until today. More often than not automatic updates are not done with the interest of the user in mind. They are abused to the point where many users—me included—would rather forego all updates (let alone automatic ones) simply because we apparently can not trust the party on the other side of this transaction to have our, the users, interests at heart.

    It isn’t rare at all to be greeted by a piece of software that no longer reads the data that was perfectly legible until yesterday because of an upgrade (I had a CAD system like that). Regressing back to the previous version and you’ll find that it tells you the data is also no longer legible by that version because the newer one has touched it. Restore from backup and get caught in an automatic update war that you can only stop by telling your computer that the automatic update host does not exist any more. It shouldn’t take that level of sophistication to keep a system running reliably, especially not when your livelihood depends on it.
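
    [“Telling your computer that the automatic update host does not exist any more” usually means sinkholing the vendor’s update domain in the hosts file; a minimal sketch, with a made-up hostname, which requires root:]

    ```python
    # Sinkhole a vendor's update host so the machine can no longer resolve it.
    # "updates.vendor.example" is a placeholder, not a real update server.
    HOSTS = "/etc/hosts"  # on Windows: C:\Windows\System32\drivers\etc\hosts
    ENTRY = "0.0.0.0 updates.vendor.example\n"

    with open(HOSTS, "r+") as f:
        if ENTRY not in f.read():  # append only if not already blocked
            f.write("\n" + ENTRY)
    ```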

    …The list of these transgressions is endless, and software vendors the world over still don’t seem to get it. If updating software is so easy, why are users so reluctant to do it? That’s because all you software vendors royally messed it up. You’ve burned your users trust on so many occasions, not thinking from their perspective but from your own almost exclusively leading to people locking down their systems and foregoing critical security updates because they are scared that they will end up with a lot of extra work or a much worse situation if they let you have your way.

    So, software vendors, automatic updates:

    • should always keep the user central
    • should be incremental and security or bug fixes only
    • should never update a user interface without allowing the previous one to be used as the default
    • should never be used to install telemetry or spyware or to re-enable it if it was previously switched off
    • should never be used to install other software packages without the user’s explicit consent and knowledge
    • should never change the format of data already stored on the system
    • should never cause a system to become unusable or unstable
    • must allow a revert to the previous situation
    • must be disablable, in an easy and consistent manner for instance on mobile devices
    • should never cause the system to become inaccessible or restarted without user consent
    • should always be signed by the vendor to ensure that the update mechanism does not become a malware vector
    • should never cause commercial messages or other fluff to be included
    • should never cause configuration details to be lost
    • should always be backwards compatible with previous plug-ins or other third party add ons
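
    [Of the items above, “signed by the vendor” is the most mechanical; a sketch of what a signature check before installing an update could look like, assuming an Ed25519 key and the Python cryptography package, with placeholder file names:]

    ```python
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    # The vendor's public key ships with the installed software; the signature
    # ships alongside the downloaded update. All file names are placeholders.
    public_key = Ed25519PublicKey.from_public_bytes(open("vendor_ed25519.pub", "rb").read())
    update     = open("update-2.4.1.tar.gz", "rb").read()
    signature  = open("update-2.4.1.tar.gz.sig", "rb").read()

    try:
        public_key.verify(signature, update)  # raises if the bytes were tampered with
    except InvalidSignature:
        raise SystemExit("refusing to install: signature check failed")
    print("signature OK, proceeding with install")
    ```
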
  29. https://blog.kronis.dev/articles/never-update-anything

  30. Rachel Kroll (2020-08-09):

    While on my most recent break from writing, I pondered a bunch of things that keep seeming to come up in issues of reliability or maintainability in software. At least one of them is probably not going to make me many friends based on the reactions I’ve had to the concept in its larval form. Still, I think it needs to be explored.

    In short, I think it’s become entirely too easy for people using certain programming languages to use libraries from the wide world of clowns that is the Internet. Their ecosystems make it very very easy to become reliant on this stuff. Trouble is, those libraries are frequently shit. If something about it is broken, you might not be able to code around it, and may have to actually deal with them to get it fixed. Repeat 100 times, and now you have a real problem brewing.

    …When you ran it [the buggy library], it just opened the file and did a write. If you ran it a bunch of times in parallel, they’d all stomp all over each other, and unsurprisingly, the results sometimes yielded a config file that was not entirely parseable. It could have used flock() or something like that. It didn’t. It could have written to the result from a mktemp() type function and then used rename() to atomically drop it into place. It didn’t. Expecting that, I got a copy of their source and went looking for the spot which was missing the file-writing paranoia stuff. I couldn’t find it. All I found was some reference to this library that did config file reading and writing, and a couple of calls into it. The actual file I/O was hidden away in that other library which lived somewhere on the Internet…The only way to fix it would be in this third-party library. That would mean either forking it and maintaining it from there, or working with the upstream and hoping they’d take me seriously and accept it.
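
    [The missing “file-writing paranoia” described here is the standard write-to-a-temporary-file-then-rename pattern; a minimal sketch in Python, with a placeholder config path:]

    ```python
    import json, os, tempfile

    def write_config_atomically(path, config):
        """Readers see either the old file or the new one, never a half-written mix."""
        directory = os.path.dirname(path) or "."
        # mkstemp() creates the temporary file in the same directory (and thus the
        # same filesystem), so the final rename is atomic.
        fd, tmp_path = tempfile.mkstemp(dir=directory, prefix=".config-")
        try:
            with os.fdopen(fd, "w") as f:
                json.dump(config, f)
                f.flush()
                os.fsync(f.fileno())    # get the bytes onto disk first
            os.replace(tmp_path, path)  # atomically drop it into place
        except BaseException:
            os.unlink(tmp_path)
            raise

    write_config_atomically("app.conf", {"retries": 3})  # placeholder path & contents
    ```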

    …It seems to boil down to this: people rely on libraries. They turn out to be mostly crap. The more you introduce, the more likely it is that you will get something really bad in there. So, it seems like the rational approach would be to be very selective about these things, and not grab too many, if at all. But, if you work backwards, you can see that making it very easy to add some random library means that it’s much more likely that someone will. Think of it as an “attractive nuisance”. That turns the crank and the next thing you know, you have breathtaking dependency trees chock-full of dumb little foibles and lacking best practices.

    Now we have this conundrum. That one library lowered the barrier to entry for someone to write that tool. True. Can’t deny that. It let someone ship something that sometimes works. Also true. But, it gave them a false sense of completion and safety, when it is neither done nor safe. The tool will fail eventually given enough use, and (at least until they added the “ignore the failed read” thing), will latch itself into a broken state and won’t ever work again without manual intervention.

    Ask yourself: is that really a good thing? Do you want people being able to ship code like that without understanding the finer points of what’s going on? Yeah, we obviously have to make the point that the systems should not be so damned complicated underneath, and having to worry about atomic writes and locking is annoying as hell, but it’s what exists. If you’re going to use the filesystem directly, you have to solve for it. It’s part of the baggage which comes with the world of POSIX-ish filesystems.

  31. https://www.joelonsoftware.com/2000/06/03/strategy-letter-iii-let-me-go-back/

  32. https://jamesdixon.wordpress.com/2009/05/07/forking-protocol-why-when-and-how-to-fork-an-open-source-project/

  33. https://news.ycombinator.com/item?id=23753477