Internet Search Tips

A description of tips and tricks for effective Internet search of papers/books for research.
topics: archiving, technology, shell, Google, tutorial
created: 11 Dec 2018; modified: 26 Feb 2019; status: finished; confidence: certain; importance: 4


Over time, I have developed a certain google-fu and expertise in finding references, papers, and books online. Some of these tricks are not well-known, like checking the Internet Archive (IA) for books. Here I write down my search workflow and give general advice about finding and hosting documents.

Google-fu search skill is something I’ve prided myself on ever since elementary school, when the librarian challenged the class to find things in the almanac; not infrequently, I’d win. The Internet is the greatest almanac of all, and to the curious, a never-ending cornucopia, so it makes me sad to see so many fail to find things—or not look at all.

Below, I’ve tried to provide, in a roughly chronological way, a flowchart of an online search.

Papers

Request

Last resort: if none of this works, there are a few places online where you can request a copy (however, these will usually fail if you have exhausted all previous avenues):

Finally, you can always try to contact the author. This only occasionally works for the papers I have the hardest time with, since they tend to be old ones where the author is dead or unreachable—any author publishing a paper since 1990 will usually have been digitized somewhere—but it’s easy to try.

Post-finding

After finding a fulltext copy, you should find a reliable long-term link/place to store it and make it more findable:

  • never link LG/SH:

    Always operate under the assumption they could be gone tomorrow. (As indeed my uncle found out with Library.nu shortly after paying for a lifetime membership.) There are no guarantees either one will be around for long under their legal assaults, and no guarantee that they are being properly mirrored or will be restored elsewhere. Download anything you need and keep a copy of it yourself and, ideally, host it publicly.

  • never rely on a papers.nber.org/tmp/ or psycnet.apa.org URL, as they are temporary

  • never link Scribd: it is a scummy website which impedes downloads, and anything on Scribd usually first appeared elsewhere anyway.

  • avoid linking to ResearchGate (compromised by investment & PDFs get deleted routinely, apparently often by authors) or Academia.edu (the URLs are one-time and break)

  • be careful linking to Nature.com (if a paper is not explicitly marked as Open Access, even if it’s available, it may disappear in a few months!); similarly, watch out for wiley.com, tandfonline.com, jstor.org, springer.com, springerlink.com, & mendeley.com

  • be careful linking to academic personal directories on university websites (often noticeable by the Unix convention .edu/~user/); they have short half-lives.

  • check & improve metadata.

    Adding metadata to papers/books is a good idea because it makes the file findable in G/GS (if it’s not online, does it really exist?) and helps you if you decide to use bibliographic software like Zotero in the future. Many academic publishers & LG are terrible about metadata, and will not include even title/author/DOI/year. PDFs can be easily annotated with metadata using ExifTool: exiftool -All prints all metadata, and the metadata can be set individually using similar fields.

    For papers hidden inside volumes or other files, you should extract the relevant page range to create a single relevant file. (For extraction of PDF page-ranges, I use pdftk, eg: pdftk 2010-davidson-wellplayed10-videogamesvaluemeaning.pdf cat 180-196 output 2009-fortugno.pdf.)

    I try to set at least title/author/DOI/year/subject, and stuff any additional topics & bibliographic information into the “Keywords” field. Example of setting metadata:
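
    A minimal sketch (the filename & field values here are hypothetical; see ExifTool’s documentation for the full list of writable PDF tags):

    exiftool -All 2018-smith.pdf   # print all existing metadata
    exiftool -Title="Example Paper Title" -Author="Smith, Jane; Doe, John" \
        -Subject="Journal of Examples" \
        -Keywords="topic1, topic2, doi:10.1234/example, 2018" \
        2018-smith.pdf             # set the individual fields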

  • if a scan, it may be worth editing the PDF to crop the edges, threshold to binarize it (which, for a bad grayscale or color scan, can drastically reduce filesize while increasing readability), and OCRing it. I use gscan2pdf but there are alternatives worth checking out.

  • if possible, host a public copy; especially if it was very difficult to find, it should be hosted even if it turned out to be useless. The life you save may be your own.

  • for bonus points, link it in appropriate places on Wikipedia

Advanced

Aside from the highly-recommended use of hotkeys and Booleans for searches, there are a few useful tools for the researcher which, while costly to set up initially, can pay off in the long term:

  • archiver-bot: automatically archive your web browsing and/or links from arbitrary websites to forestall linkrot; particularly useful for detecting & recovering from dead PDF links

  • PubMed & GS search alerts: set up alerts for a specific search query, or for new citations of a specific paper. (Google Alerts is not as useful as it seems.)

    1. PubMed has straightforward conversion of search queries into alerts: “Create alert” below the search bar. (Given the volume of PubMed indexing, I recommend carefully tailoring your search to be as narrow as possible, or else your alerts may overwhelm you.)
    2. To create a generic GS search-query alert, simply use the “Create alert” option on the sidebar for any search. To follow citations of a key paper, you must: 1. bring up the paper in GS; 2. click on “Cited by X”; 3. then use “Create alert” on the sidebar.
  • Google Custom Search Engines (a GCSE is a specialized search engine limited to whitelisted pages/domains etc; eg my Wikipedia-focused anime/manga CSE. If you find yourself regularly including many domains in a search, or blacklisting domains with -site:, or using many negations to filter out common false positives, it may be time to set up a GCSE.)

  • Clipping/note-taking services like Evernote/Microsoft OneNote: regularly making and keeping excerpts creates a personalized search engine, in effect.

    This can be vital for refinding old things you read where the search terms are hopelessly generic or you can’t remember an exact quote or reference; it is one thing to search a keyword like “autism” in a few score thousand clippings, and another thing to search that in the entire Internet! (One can also reorganize or edit the notes to add in the keywords one is thinking of, to help with refinding.) I make heavy use of Evernote clipping and it is key to refinding my references.

  • Crawling websites: sometimes whole websites might be useful (example: “Darknet Market Archives (2013-2015)”).

    Useful tools to know about: wget, cURL, HTTrack; Firefox plugins: NoScript, uBlock origin, Live HTTP Headers, Bypass Paywalls, cookie exporting. Short of downloading a website, it might also be useful to pre-emptively archive it by using linkchecker to crawl it, compile a list of all external & internal links, and store them for processing by another archival program (see Archiving URLs for examples).
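
    For example, a rough sketch of both approaches (the URL is a placeholder, and the exact flags are worth double-checking against the wget & linkchecker manpages):

    # mirror a whole site for offline reading & local fulltext search:
    wget --mirror --convert-links --adjust-extension --page-requisites --no-parent 'https://www.example.com/'
    # or merely enumerate its internal & external links (CSV to stdout) for feeding to another archival tool:
    linkchecker --verbose -ocsv 'https://www.example.com/' > example-links.csv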

Web pages

With proper use of pre-emptive archiving tools like archiver-bot, fixing linkrot in one’s own pages is much easier, but that leaves other references. Searching for lost web pages is similar to searching for papers:

  • if the page title is given, search for the title.

    It is a good idea to include page titles in one’s own pages, as well as the URL, to help with future searches, since the URL may be meaningless gibberish on its own, and pre-emptive archiving can fail. HTML supports a title attribute on link tags, and, in cases where displaying a title is not desirable (because the link is being used inline as part of normal hypertextual writing), titles can be included cleanly in Markdown documents like this: [inline text description](URL "Title").

  • check the URL: is it weird or filled with trailing garbage like ?rss=1 or ?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+blogspot%2FgJZg+%28Google+AI+Blog%29? Or a variant domain, like a mobile.foo.com/m.foo.com/foo.com/amp/ URL?

  • restrict G search to the original domain with site:, or to related domains

  • restrict G search to the original date-range/years

  • try a different search engine: corpuses can vary, and in some cases G tries to be too smart for its own good when you need a literal search; DuckDuckGo and Bing are usable alternatives (especially if one of DuckDuckGo’s ‘bang’ special searches is what one needs)

  • if nowhere on the clearnet, try the Internet Archive (IA) or the Memento meta-archive search engine:

    IA is the default backup for a dead URL. If IA doesn’t Just Work, there may be other versions in it:

    • did the IA ‘redirect’ you to an error page? Kill the redirect and check the earliest stored version. Did the page initially load but then error out/redirect? Disable JS with NoScript and reload.

    • IA lets you list all URLs with any archived versions, by searching for URL/*; the list of available URLs may reveal an alternate newer/older URL. It can also be useful to filter by filetype or substring. For example, one might list all URLs in a domain, and if the list is too long and filled with garbage URLs, use the “Filter results” incremental-search widget to search for “uploads/” on a WordPress blog.5

      Screenshot of an oft-overlooked feature of the Internet Archive: displaying all available/archived URLs for a specific domain, filtered down to a subset matching a string like *uploads/*.
      • wayback_machine_downloader (not to be confused with the internetarchive Python package which provides a CLI interface to uploading files) is a Ruby tool which lets you download whole domains from IA, which can be useful for running a local fulltext search using regexps (a good grep query is often enough), in cases where just looking at the URLs via URL/* is not helpful. (An alternative which might work is websitedownloader.io.)

      Example:
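
      (A hypothetical invocation: the domain is a placeholder, and the flags are from memory of the tool’s documentation, so verify with wayback_machine_downloader --help.)

      gem install wayback_machine_downloader
      # download every archived page of the domain into ./example-mirror/:
      wayback_machine_downloader 'http://www.example.com/' --directory example-mirror/
      # then run a local fulltext search over the mirror:
      grep -ril 'half-remembered phrase' example-mirror/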

    • did the domain change, eg from www.foo.com to foo.com or www.foo.org? Entirely different as far as IA is concerned.

    • is this a Blogspot blog? Blogspot is uniquely horrible in that it has versions of each blog for every country domain: a foo.blogspot.com blog could be under any of foo.blogspot.de, foo.blogspot.au, foo.blogspot.hk, foo.blogspot.jp6

    • did the website provide RSS feeds?

      A little known fact is that Google Reader (GR; October 2005-July 2013) stored all RSS items it crawled, so if a website’s RSS feed was configured to include full items, the RSS feed history was an alternate mirror of the whole website, and since GR never removed RSS items, it was possible to retrieve pages or whole websites from it. GR has since closed down, sadly, but before it closed, Archive Team downloaded a large fraction of GR’s historical RSS feeds, and those archives are now hosted on IA. The catch is that they are stored in mega-WARCs, which, for all their archival virtues, are not the most user-friendly format. The raw GR mega-WARCs are difficult enough to work with that I defer an example to the appendix.

  • archive.today: an IA-like mirror

  • any local archives, such as those made with my archiver-bot

  • Google Cache (GC): GC works, sometimes, but the copies are usually the worst around, ephemeral & cannot be relied upon. Google also appears to have been steadily deprecating GC over the years, as GC shows up less & less in search results.

Books

Digital

E-books are rarer and harder to get than papers, although the situation has improved vastly since the early 2000s. To search for books online:

Physical

Books are something of a double-edged sword compared to papers/theses. On the one hand, books are much more often unavailable online, and must be bought offline, but at least you almost always can buy used books offline without much trouble (and often for <$10 total); on the other hand, while papers/theses are often available online, when one is unavailable, it’s usually very unavailable, and you’re stuck (unless you have a university ILL department backing you up or are willing to travel to the few or only universities with paper or microfilm copies).

Purchasing from used book sellers:

  • Google Books is a good starting point for seller links; if buying from a marketplace like AbeBooks/Amazon/Barnes & Noble, it’s worth searching the seller to see if they have their own website, which is potentially much cheaper. They may also have multiple editions in stock.

  • Sellers:

    • bad: eBay & Amazon, due to high-minimum-order+S&H, but can be useful in providing metadata like page count or ISBN or variations on the title

    • good: AbeBooks, Thrift Books, Better World Books, B&N, Discover Books.

      Note: on AbeBooks, international orders can be useful (especially for behavioral genetics or psychology books) but be careful of international orders with your credit card—many debit/credit cards will fail and trigger a fraud alert, and PayPal is not accepted.

  • if a book is not available or too expensive, set price watches: AbeBooks supports email alerts on stored searches, and Amazon can be monitored via CamelCamelCamel (remember the CCC price alert you want is on the used third-party category, as new books are more expensive, less available, and unnecessary).

Scanning:

  • destructive vs non-destructive: destructively debinding books with a razor or guillotine cutter works much better & is much less time-consuming than spreading them on a flatbed scanner to scan one-by-one8, because it allows use of a sheet-fed scanner instead, which is easily 5x faster and will give higher-quality scans (because the sheets will be flat, scanned edge-to-edge, and much more closely aligned).

  • Tools:

    • For simple debinding of a few books a year, an X-acto knife/razor is good (avoid the ‘triangle’ blades, get curved blades intended for large cuts instead of detail work)

    • once you start doing more than one a month, it’s time to upgrade to a guillotine blade paper cutter (a fancier swinging-arm paper cutter, which uses a two-joint system to clamp down and cut uniformly).

      A guillotine blade can cut chunks of 200 pages easily without much slippage, so for books with more pages, I use both: an X-acto to cut along the spine and turn it into several 200-page chunks for the guillotine cutter.

    • at some point, it may make sense to switch to a scanning service like 1DollarScan (1DS has acceptable quality for the black-white scans I have used them for thus far, but watch out for their nickel-and-diming fees for OCR or “setting the PDF title”; these can be done in no time yourself using gscan2pdf/exiftool/ocrmypdf and will save a lot of money as they, amazingly, bill by 100-page units). Books can be sent directly to 1DS, reducing logistical hassles.

  • after scanning, crop/threshold/OCR/add metadata

  • Adding metadata: same principles as papers. While more elaborate metadata can be added, like bookmarks, I have not experimented with those yet.

  • Saving files:

    In the past, I used DjVu for documents I produced myself, as it yielded much smaller scans than gscan2pdf’s default PDF settings, which are bloated by a buggy Perl library (DjVu files were at least half the size, sometimes one-tenth the size), making them more easily hosted & a superior browsing experience.

    The downsides of DjVu are that not all PDF viewers can handle DjVu files, and it appears that G/GS ignore all DjVu files (despite the format being 20 years old), rendering them completely unfindable online. In addition, DjVu is an increasingly obscure format and has, for example, been dropped by the IA as of 2016. The former is a relatively small issue, but the latter is fatal—being consigned to oblivion by search engines largely defeats the point of scanning! (“If it’s not in Google, it doesn’t exist.”) Hence, despite being a worse format, I now recommend PDF and have stopped using DjVu for new scans9 and have converted my old DjVu files to PDF.
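
    A sketch of the sort of conversion I mean (filenames hypothetical; as footnote 9 notes, ddjvu strips the OCR layer, so it must be re-added):

    ddjvu -format=pdf 2005-example.djvu 2005-example.pdf    # rasterize DjVu → PDF
    ocrmypdf 2005-example.pdf 2005-example-ocr.pdf          # re-add an OCR text layer
    exiftool -Title="Example Title" -Author="A. Author" 2005-example-ocr.pdf   # restore metadata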

  • Uploading: to LibGen, usually. For backups, filelockers like Dropbox, Mega, MediaFire, or Google Drive are good. I usually upload 3 copies including LG. I rotate accounts once a year, to avoid putting too many files into a single account.

  • Hosting: hosting papers is easy but books come with risk:

    Books can be dangerous; in deciding whether to host a book, my rule of thumb is to host only books which are pre-2000, which do not have Kindle editions or other signs of active exploitation, and which are effectively ‘orphan works’.

    As of 11 December 2018, hosting 3763 files over 8 years (very roughly, assuming linear growth, <5.5 million document-days of hosting: 3763/2 × 8 × 365.25 ≈ 5.5m), I’ve received 3 takedown orders: a behavioral genetics textbook (2013), The Handbook of Psychopathy (2005), and a recent meta-analysis paper (Roberts et al 2016). I broke my rule of thumb to host the 2 books (my mistake), which leaves only the 1 paper, which I think was a fluke. So, as long as one avoids relatively recent books, the risk should be minimal.

See also

Appendix

Searching the Google Reader archives

One way to ‘undelete’ a blog or website is to use Google Reader (GR).

GR regularly crawled almost all blogs’ RSS feeds, and RSS feeds often contain the fulltext of articles. So if a blog author wrote an article whose fulltext was included in the RSS feed and GR downloaded it, and the author then changed their mind and edited or deleted it, GR would redownload the new version but continue to show the old version as well (you would see both versions, chronologically). If the author blogged regularly and so GR had learned to check frequently, it could hypothetically grab many different edited versions, not just ones separated by weeks or months. That is, assuming GR did not, as it sometimes did for inscrutable reasons, stop displaying the historical archives and show only the last 90 days or so to readers; I was never able to figure out why this happened, or whether it really happened at all and was not some sort of UI problem. Regardless, if all went well, this let you undelete an article, albeit perhaps with messed-up formatting. Sadly, GR was closed back in 2013, so you cannot simply log in and look for blogs.

However, before it was closed, Archive Team launched a major effort to download as much of GR as possible. So in that dump, there may be archives of all of a random blog’s posts. Specifically, archives will exist if a GR user subscribed to the blog, if Archive Team knew about it and requested it in time before closure, and if GR kept full archives stretching back to the first posting.

Downside: the Archive Team dump is not in an easily browsed format, and merely figuring out what it might contain is difficult. In fact, it’s so difficult that until researching Craig Wright in November-December 2015, I never had an urgent enough reason to figure out how to get anything out of it, and I’m not sure I’ve ever seen anyone else actually use it; Archive Team takes the attitude that it’s better to preserve the data somehow and let posterity worry about using it. (There is a site which claimed to be a frontend to the dump, but when I tried to use it, it was broken, and it still is as of December 2018.)

Extracting

The 9TB of data is stored in ~69 opaque compressed WARC archives. 9TB is a bit much to download and uncompress to look for one or two files, so to find out which WARC you need, you have to download the ~69 CDX indexes which record the contents of their respective WARC, and search them for the URLs you are interested in. (They are plain text so you can just grep them.)

Locations

In this example, we will look at the main blog of Craig Wright, gse-compliance.blogspot.com. (Another blog, security-doctor.blogspot.com, appears to have been too obscure to be crawled by GR.)

To locate the WARC with the Wright RSS feeds, download the master index. To search:

for file in *.gz; do echo "$file"; zcat "$file" | fgrep -e 'gse-compliance' -e 'security-doctor'; done
# com,google/reader/api/0/stream/contents/feed/http:/gse-compliance.blogspot.com/atom.xml?client=\
# archiveteam&comments=true&hl=en&likes=true&n=1000&r=n 20130602001238 https://www.google.com/reader/\
# api/0/stream/contents/feed/http%3A%2F%2Fgse-compliance.blogspot.com%2Fatom.xml?r=n&n=1000&hl=en&\
# likes=true&comments=true&client=ArchiveTeam unk - 4GZ4KXJISATWOFEZXMNB4Q5L3JVVPJPM - - 1316181\
# 19808229791 archiveteam_greader_20130604001315/greader_20130604001315.megawarc.warc.gz
# com,google/reader/api/0/stream/contents/feed/http:/gse-compliance.blogspot.com/feeds/posts/default?\
# alt=rss?client=archiveteam&comments=true&hl=en&likes=true&n=1000&r=n 20130602001249 https://www.google.\
# com/reader/api/0/stream/contents/feed/http%3A%2F%2Fgse-compliance.blogspot.com%2Ffeeds%2Fposts%2Fdefault\
# %3Falt%3Drss?r=n&n=1000&hl=en&likes=true&comments=true&client=ArchiveTeam unk - HOYKQ63N2D6UJ4TOIXMOTUD4IY7MP5HM\
# - - 1326824 19810951910 archiveteam_greader_20130604001315/greader_20130604001315.megawarc.warc.gz
# com,google/reader/api/0/stream/contents/feed/http:/gse-compliance.blogspot.com/feeds/posts/default?\
# client=archiveteam&comments=true&hl=en&likes=true&n=1000&r=n 20130602001244 https://www.google.com/\
# reader/api/0/stream/contents/feed/http%3A%2F%2Fgse-compliance.blogspot.com%2Ffeeds%2Fposts%2Fdefault?\
# r=n&n=1000&hl=en&likes=true&comments=true&client=ArchiveTeam unk - XXISZYMRUZWD3L6WEEEQQ7KY7KA5BD2X - - \
# 1404934 19809546472 archiveteam_greader_20130604001315/greader_20130604001315.megawarc.warc.gz
# com,google/reader/api/0/stream/contents/feed/http:/gse-compliance.blogspot.com/rss.xml?client=archiveteam\
# &comments=true&hl=en&likes=true&n=1000&r=n 20130602001253 https://www.google.com/reader/api/0/stream/contents\
# /feed/http%3A%2F%2Fgse-compliance.blogspot.com%2Frss.xml?r=n&n=1000&hl=en&likes=true&comments=true\
# &client=ArchiveTeam text/html 404 AJSJWHNSRBYIASRYY544HJMKLDBBKRMO - - 9467 19812279226 \
# archiveteam_greader_20130604001315/greader_20130604001315.megawarc.warc.gz

Understanding the output: the format is defined by the first line, which then can be looked up:

  • the format string is: CDX N b a m s k r M S V g; which means here:

    • N: massaged url
    • b: date
    • a: original url
    • m: mime type of original document
    • s: response code
    • k: new style checksum
    • r: redirect
    • M: meta tags (AIF)
    • S: compressed record size (length in bytes)
    • V: compressed arc file offset
    • g: file name

Example: the first CDX hit above converts to:

  • massaged URL: (com,google)/reader/api/0/stream/contents/feed/ http:/gse-compliance.blogspot.com/atom.xml? client=archiveteam&comments=true&hl=en&likes=true&n=1000&r=n
  • date: 20130602001238
  • original URL: https://www.google.com/reader/api/0/stream/contents/feed/ http%3A%2F%2Fgse-compliance.blogspot.com%2Fatom.xml? r=n&n=1000&hl=en&likes=true&comments=true&client=ArchiveTeam
  • MIME type: unk [unknown?]
  • response code: - [none?]
  • new-style checksum: 4GZ4KXJISATWOFEZXMNB4Q5L3JVVPJPM
  • redirect: - [none?]
  • meta tags: - [none?]
  • compressed record size (S): 1316181
  • compressed arc file offset: 19808229791 (19,808,229,791; so somewhere around 19.8GB into the mega-WARC)
  • filename: archiveteam_greader_20130604001315/greader_20130604001315.megawarc.warc.gz

Knowing the offset theoretically makes it possible to extract directly from the IA copy without having to download and decompress the entire thing… The sizes (S) & offsets for gse-compliance are:

  1. 1316181/19808229791
  2. 1326824/19810951910
  3. 1404934/19809546472
  4. 9467/19812279226

So we found hits pointing towards archiveteam_greader_20130604001315 & archiveteam_greader_20130614211457 which we then need to download (25GB each):
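
One way to fetch them (a sketch assuming the IA item identifiers match the filenames given in the CDX hits, per the usual Archive Team naming convention; wget -c lets the ~25GB downloads be resumed if interrupted):

wget -c 'https://archive.org/download/archiveteam_greader_20130604001315/greader_20130604001315.megawarc.warc.gz'
wget -c 'https://archive.org/download/archiveteam_greader_20130614211457/greader_20130614211457.megawarc.warc.gz'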

Once downloaded, how do we get the feeds? There are a number of hard-to-use and incomplete tools for working with giant WARCs; I contacted the original GR archiver, ivan, but that wasn’t too helpful.

warcat

I tried using warcat to unpack the entire WARC archive into individual files, and then delete everything which was not relevant:
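
Roughly like this (a sketch from memory of warcat’s CLI; verify the subcommand & flags with python3 -m warcat --help):

python3 -m warcat extract greader_20130604001315.megawarc.warc.gz --output-dir greader/ --progress
# then prune everything not from the blogs of interest:
find greader/ -type f ! -path '*gse-compliance*' ! -path '*security-doctor*' -delete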

But this was too slow, and crashed partway through before finishing.

Bug reports:

A more recent alternative library, which I haven’t tried, is warcio, which may be able to find the byte ranges & extract them.

dd

If we are feeling brave, we can use the offset and presumed length to have dd directly extract byte ranges:
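
A sketch using the first size/offset pair above (skip_bytes/count_bytes require GNU dd; since each record in a .warc.gz is normally its own gzip member, the extracted slice should decompress on its own):

dd if=greader_20130604001315.megawarc.warc.gz of=gse-compliance-atom.warc.gz \
   bs=1M iflag=skip_bytes,count_bytes skip=19808229791 count=1316181
zcat gse-compliance-atom.warc.gz > gse-compliance-atom.warc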

Results

My dd extraction was successful, and the resulting HTML/RSS could then be browsed with a command like cat *.warc | fold --spaces --width=200 | less. They can probably also be converted to a local form and browsed, although they won’t include any of the site assets like images or CSS/JS, since the original RSS feed assumes you can load any references from the original website and didn’t do any kind of data-URI embedding or mirroring (not, after all, having been intended for archival purposes in the first place…)


  1. For example, the info: operator is entirely useless. The link: operator, in almost a decade of me trying it once in a great while, has never returned remotely as many links to my website as Google Webmaster Tools returns for inbound links, and seems to have been disabled entirely at some point.↩︎

  2. WP is increasingly out of date and, due to increasingly narrow policies about sourcing and preprints, is not a good place to look for references. It is a good place to look for terminology, though.↩︎

  3. University ILL privileges are one of the most underrated fringe benefits of being a student, if you do any kind of research or hobbyist reading—you can request almost anything you can find in WorldCat, whether it’s an ultra-obscure book or a master’s thesis from 1950! Why wouldn’t you make regular use of it‽ Of things I miss from being a student, ILL is near the top.↩︎

  4. I advise prepending, like https://sci-hub.tw/https://journal.com instead of appending, like https://journal.com.sci-hub.tw/ because the former is slightly easier to type but more importantly, Sci-Hub does not have SSL certificates set up properly (I assume they’re missing a wildcard) and so appending the Sci-Hub domain will fail to work in many web browsers due to HTTPS errors! However, if prepended, it’ll always work correctly.↩︎

  5. To further illustrate this IA feature: if one was looking for Alex St. John’s “Judgment Day Continued…”, a 2013 account of organizing the wild 1996 Doom tournament thrown by Microsoft, but one didn’t have the URL handy, one could search the entire domain by going to https://web.archive.org/web/*/http://www.alexstjohn.com/* and using the filter with “judgment”, or if one at least remembered it was in 2013, one could narrow it down further to https://web.archive.org/web/*/http://www.alexstjohn.com/WP/2013/* and then filter or search by hand.↩︎

  6. If any Blogspot employee is reading this, for god’s sake stop this insanity!↩︎

  7. Uploading is not as hard as it may seem. There is a web interface (user/password: “genesis”/“upload”). Uploading large files can fail, so I usually use the FTP server: curl -T "$FILE" ftp://anonymous@ftp.libgen.io/upload/. ↩︎

  8. Although flatbed scanning is sometimes destructive too—I’ve cracked the spine of books while pressing them flat into a flatbed scanner.↩︎

  9. My workaround is to export from gscan2pdf as DjVu, which avoids the bug, then convert the DjVu files with ddjvu -format=pdf; this strips any OCR, so I add OCR with ocrmypdf and metadata with exiftool.↩︎