
ArchiveBox
Because modern websites are complicated and often rely on dynamic content, ArchiveBox archives sites in several different formats beyond what public archiving services like Archive.org and Archive.is are capable of saving. ArchiveBox imports a list of URLs from stdin, a remote URL, or a file, then adds the pages to a local archive folder: wget creates a browsable HTML clone, youtube-dl extracts any media, and a full instance of headless Chrome produces a PDF, a screenshot, a DOM dump, and more. Using multiple methods and the market-dominant browser to execute JS ensures we can save even the most complex, finicky websites in at least a few high-quality, long-term data formats.
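At a high level, the capture step amounts to shelling out to those tools once per URL. The snippet below is a minimal, illustrative Python sketch of that idea, not ArchiveBox's actual implementation; the `google-chrome` binary name and the output paths are assumptions and vary by platform and configuration.

```python
# Illustrative sketch only (not ArchiveBox's real code): capture one URL in
# several formats by shelling out to the same tools ArchiveBox drives.
import subprocess

def archive_url(url, out_dir="."):
    # Browsable clone: wget fetches the page plus its requisites and
    # appends .html to extensionless pages (--adjust-extension)
    subprocess.run(
        ["wget", "--page-requisites", "--adjust-extension", "--convert-links",
         f"--directory-prefix={out_dir}", url],
        check=False,
    )

    # "google-chrome" is an assumed binary name; it differs per platform
    chrome = ["google-chrome", "--headless", "--disable-gpu"]

    # Printed PDF of the fully rendered page
    subprocess.run(chrome + [f"--print-to-pdf={out_dir}/output.pdf", url], check=False)

    # 1440x900 screenshot
    subprocess.run(
        chrome + ["--window-size=1440,900", f"--screenshot={out_dir}/screenshot.png", url],
        check=False,
    )

    # DOM dump after JS has executed (Chrome writes it to stdout)
    dom = subprocess.run(chrome + ["--dump-dom", url],
                         capture_output=True, text=True, check=False)
    with open(f"{out_dir}/output.html", "w") as f:
        f.write(dom.stdout)

    # Any audio/video plus metadata found on the page
    subprocess.run(
        ["youtube-dl", "--output", f"{out_dir}/media/%(title)s.%(ext)s", url],
        check=False,
    )
```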
Can import links from:
- Pocket, Pinboard, Instapaper
- RSS, XML, JSON, or plain text lists
- Browser history or bookmarks (Chrome, Firefox, Safari, IE, Opera, and more)
- Shaarli, Delicious, Reddit Saved Posts, Wallabag, Unmark.it, and any other text with links in it! (See the sketch after this list.)
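Because the loosest input format is simply "text with links in it", the simplest possible parser is a URL regex over stdin. This is a hedged sketch of that fallback idea only, not ArchiveBox's actual parser, which also understands the structured formats above.

```python
# Sketch of the loosest import path: scrape http(s) URLs out of arbitrary text.
import re
import sys

URL_RE = re.compile(r'https?://[^\s<>"]+')

def extract_urls(text):
    """Return unique URLs in first-seen order."""
    seen = {}
    for match in URL_RE.finditer(text):
        seen.setdefault(match.group(0), None)
    return list(seen)

if __name__ == "__main__":
    # e.g.  cat some_export_with_links.txt | python extract_urls.py
    for url in extract_urls(sys.stdin.read()):
        print(url)
```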
Can save these things for each site:
- `favicon.ico`: favicon of the site
- `example.com/page-name.html`: wget clone of the site, with `.html` appended if not present
- `output.pdf`: printed PDF of the site using headless Chrome
- `screenshot.png`: 1440x900 screenshot of the site using headless Chrome
- `output.html`: DOM dump of the HTML after rendering using headless Chrome
- `archive.org.txt`: a link to the saved site on Archive.org
- `warc/`: gzipped WARC file `<timestamp>.gz` for the HTML
- `media/`: any mp4, mp3, subtitles, and metadata found using youtube-dl
- `git/`: clone of any repository for GitHub, Bitbucket, or GitLab links
- `index.html` & `index.json`: HTML and JSON index files containing metadata and details

The archiving is additive, so you can schedule `./archive`
to run regularly and pull new links into the index. All the saved content is static and indexed with JSON files, so it lives forever and is easily parseable; no always-running backend is required.
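Because the index is plain JSON on disk, any script can consume it directly with no server running. The example below is a hedged sketch of that idea; the `archive/index.json` path and the `links`/`timestamp`/`url`/`title` field names are assumptions for illustration, not a documented schema.

```python
# Hedged sketch: read the static JSON index directly, no backend required.
# The path and field names are assumptions; adjust them to match your index.json.
import json

def list_archived_links(index_path="archive/index.json"):
    with open(index_path) as f:
        index = json.load(f)
    # Assumed shape: {"links": [{"timestamp": ..., "url": ..., "title": ...}, ...]}
    for link in index.get("links", []):
        print(link.get("timestamp", ""), link.get("url", ""), link.get("title", ""))

if __name__ == "__main__":
    list_archived_links()
```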