Rich Link Previews in Eleventy

Generating attractive link previews with the Eleventy static site generator, Nunjucks, and html-metadata

Jens-Fabian Goetzmann

--

A few months ago, I re-built my personal website using the static site generator Eleventy. While migrating over all of my blog posts, I realized that for some posts, I wanted to be able to include rich link previews like the one below:

These link previews resemble the ones used on social networks like Twitter or Facebook, but also on publishing platforms like Medium and LinkedIn. In this post, I will walk through how to create them in Eleventy using Nunjucks as the templating language. I used Nunjucks because it supports asynchronous filters, and most of the modules that extract metadata from websites work asynchronously. To extract the metadata, I used html-metadata because it supports many different kinds of metadata (OpenGraph tags, general meta tags, and more).

The complete resulting code for these link previews is available on GitHub.

Initial setup

Let’s get started by setting everything up, installing eleventy as well as html-metadata:

npm init -y
npm install --save @11ty/eleventy
npm install --save html-metadata

We will also start with a pretty straightforward setup in .eleventy.js:
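A minimal version could look like this (the exact input and output directories are just an example; adjust them to your project):

module.exports = function (eleventyConfig) {
  return {
    dir: {
      input: ".",
      output: "_site"
    }
  };
};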

Setting up the link preview function

Next, we set up the link preview as an asynchronous Nunjucks filter inside .eleventy.js. You can read more about asynchronous Nunjucks filters in the eleventy documentation and the Nunjucks documentation.

Asynchronous Nunjucks filters get passed a callback(err, res) function that the filter should call with err set to an error (or null if the filter executed successfully), and res set to the result of the filter. A very simple version of the filter, without any actual scraping, may look as follows:
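For example (this is just a sketch; the plain <a> output is a stand-in until we add scraping):

module.exports = function (eleventyConfig) {
  eleventyConfig.addNunjucksAsyncFilter("linkPreview", function (link, callback) {
    // No scraping yet: just return a plain link as the "preview".
    callback(null, `<a href="${link}">${link}</a>`);
  });

  return {
    dir: {
      input: ".",
      output: "_site"
    }
  };
};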

This asynchronous filter can now be called from a simple index.md file. Note that we pipe the output through the linkPreview filter and then the safe filter, which prevents the output of linkPreview from being escaped (escaping would break the HTML, of course):

# Rich link previews in Eleventy
This is a demonstration of rich link previews in Eleventy
{{"https://www.jefago.com/" | linkPreview | safe}}

You should now be able to build the page using npx eleventy or run the local test server using npx eleventy --serve.

Scraping link metadata

So far, the linkPreview function is just returning a plain <a>-link. Let's get html-metadata to work and actually create a link that contains some metadata. To do this, we change the linkPreview function as follows:
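The version below is a sketch of that change: the class names (lp, lp-img, lp-text) and the exact markup are illustrative, and the metadata handling assumes the general and openGraph objects that html-metadata typically returns (the full version is in the GitHub repo).

const scrape = require("html-metadata");

// Replace &, <, > and " so scraped metadata cannot inject HTML into our page.
function escape(text) {
  if (!text) return "";
  return String(text)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

module.exports = function (eleventyConfig) {
  eleventyConfig.addNunjucksAsyncFilter("linkPreview", function (link, callback) {
    // Builds the preview HTML from the scraped metadata. Defined inside the
    // filter so that it closes over `link`.
    function format(metadata) {
      const og = metadata.openGraph || {};
      const general = metadata.general || {};

      // Prefer OpenGraph data, fall back to the page's general metadata.
      const title = escape(og.title || general.title || link);
      const description = escape(og.description || general.description || "");

      // og.image may be a single object or an array of objects with a `url`.
      let image = null;
      if (og.image) {
        image = escape(Array.isArray(og.image) ? og.image[0].url : og.image.url);
      }

      // Inline SVG used as a fallback when the page has no preview image.
      const fallback =
        '<svg viewBox="0 0 100 100" xmlns="http://www.w3.org/2000/svg">' +
        '<rect width="100" height="100" fill="#ddd"/></svg>';

      return (
        `<a class="lp" href="${escape(link)}">` +
        `<div class="lp-img">${image ? `<img src="${image}" alt="">` : fallback}</div>` +
        `<div class="lp-text"><h4>${title}</h4><p>${description}</p></div>` +
        `</a>`
      );
    }

    scrape(link)
      .then((metadata) => {
        if (!metadata) {
          callback(new Error("No metadata"), null);
        } else {
          callback(null, format(metadata));
        }
      })
      .catch((err) => callback(err, null));
  });

  return {
    dir: {
      input: ".",
      output: "_site"
    }
  };
};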

There’s a lot going on here, so let’s step through this one by one. The escape function merely makes sure that any metadata we've extracted doesn't contain HTML (so that we don't get any HTML injected in our page), by replacing the symbols <>"& with their respective HTML entity codes.

Inside of linkPreview, we define a helper function format that takes the metadata extracted using html-metadata and constructs an HTML snippet for the link preview. The function is defined inside of linkPreview mostly because it is a closure over the variable link which is local to the function linkPreview. In other words, we want to be able to access the link variable from inside the format function, and we can't pass it as a parameter (since the parameters of the function are defined by html-metadata).

You can see how the format function looks at the different types of metadata that html-metadata can extract; for example, to define the variable title it looks at both metadata.openGraph.title and metadata.general.title, depending on whether the linked page contains OpenGraph information.

For the actual HTML generation, the snippet contains an inline SVG which is used as a fallback if the link contains no image. You can of course remove or replace that.

The actual call to HTML metadata happens in this very short piece of code. In case html-metadata doesn’t return any metadata, the callback is called with the error set to No metadata.
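Pulled out of the sketch above, it is essentially just this:

scrape(link)
  .then((metadata) => {
    if (!metadata) {
      callback(new Error("No metadata"), null);
    } else {
      callback(null, format(metadata));
    }
  })
  .catch((err) => callback(err, null));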

In order for the generated HTML snippet to look like anything, you also need to create and include a CSS file, like the following bare-bones template. For the CSS file to be included, you will also need to create a layout and reference it from the index.md file; refer to the Eleventy documentation for details.
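A bare-bones example, matching the class names used in the sketch above, might look like this:

.lp {
  display: flex;
  margin: 1em 0;
  border: 1px solid #ddd;
  border-radius: 4px;
  overflow: hidden;
  color: inherit;
  text-decoration: none;
}

.lp-img {
  flex: 0 0 30%;
  background-color: #eee;
}

.lp-img img,
.lp-img svg {
  width: 100%;
  height: 100%;
  object-fit: cover;
}

.lp-text {
  padding: 0.5em 1em;
}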

Caching link metadata

While the above approach works perfectly fine, it will scrape metadata from the link every time the site is built. This increases build time (especially if you have a lot of rich link previews), and also makes builds more fickle if one of the linked pages is unavailable at build time. It is therefore a good idea to cache the results.

The idea here is actually very simple: We will just store the results of the scrape call as a JSON file. You can even check those files into your version control system, and wherever you then check out and build the site, the cached version of the link metadata will be used.

We will use some additional modules for the caching, so update the beginning of your .eleventy.js file as follows:
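Assuming we use Node's built-in crypto, fs, and path modules for the hashing and file handling, the requires at the top become:

const scrape = require("html-metadata");
const crypto = require("crypto");
const fs = require("fs");
const path = require("path");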

We will store the cached metadata in files in the directory _links, so create that directory first. To implement the caching, we extend the linkPreview function as follows:
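Here is a sketch of the cached version (escape() and format() stay exactly as before and are omitted here):

eleventyConfig.addNunjucksAsyncFilter("linkPreview", function (link, callback) {
  // Hash the URL so it can safely be used as a file name.
  const hash = crypto.createHash("sha1").update(link).digest("hex");
  const file = path.join("_links", hash + ".json");

  function format(metadata) {
    // ... unchanged from the version above ...
  }

  if (fs.existsSync(file)) {
    // Cached: read the stored metadata instead of scraping the page again.
    fs.readFile(file, (err, data) => {
      if (err) {
        callback(err, null);
      } else {
        callback(null, format(JSON.parse(data)));
      }
    });
  } else {
    // Not cached yet: scrape the page, then store the result for future builds.
    scrape(link)
      .then((metadata) => {
        if (!metadata) {
          callback(new Error("No metadata"), null);
          return;
        }
        fs.writeFile(file, JSON.stringify(metadata), () => {
          // Ignore write errors; we can always scrape again on the next build.
        });
        callback(null, format(metadata));
      })
      .catch((err) => callback(err, null));
  }
});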

Again, quite a bit is going on here. Let’s step through it one by one. In the first two lines, we’re creating a SHA-1 hash of the link URL, which we’re going to use (in hex encoding) as the file name. Hashing ensures that we don’t run into trouble with slashes and other special characters in the link, which we would if we just used the URL as the file name. On the other hand, having two URLs produce the same SHA-1 hash is close to impossible. (Note: While SHA-1 is no longer considered secure for cryptographic purposes, it’s completely fine for generating cache file names.) Another thing worth noting is that even small differences in the URL will lead to two different files being created. Let’s say, for instance, that you use one link preview pointing to https://www.jefago.com and another to https://www.jefago.com/; although they point to the same web page, their hashes would be different.

Next, we’re checking whether a file with that name already exists, using the synchronous function fs.existsSync(). (It would also be possible to use the asynchronous version here, since the filter is already asynchronous.) If the file exists, we try to read it. If reading fails, we pass an error to the callback. If it succeeds, we simply parse the data as JSON and pass it to the format() function; this is possible because the data we store in the file is just the JSON-serialized result of the scrape() function.

In case the file doesn’t exist, we basically have the same scrape() call as before. The only difference is that we also write the metadata to the file using JSON.stringify. If writing the file is unsuccessful, we simply ignore it (since we can always try again next time). This also means that if you didn’t create a directory called _links, saving the file will fail silently.

Lazy loading preview images

The last improvement I implemented is lazy loading the preview images; otherwise, a page with a lot of link previews can get very slow to load. For that, I used the small lazy loading library bLazy, which is loaded and initialized in an HTML template like this:
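For example (the script path is whatever location you serve bLazy from; adjust as needed):

<script src="/js/blazy.min.js"></script>
<script>
  // Lazily load preview images once they scroll into view.
  var bLazy = new Blazy({
    selector: ".lp img",
    // Once an image has loaded, give it a white background so the gray
    // placeholder doesn't show through transparent images.
    success: function (element) {
      element.style.backgroundColor = "#fff";
    }
  });
</script>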

This loads and initializes bLazy, looking for any images matching the CSS selector .lp img (and on success, sets the image background to white so the gray background doesn't show through any transparent images). In order for this to work, we just have to change one line in the format() function:
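In the markup from the sketch above, the <img> part of the snippet becomes something like this (any 1x1 transparent PNG works as the inline placeholder):

// src is a 1x1 transparent PNG placeholder; the real image URL goes into data-src.
`<img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAAC0lEQVR42mNkYAAAAAYAAjCB0C8AAAAASUVORK5CYII=" data-src="${image}" alt="">`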

This line changes the <img> element so that its src is a tiny base64-encoded inline transparent PNG, while the actual image URL is set in the data-src attribute, which bLazy uses to lazily load the image.

That’s all! I hope this was helpful. As mentioned above, the full code is available on GitHub.

--

Jens-Fabian Goetzmann

Head of Product at RevenueCat; previously at 8fit, Yammer, BCG.