# :keyset


:keyset is the fastest paginator for SQL collections.

```ruby
# Controller
@pagy, @records = pagy(:keyset, set, **options)
```

The only supported helper:

```erb
<%== @pagy.next_tag(text: 'Next page &gt;') %>
```


There are a few peculiar aspects of the keyset pagination technique that you might not be familiar with. Here is a concise list:

| Term | Description |
|------|-------------|
| offset pagination | The technique to fetch each page by incrementing the offset from the collection start. It requires two queries per page (or only one if you use countless): it's slow toward the end of big tables. It can be used for a rich frontend: it's the classic pagy pagination. |
| keyset pagination | The technique to fetch the next page starting after the latest fetched record in a uniquely ordered collection. It requires only one query per page (without OFFSET). If properly indexed, it's the fastest technique, regardless of the table size and position. It supports only infinite pagination, with no other frontend helpers. |
| keynav pagination | The pagy exclusive technique that uses keyset pagination while providing nearly complete UI support. The fastest technique with UI capabilities. |
| uniquely ordered | The property of a set in which the concatenation of the values of the ordered columns is unique for each record. It is similar to a composite primary key for the ordered table, but dynamically based on the keyset columns. |
| set | The uniquely ordered ActiveRecord::Relation or Sequel::Dataset collection to paginate. |
| keyset | The hash of column/order pairs. Pagy extracts it from the order of the set. |
| keyset attributes | The hash of attributes of the keyset columns of a record. |
| cutoff | The value that identifies where the page ends and the next one begins. It is encoded as a Base64 URL-safe string. |
| page | The current page, i.e. the page of records beginning after the cutoff of the previous page. Also the :page option, which is set to the cutoff of the previous page. |
| next | The next page, i.e. the page of records beginning after the cutoff. Also the cutoff value returned by the next method. |

Ensure that your set is uniquely ordered, and that your tables have the appropriate indexes.

Depending on your order requirements, here is how you set it up:

  • keyset: {...}
    • Set it only to force the keyset hash of column/order pairs (it is set automatically from the order of the set).
  • tuple_comparison: true
    • Enables the tuple comparison, e.g. (brand, id) > (:brand, :id). It works only when all the ordered columns have the same direction, hence it's ignored for mixed orders. Check how your DB supports it (your keyset should include only NOT NULL columns).
  • pre_serialize: serialize
    • Set it to a lambda that receives the keyset_attributes hash. Modify this hash directly to customize the serialization of specific values (e.g., to preserve Time object precision). The lambda's return value is ignored.

      ```ruby
      serialize = lambda do |attributes|
        # Convert it to a string matching the stored value/format in the SQLite DB
        attributes[:created_at] = attributes[:created_at].strftime('%F %T.%6N')
      end
      ```
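To see the in-place mutation in action, here is a standalone sketch (the attribute values are made up for illustration; no DB involved):

```ruby
# A lambda that mutates the keyset attributes hash in place:
# its return value is ignored, only the mutation matters.
serialize = lambda do |attributes|
  # Format the Time with microsecond precision, as stored by SQLite
  attributes[:created_at] = attributes[:created_at].strftime('%F %T.%6N')
end

attributes = { id: 42, created_at: Time.utc(2024, 5, 6, 7, 8, 9, 123_456) }
serialize.call(attributes)
puts attributes[:created_at]  # prints "2024-05-06 07:08:09.123456"
```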

See also Common Options

  • records
    • The Array of fetched records for the current page.

See also Common Readers


  • You pass a uniquely ordered set, and pagy pulls the :limit of records of the first page.
  • You request the next URL, which has the page query string param set to the cutoff of the current page.
  • At each request, the new page is decoded into arguments that are coupled with a where filter query, and a :limit of new records is retrieved.
  • The collection ends when pagy.next.nil?.
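The request cycle above can be sketched with a plain-Ruby simulation (an in-memory stand-in for the SQL query; `fetch_page` and the `SET` array are made up for illustration, not pagy's API):

```ruby
LIMIT = 10
SET   = ('a'..'z').to_a + %w[za zb zc]  # 29 uniquely ordered "records"

# Simulates SELECT ... WHERE id > cutoff ORDER BY id LIMIT 10
# (cutoff == nil means first page: no WHERE filter)
def fetch_page(set, cutoff, limit)
  filtered = cutoff ? set.select { |id| id > cutoff } : set
  records  = filtered.first(limit)
  # The next cutoff is the last id, or nil when the set is exhausted
  next_cutoff = records.size == limit && filtered.size > limit ? records.last : nil
  [records, next_cutoff]
end

pages  = []
cutoff = nil
loop do
  records, cutoff = fetch_page(SET, cutoff, LIMIT)
  pages << records
  break if cutoff.nil?  # the pagy.next.nil? equivalent
end
pages.map(&:size)  # => [10, 10, 9]
```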

The cutoff of a page is the value that identifies where the page has ended, and the next one begins.

Let's consider an example: a simple set of 29 records with an id column populated by character keys, ordered by order(:id).

Assuming a LIMIT of 10, the "first page" will just include the first 10 records in the set: no cutoff value is known so far...

                  │ first page (10)  >│ rest (19)                          >│
beginning of set >[· · · · · · · · · ·]· · · · · · · · · · · · · · · · · · ·]< end of set

At this point, it's exactly the same first page you would pull with OFFSET pagination. However, we don't want to use OFFSET to get the records after the first 10: that would be slow in a big table. We need a way to identify the beginning of the next page without COUNTing the records (counting is precisely what we want to avoid for performance).

So we read the id of the last one, which is the value X in our example... and that is the cutoff value of the first page. It can be described as: "the point up to the value X in the id column".

Notice that this is not like saying "up to the record X". It's important to understand that a cutoff refers just to a cutoff value of a column (or the values of multiple columns, in the case of multi-column keysets).

Indeed, that very record could be deleted right after we read it, and our cutoff-X will still be the valid truth that we paginated the set up to the "X" value, cutting any further record off the page...

                  │ first page (10)  >│ second page (10) >│ rest (9)       >│
beginning of set >[· · · · · · · · · X]· · · · · · · · · ·]· · · · · · · · ·]< end of set
                                     ▲
                                   cutoff-X

To get the next page of records (i.e. the "second page"), we pull the next 10 records AFTER the cutoff-X. Again, we read the id of the last one, which is Y: that is our new cutoff-Y, the end of the current page; the next page will begin AFTER it...

                  │ first page (10)  >│ second page (10) >│ last page (9)  >│
beginning of set >[· · · · · · · · · X]· · · · · · · · · Y]· · · · · · · · ·]< end of set
                                     ▲                   ▲
                                   cutoff-X            cutoff-Y

When we pull the next page from the cutoff-Y, we find only the remaining 9 records: that's the "last page". It naturally ends with the end of the set, so it has no cutoff value separating it from further records.
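In SQL terms, each page after the first is fetched with a WHERE filter on the keyset column instead of an OFFSET. A minimal sketch of the conceptual query for a single-column keyset (the `keyset_page_sql` helper and the table/column names are illustrative, not pagy's internals; real code would use bind parameters, not string interpolation):

```ruby
# Builds the conceptual SQL for a keyset page pull:
# no cutoff means first page, otherwise filter AFTER the cutoff value.
def keyset_page_sql(cutoff, limit)
  where = cutoff ? "WHERE id > '#{cutoff}' " : ''
  "SELECT * FROM records #{where}ORDER BY id LIMIT #{limit}"
end

keyset_page_sql(nil, 10)  # first page: no cutoff yet
# => "SELECT * FROM records ORDER BY id LIMIT 10"
keyset_page_sql('X', 10)  # second page: records AFTER cutoff-X
# => "SELECT * FROM records WHERE id > 'X' ORDER BY id LIMIT 10"
```

Notice that neither query uses OFFSET: with an index on id, the DB seeks directly to the cutoff value regardless of how deep into the table it lies.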


# 1. Records may repeat or be missing from successive pages


# 2. The order is OK, but the DB is still slow