Migrating from V4
One of the main aims for PostGraphile V5 was to replace the "lookahead engine"
with something better: easier to reason about, easier to maintain and extend,
easier for users to interact with, and more capable of handling GraphQL's
present and future feature set (V4's lookahead doesn't even support interfaces
and unions, let alone @stream, @defer, client-controlled nullability, and
everything else that's coming down the GraphQL pipeline).
We didn't set out to do so, but ultimately this ended up with us writing our own GraphQL runtime, called Grafast. This runtime is built around a carefully engineered planning phase followed by a highly optimized execution phase. This much more powerful system allowed us to generate more efficient SQL queries and to execute requests much faster than in V4, which was already pretty fast!
However, Grafast was completely different from V4's lookahead engine, and that lookahead engine was the beating heart of V4. Replacing it required us to rebuild the entire stack from scratch on top of these new foundations.
Since we had to rebuild everything anyway, we decided to fix a large number of
issues that had been bugging us for a while... The plugin system has been
consolidated and transformed, the configuration system has been consolidated
and transformed, some of the smart tags (@omit, @simpleCollections, etc) have
been replaced with behaviors (@behavior), all the plugins have been
rewritten or replaced, the defaults have evolved, ... but I'm getting ahead of
myself.
If you're reading this, you probably want to know how to take your PostGraphile V4 project and run it in PostGraphile V5 instead with minimal fuss. One of the reasons V5 took so long - other than that we invented an entirely new set of technologies and then rebuilt the entire Graphile GraphQL ecosystem from scratch on top of them - was the amount of effort we put into minimizing how much effort it will take you to move to V5.
Note that there are subpages dedicated to particular plugin/plugin generators that need their own V5 migration strategy.
Let's get started.
Basic configuration
PostGraphile V5 requires a "preset" to be specified; this allows users to start with the best settings for their intended use case without having to read pages of documentation first. It also allows us to evolve the defaults over time without breaking existing users' schemas - ultimately meaning we no longer need a "recommended options" section in the docs.
To achieve this, we've introduced a new module: graphile-config. This module
takes care of the common concerns relating to configuration, in particular:
presets, plugins and options. You store your config in a graphile.config.js
(or .ts, .mjs, .mts) file and this can then be used from the command line,
library mode, or even schema-only mode - they all share the same
configuration.
makeV4Preset
To make the transition to this new system easier, we've built a makeV4Preset
preset factory for you that converts a number of the options that you are
familiar with from V4 into their V5 equivalents. To use it, feed your options
into its first argument, and combine it with postgraphile/presets/amber (our
initial preset) to get a completed config:
import { PostGraphileAmberPreset } from "postgraphile/presets/amber";
import { makeV4Preset } from "postgraphile/presets/v4";
import { makePgService } from "postgraphile/adaptors/pg";
/** @type {GraphileConfig.Preset} */
const preset = {
extends: [
PostGraphileAmberPreset,
makeV4Preset({
/* PUT YOUR V4 OPTIONS HERE, e.g.: */
simpleCollections: "both",
jwtPgTypeIdentifier: '"b"."jwt_token"',
}),
],
pgServices: [
makePgService({
connectionString: process.env.DATABASE_URL,
schemas: ["app_public"],
}),
],
};
export default preset;
Larger example
Here's a larger documented preset with more details:
import "graphile-config";
import { PostGraphileAmberPreset } from "postgraphile/presets/amber";
import { makeV4Preset } from "postgraphile/presets/v4";
// Use the 'pg' module to connect to the database
import { makePgService } from "postgraphile/adaptors/pg";
/** @type {GraphileConfig.Preset} */
const preset = {
extends: [
// The initial PostGraphile V5 preset
PostGraphileAmberPreset,
// Change the options and add/remove plugins based on your V4 configuration:
makeV4Preset({
/* PUT YOUR V4 OPTIONS HERE, e.g.: */
simpleCollections: "both",
jwtPgTypeIdentifier: '"b"."jwt_token"',
appendPlugins: [
/*...*/
],
}),
// Note some plugins are now "presets", e.g.
// `@graphile/simplify-inflection`, those should be listed here instead of `appendPlugins`
],
plugins: [
/*
* If you were using `pluginHook` before, the relevant plugins will need
* listing here instead. You can also move the `appendPlugins` list here
* for consistency if you like.
*/
],
/*
* PostgreSQL database configuration.
*
* If you're using the CLI you can skip this and use the `-c` and `-s`
* options instead, but we advise configuring it here so all the modes of
* running PostGraphile can share it.
*/
pgServices: [
makePgService({
// Database connection string:
connectionString: process.env.DATABASE_URL,
// List of schemas to expose:
schemas: ["app_public"],
// Superuser connection string, only needed if you're using watch mode:
// superuserConnectionString: process.env.SUPERUSER_DATABASE_URL,
}),
],
};
export default preset;
Note that the appendPlugins/prependPlugins/skipPlugins options require
V5-compatible plugins, and pluginHook is no longer supported - please use the
plugins configuration key instead.
Note also that the PgNodeAliasPostGraphile plugin no longer exists, so instead
of skipping it with skipPlugins you should add the following to your preset:
const preset = {
// ...
schema: {
// ...
pgV4UseTableNameForNodeIdentifier: false,
},
};
Not all of the community's V4 plugins have been ported to V5 at time of writing, but with your help hopefully we can fix that! See the community plugins page for details of which plugins have been ported.
additionalGraphQLContextFromRequest
You can provide this to makeV4Preset({ additionalGraphQLContextFromRequest }),
so no specific action is required.
If you want to do this the V5 way then the replacement is the 'grafast.context' option; please see configuration - context.
const preset = {
//...
grafast: {
context(ctx) {
// Other servers/transports add different details to `ctx`.
const { req, res } = ctx.node ?? {};
return additionalGraphQLContextFromRequest(req, res);
},
},
};
pgSettings
You can provide this to makeV4Preset({ pgSettings }), so no specific action is
required. If you want to do this the V5 way then the replacement is to return
a pgSettings key from the GraphQL context returned from your 'grafast.context'
configuration; for more details see configuration - context.
const preset = {
//...
grafast: {
context(ctx) {
// Other servers/transports add different details to `ctx`.
const { req } = ctx.node ?? {};
return {
pgSettings: pgSettings(req),
};
},
},
};
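Alternatively, rather than wrapping your V4 pgSettings function, you can return the settings object directly from the context callback. Here's a minimal sketch (the role name and header are hypothetical placeholders):
const preset = {
  //...
  grafast: {
    context(ctx) {
      const { req } = ctx.node ?? {};
      return {
        pgSettings: {
          // Hypothetical values - use whatever settings your schema expects:
          role: "app_visitor",
          "app.user_id": String(req?.headers["x-user-id"] ?? ""),
        },
      };
    },
  },
};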
V4 options not (yet) supported
TODO: this isn't fully documented, please send PRs with any other options you notice aren't supported!
- legacyRelations - this has explicitly been removed from V5, see the Breaking GraphQL schema changes section below.
- disableQueryLog - V5 will integrate @graphile/logger but this work hasn't started yet
- operationMessages - this originates from the @graphile/operation-hooks plugin which has not yet been ported to V5
Breaking GraphQL schema changes
We've done our best to maintain as much compatibility with a V4 GraphQL schema
as possible when you use makeV4Preset(), but some breaking changes persist
(we'd argue they're for the better!). Of course if any of these are critical
issues to you they can all be solved by writing a plugin to shape the GraphQL
API how you need, but we suggest that if possible you accept these new changes.
- The long deprecated "legacy relations" are no longer generated (so the legacyRelations configuration parameter has been deleted).
- Node IDs for bigint columns have changed (specifically numbers under MAX_SAFE_INT now have quote marks inside the encoding too).
- In alignment with the Cursor connections specification, connection edges are nullable and cursors are not. It seems we implemented this the wrong way around in V4.
- For fooEdge fields on mutation payloads, the orderBy argument is now non-nullable. It still has a default, so this will only affect you if you were explicitly passing null (which seems unlikely since it's not useful).
- The fooEdge fields on mutation payloads now only exist if the table has a primary key.
- If you disable NodePlugin then the deletedFooId field on mutation payloads now will not be added to the schema (it never should have been).
- The bytea type is now natively supported, exposed as Base64EncodedBinary.
Many of the above issues will not cause any problems for the vast majority of your operations - an engineer from our sponsor Netflix reported that all 4,000 unique queries received by their very large and complex V4 internal schema in Feb 2023 validated successfully against their V5 schema - zero breakage! Despite this query compatibility, you may still need to migrate some types on the client side.
Other changes:
- The values generated for cursor differ. This is not deemed to be a breaking change, as these may change from one release to the next (even patch versions).
- The behavior of PageInfo's hasPreviousPage and hasNextPage differs from V4. In V4, when paginating forwards (via the first and after arguments) the hasPreviousPage field would run an additional subquery to determine if a previous page existed. The Cursor Connections Specification explicitly states you may do this if it's efficient to do so. Since adding a subquery isn't efficient, in V5 we follow the specification and always return false for hasPreviousPage when paginating forwards. In general, if you have an after cursor then you can imply there's probably a previous page, so no need for the system to check. (The same applies to hasNextPage when paginating backwards via the last and before arguments.) For clarity: when paginating forwards you can use hasNextPage to determine if there is a next page (and should not request hasPreviousPage); when paginating backwards you should use hasPreviousPage to determine if there is a previous page (and should not request hasNextPage).
- In some circumstances the location where a null is returned or an error is thrown may differ slightly between V4 and V5; this will still respect the GraphQL schema though, of course.
- PgIndexBehaviorsPlugin (the V5 equivalent of --no-ignore-indexes) now only removes the backwards ("has many") relation when a relation is unindexed, since the forwards ("belongs to") relation is always indexed.
Smart tag changes
Despite the following changes, if you're using makeV4Preset then you should
not need to take any action - the established V4 behavior should be
automatically emulated.
- The @uniqueKey smart tag is no more; the V4 preset converts it to @unique ...|@behavior -single -update -delete for you.
- The @omit smart tag is no more; the V4 preset converts it to the relevant behavior for you.
- The @simpleCollections smart tag is no more; the V4 preset converts it to the relevant behavior for you.
- The @foreignKey smart tag must reference a unique constraint or primary key; the V4 preset automatically sets pgFakeConstraintsAutofixForeignKeyUniqueness: true which creates this unique constraint for you (via a smart tag) and gives it the relevant behaviors so that it doesn't turn into more fields/arguments/etc.
The behavior system gives much finer-grained control over which things should or should not be exposed in the schema, and how. If you use the V4 preset then we'll automatically translate the old smart tags into their behavior equivalents, so you shouldn't need to worry too much about this right now. We do advise that you migrate to the behavior system though, as it's much more powerful.
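As a rough illustration of the mapping: where V4 used @omit update,delete on a table, V5 uses the behavior string -update -delete. If you prefer to manage smart tags from JavaScript, a sketch using makePgSmartTagsPlugin might look like the following (the table name is illustrative, and the exact rule shape should be checked against the smart tags documentation):
import { makePgSmartTagsPlugin } from "postgraphile/utils";
// Hypothetical example: hide the update/delete mutations for one table via
// the behavior system rather than V4's @omit smart tag.
export const UsersBehaviorPlugin = makePgSmartTagsPlugin({
  kind: "class", // "class" is PostgreSQL terminology for tables/views
  match: "app_public.users", // illustrative table
  tags: {
    behavior: "-update -delete",
  },
});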
Running
CLI
You can run V5 with the postgraphile (or pgl) command. This command
automatically reads the graphile.config.js file; you should use that for
advanced configuration of the PostGraphile CLI rather than sending loads of CLI
flags - this enables us to keep the documentation tighter (documenting just one
set of options rather than 4), gives you auto-complete (if your editor supports
it) on the options, allows you to easily add comments on why the various
options are enabled, and makes it easier to transition between CLI, library and
schema-only modes.
Currently the main options are:
- -P - the preset to use, e.g. -P postgraphile/presets/amber (or -P pgl/amber if you're using pgl)
- -p - the port to listen on; if not set then it will try and use 5678 but will fall back to a system-assigned port, so be sure to set this in production!
- -n - the host to listen on; defaults to 'localhost' (you may want to change this to '0.0.0.0' in Docker or similar environments)
- -c postgres://... - your connection string (same as in V4)
- -S postgres://... - your superuser connection string (replaces V4's -C/--owner-connection flag)
- -s app_public - your list of PostgreSQL schemas (same as in V4)
- -e - enable "explain" - this allows Ruru (formerly called PostGraphiQL) to render the operation plan and SQL queries that were executed
- -C - the config file to load; if not set we'll try and load graphile.config.js (or similar with a different extension) but won't raise an error if it doesn't exist
Example:
postgraphile -P postgraphile/presets/amber -c postgres:///my_db_name -s public -e
# or: pgl -P pgl/amber -c postgres:///my_db_name -s public -e
Library mode
Load your config from your graphile.config.js file:
import preset from "./graphile.config.js";
Then feed your preset into the postgraphile function to get a PostGraphile
instance (pgl):
const pgl = postgraphile(preset);
In V4 this would have been a middleware, but in V5 we let Grafserv handle
the webserver portions (rather than PostGraphile having its own internal
server); this allows us to provide deep integration with the various JS
webservers (much better than we had in V4!). This has, however, necessitated a
different approach, so instead of a middleware the result of calling
postgraphile() is a PostGraphile instance which provides a collection of
helper methods.
We'll be using the pgl.createServ() helper in a moment, but first load the
correct Grafserv adaptor for the JS server that you're using:
import { grafserv } from "grafserv/node";
// OR: import { grafserv } from "grafserv/express/v4";
// OR: import { grafserv } from "grafserv/koa/v2";
// OR: import { grafserv } from "grafserv/fastify/v4";
// etc
Then feed this into pgl.createServ to get a serv Grafserv instance:
const serv = pgl.createServ(grafserv);
Finally mount the serv instance into your server of choice using the API your
chosen Grafserv adaptor recommends (typically serv.addTo(...)):
import { createServer } from "node:http";
const server = createServer();
server.on("error", (e) => console.error(e));
serv.addTo(server);
server.listen(preset.grafserv?.port ?? 5678);
Let's pull this all together to see an example using the Node built-in HTTP server:
import preset from "./graphile.config.js";
import { postgraphile } from "postgraphile";
import { grafserv } from "grafserv/node"; // Adaptor for Node's HTTP server
import { createServer } from "node:http";
// Create an HTTP server (with error handler)
const server = createServer();
server.on("error", (e) => console.error(e));
// Create a PostGraphile instance (pgl)
const pgl = postgraphile(preset);
// Create a Grafserv (server adaptor) instance for this PostGraphile instance
const serv = pgl.createServ(grafserv);
// Attach a request handler to the server
serv.addTo(server);
// Start the server
server.listen(preset.grafserv?.port ?? 5678);
Node.js' http.createServer() does not need to be passed a request handler -
you can attach one later via server.on('request', handler), which is exactly
what we do behind the scenes in the example above. The reason for this
approach is that it gives us a chance to also add a server.on('upgrade', ...)
handler for dealing with websockets if preset.grafserv.websockets is true.
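Both grafserv settings referenced above live in your preset; for example:
const preset = {
  // ...
  grafserv: {
    // The port read via `preset.grafserv?.port` in the example above:
    port: 5678,
    // Enables the 'upgrade' handler (e.g. for GraphQL subscriptions over websockets):
    websockets: true,
  },
};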
The server-mounting code above might differ significantly depending on which
webserver framework you're using. In V4 we tried to support as many servers as
possible by reducing everything to the lowest common denominator: the Node
HTTP req and res objects. This made it easy to add the middleware, but made it
challenging for you to deeply integrate with the various middleware and
capabilities in your webserver of choice. In V5, we have created an independent
abstraction over HTTP requests which Grafserv handles, and then we provide
different adaptors (or you can write your own!) to convert back and forth
between your webserver framework and our internal request abstractions.
Please see the Grafserv documentation for details.
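For example, an Express integration might look roughly like this sketch (it assumes the grafserv/express/v4 adaptor's addTo(app, server) form; check the Grafserv documentation for your adaptor's exact API):
import preset from "./graphile.config.js";
import { postgraphile } from "postgraphile";
import { grafserv } from "grafserv/express/v4"; // Adaptor for Express v4
import { createServer } from "node:http";
import express from "express";
// Create the Express app, and mount it in a Node HTTP server so that
// websocket 'upgrade' requests can also be handled
const app = express();
const server = createServer(app);
server.on("error", (e) => console.error(e));
const pgl = postgraphile(preset);
const serv = pgl.createServ(grafserv);
// Attach the Grafserv route handlers to the Express app (and the server)
serv.addTo(app, server).catch((e) => {
  console.error(e);
  process.exit(1);
});
server.listen(preset.grafserv?.port ?? 5678);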
Schema only mode
createPostGraphileSchema is now just makeSchema (and can be imported from
postgraphile or graphile-build - your choice). It only accepts one argument
now - the preset that you'd load from your graphile.config.js (or similar)
file:
import { makeSchema } from "postgraphile";
import preset from "./graphile.config.js";
const { schema, resolvedPreset } = await makeSchema(preset);
There is no need for withPostGraphileContext any more; the context no longer
has a lifecycle that needs to be carefully managed. Just run the query through
grafast as you normally would through graphql.
The context will still need to be generated, however. The easy option is to
have this done automatically by passing the resolvedPreset and a
requestContext object as part of the args to your grafast call:
import { grafast } from "grafast";
import { makeSchema } from "postgraphile";
import preset from "./graphile.config.js";
const { schema, resolvedPreset } = await makeSchema(preset);
const args = {
schema,
resolvedPreset,
requestContext: {},
source: /* GraphQL */ `
query MyQuery {
__typename
}
`,
};
const result = await grafast(args);
console.dir(result);
Alternatively, you can use the hookArgs method to hook the execution args and
attach all the relevant context data to it based on your preset and its
plugins. This is somewhat more involved; please see the schema only usage
documentation for that.
Server
PostGraphile's HTTP server has been replaced with Grafserv, our new ultra-fast general purpose Grafast-powered GraphQL server. Currently this doesn't support the same hooks that V4's server did, but we can certainly extend the hooks support over time. If there's a particular hook you need, please reach out.
GraphiQL
Our beloved PostGraphiQL has been transformed into a more modern general purpose GraphiQL called Ruru. This still integrates into the server, but you can also run it directly from the command line now too! It doesn't have a plugin system yet, but it will 😁
You can try ruru out against any GraphQL API (not only those made by PostGraphile) via:
npx ruru@beta -PSe http://localhost:5678/graphql
Plugins
The plugin architecture has been transformed: whereas plugins previously were
functions, they are now declarative objects. For example a plugin that adds
the Query.four field might have looked like this in V4:
// Version 4 example; will not work with V5!
module.exports = (builder) => {
builder.hook("GraphQLObjectType:fields", (fields, build, context) => {
const {
graphql: { GraphQLInt },
} = build;
const { Self } = context;
if (Self.name !== "Query") return fields;
return build.extend(
fields,
{
four: {
type: GraphQLInt,
resolve() {
return 4;
},
},
},
"Adding Query.four",
);
});
};
In V5 the equivalent would be:
const { constant } = require("grafast");
module.exports = {
name: "AddQueryFourFieldPlugin",
version: "0.0.0",
schema: {
hooks: {
GraphQLObjectType_fields(fields, build, context) {
const {
graphql: { GraphQLInt },
} = build;
const { Self } = context;
if (Self.name !== "Query") return fields;
return build.extend(
fields,
{
four: {
type: GraphQLInt,
plan() {
return constant(4);
},
},
},
"Adding Query.four",
);
},
},
},
};
The callback itself is very similar; the hook name has been renamed to use
underscores instead of colons (GraphQLObjectType:fields ->
GraphQLObjectType_fields), and rather than explicitly calling a hook
function to register it, it's implicitly registered by being part of the
plugin object.
This new structure allows the system to learn more about the plugin before actually running it, and can improve the debugging messages when things go wrong.
Keep in mind that we no longer have the lookahead system, so
addArgDataGenerator and its ilk no longer exist; instead we declare a
Grafast applyPlan or similar plan resolver.
Plans, not resolvers
The new system uses Grafast, which is plan-based, so rather than writing
traditional GraphQL resolvers for each field, you will write Grafast plan
resolvers. This makes the system a lot more powerful, and it is a lot more
intuitive than our lookahead system once you've spent a little time learning
it. It also completely removes the need for old awkward-to-use directives such
as @pgQuery and @requires! (See makeExtendSchemaPlugin migration for details
on migrating these directives.)
Introspection
build.pgIntrospectionResultsByKind is no more; instead we have a new library
pg-introspection that takes care of introspection and has parts of the
PostgreSQL documentation built in via TypeScript hints.
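If you need the raw introspection data yourself, you can use pg-introspection directly. Roughly, based on its README (treat the helper names here as assumptions and double-check against its documentation):
import { makeIntrospectionQuery, parseIntrospectionResults } from "pg-introspection";
import pg from "pg";
const pool = new pg.Pool({ connectionString: process.env.DATABASE_URL });
// Run the bundled introspection query and parse the JSON it returns
const sql = makeIntrospectionQuery();
const { rows } = await pool.query(sql);
const introspection = parseIntrospectionResults(rows[0].introspection);
// e.g. look up a table (a "class" in PostgreSQL terminology) by name
const usersTable = introspection.classes.find((rel) => rel.relname === "users");
console.log(usersTable?.relname, usersTable?.relkind);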
The schema build process has been split into two phases:
- gather: looks at the introspection results and uses them to generate resources, codecs, relations and behaviors (the four building blocks)
- schema: looks at these resources, codecs, relations and behaviors and generates a GraphQL schema from them
Generally introspection data is not available during the schema phase, which
also means that you can manually write your own
resources/codecs/relations/behaviors and have a GraphQL schema autogenerated
from them independent of what your PostgreSQL schema actually is!
Introspection cache
There is no introspection cache any more, so no --read-cache, --write-cache,
readCache or writeCache. Instead you can build the GraphQL schema and then
export it as executable code using graphile-export. In production you simply
run this exported code - there's no need for database introspection, schema
building plugins, etc - and you instantly get your schema without the
complexities (or dependencies!) required in building it dynamically.
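A minimal sketch of exporting a schema might look like this (the output path is illustrative; see the graphile-export documentation for the available options):
import { makeSchema } from "postgraphile";
import { exportSchema } from "graphile-export";
import preset from "./graphile.config.js";
const { schema } = await makeSchema(preset);
// Write the schema out as executable JavaScript...
await exportSchema(schema, `${process.cwd()}/exported-schema.mjs`);
// ...then in production, import the schema from that file rather than
// introspecting the database and building it dynamically.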
buildSchema
buildSchema is now synchronous - you need to run the asynchronous gather phase
first. The arguments have also changed: first is the preset (this encompasses
the list of plugins which was previously the first argument, and any settings
which were previously the second argument), second comes the result of gather
and finally comes the shared object which contains inflection.
import { resolvePreset } from "graphile-config";
import { buildInflection, gather, buildSchema } from "graphile-build";
import preset from "./graphile.config.js";
const resolvedPreset = resolvePreset(preset);
const shared = { inflection: buildInflection(resolvedPreset) };
const input = await gather(resolvedPreset, shared);
const schema = buildSchema(resolvedPreset, input, shared);
postgraphile-core
postgraphile-core is no more (look ma, I'm a poet!); here's how to replace
createPostGraphileSchema:
Before:
import { createPostGraphileSchema } from "postgraphile-core";
// ...
const schema = await createPostGraphileSchema(DATABASE_URL, "public", {
// options
});
After:
import { makeSchema } from "postgraphile";
import preset from "./graphile.config.js";
const { schema, resolvedPreset } = await makeSchema(preset);
Module landscape
In PostGraphile V4 the main modules you'd deal with were:
- postgraphile - orchestration layer and CLI/server/middleware for our GraphQL schema, plus pluginHook "server plugins" functionality, and the home of "PostGraphiQL", our embedded PostGraphile-flavoured GraphiQL.
- postgraphile-core - integration layer between the postgraphile module and the graphile-build system that builds our GraphQL schema.
- graphile-build - contains the schema plugin system, ability to build a GraphQL schema from plugins, basic plugins for all GraphQL schemas, and of course the complex lookahead system. Nothing specific to databases at all.
- graphile-build-pg - a collection of plugins for graphile-build that teach it about PostgreSQL databases, including introspecting the database and generating all the GraphQL types/fields/etc and their resolvers and look-ahead information
- pg-sql2 - build SQL via template literals.
- graphile-utils - a collection of plugin generators to help you extend your graphile-engine-based schema
In PostGraphile V5, we have split things up into more packages that each have a specific focus:
- postgraphile - much thinner now, acts as just the orchestration layer and CLI
- graphile-config - system for managing presets and plugins
- grafserv - our Grafast-optimized Node.js webserver interface layer - replaces the server/middleware that was previously in postgraphile
- grafast - the runtime for our GraphQL schemas - performs planning and execution of GraphQL requests - replaces the "lookahead" system that was previously in graphile-build; not related to automatically building a GraphQL schema; completely generic - has no knowledge of databases
- @dataplan/pg - "step classes" for Grafast to use to communicate with PostgreSQL databases; not related to automatically building a GraphQL schema
- @dataplan/json - "step classes" for Grafast to use to parse/stringify JSON
- ruru - our Grafast-enhanced GraphiQL distribution that can either be served by grafserv directly or used standalone on the CLI
- ruru-components - the underlying React components used in ruru
- graphile-build - ability to build a GraphQL schema from graphile-config plugins via the gather and schema phases, basic plugins for all GraphQL schemas
- graphile-build-pg - a collection of plugins for graphile-build that teach it about PostgreSQL databases, including introspecting the database and generating all the GraphQL types/fields/etc and their plans.
- pg-introspection - a strongly typed PostgreSQL introspection library
- pg-sql2 - build SQL via template literals.
- graphile-export - ability to export an in-memory PostGraphile GraphQL schema to executable code that can be written to a file
- graphile - swiss army knife CLI with utilities for dealing with everything else
- graphile-utils - a collection of plugin generators to help you extend your graphile-build-based schema
- eslint-plugin-graphile-export - ESLint plugin to help ensure your code is compatible with being exported via graphile-export
One major advantage of this approach is that when you export your GraphQL schema as executable code, very few of these dependencies will be needed, making your runtime dependencies much smaller. Another advantage is that it increases the audience for a lot of these modules; for example, grafast contains a drop-in replacement for graphql's execute method and can result in much faster GraphQL schemas even without going anywhere near schema autogeneration. A larger audience means more eyes on the code and ultimately higher quality software for everyone.
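As a rough sketch of that drop-in usage with a plain, hand-written graphql-js schema (nothing PostGraphile-specific here):
import {
  GraphQLInt,
  GraphQLObjectType,
  GraphQLSchema,
  parse,
  validate,
} from "graphql";
import { execute } from "grafast"; // instead of: import { execute } from "graphql";
// A plain GraphQL schema with a traditional resolver
const schema = new GraphQLSchema({
  query: new GraphQLObjectType({
    name: "Query",
    fields: {
      four: { type: GraphQLInt, resolve: () => 4 },
    },
  }),
});
const document = parse("{ four }");
const errors = validate(schema, document);
if (errors.length === 0) {
  const result = await execute({ schema, document });
  console.dir(result); // { data: { four: 4 } }
}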