I've been on a quest to make it easier to use GraphQL in a reliable and high-performance way using Node and TypeScript. I've tried a number of different approaches, and I'm forging ahead with PostGraphile and GraphQL Code Generator for introspection and automatic code generation as the best strategy for using GraphQL effectively with as little effort as possible.
Part 1 covered the goals of the architecture, and introduced how to use PostGraphile with Koa. Today's Part 2 will walk through how to extend the automatic schema and build business logic, and it will introduce GraphQL Code Generator to support consistent and accurate type-safe access to your data. Part 3 will introduce a React front end and show how to generate automatic types and components to use with Apollo Client.
Querying Like a Client
GraphQL gives API clients a rich language to use when requesting data, making it easy for users to dive into related data and select everything they need in one call. When building business logic for the API, however, we often set that aside and rely on ORMs or raw SQL to retrieve data from our sources. With the graphile utilities that are added to each PostGraphile plugin request, though, we can execute GraphQL operations against the built-in schema as if we were an external client - without introducing the complexity of solutions like schema stitching or local socket HTTP calls.
For each request that a PostGraphile plugin hands off to a resolver, it provides an augmented resolveInfo object. PostGraphile attaches a resolveInfo.graphile property, and if you dig a bit deeper you'll find resolveInfo.graphile.build.graphql. This is an instance of the graphql module (as in import GraphQL from 'graphql') that executes against the local PostGraphile schema. Through the @graft/server package that I'm currently working on, I've provided simple query and mutation functions that make it easy to execute operations against that GraphQL instance. The code can be found here, though documentation is sorely lacking at the moment. I intend to fill in more details there as I have time in the days ahead.
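To give a sense of what those helpers do, here's a simplified sketch of the idea behind a query function. This is illustrative only - not the actual @graft/server implementation - and the GraphileResolveInfo shape is an assumption based on what PostGraphile attaches to resolveInfo:

import {DocumentNode, GraphQLResolveInfo, print} from 'graphql'

// PostGraphile augments resolveInfo with a graphile property whose
// build.graphql is the GraphQL module bound to the local schema
type GraphileResolveInfo = GraphQLResolveInfo & {
  graphile: {build: {graphql: typeof import('graphql')}}
}

export const query = async <Data, Variables extends {[key: string]: unknown}>(
  resolveInfo: GraphileResolveInfo,
  context: unknown,
  options: {query: DocumentNode; variables?: Variables}
): Promise<Data | undefined> => {
  const {graphql} = resolveInfo.graphile.build

  // Execute the operation against the local schema, just as an external
  // client would - minus the HTTP round-trip
  const result = await graphql.graphql(
    resolveInfo.schema,
    print(options.query),
    undefined,
    context,
    options.variables
  )

  if (result.errors && result.errors.length) {
    throw new Error(result.errors.map(err => err.message).join('\n'))
  }

  return (result.data || undefined) as Data | undefined
}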
This isn't always the best way to write your business logic. Many operations can be easily written as stored Postgres functions, which PostGraphile automatically turns into Queries or Mutations. For each situation you encounter, take a moment to consider whether a Postgres function or a PostGraphile plugin is the right approach.
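For example, here's a hypothetical active_users function defined in a Knex migration (assuming the "user" table and is_active column used throughout this series). Because it's marked stable, PostGraphile exposes it automatically as an activeUsers Query, with no resolver code required:

// Hypothetical example: a stable SQL function that PostGraphile exposes
// automatically as an `activeUsers` Query
await knex.raw(`
  create function active_users() returns setof "user" as $$
    select * from "user" where is_active = true;
  $$ language sql stable;
`)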
One example of something you may want to build as a PostGraphile plugin is an Invite User flow. This is a flow that likely requires you to interface with external services, and there are often Node libraries available that make it easy to work with those services. The first thing I like to do is set up a set of core CRUD functions that cover the automatically generated create-read-update-delete operations.
Covering the CRUD
For now, I'm implementing these as static functions in a controller module. Because of the TypeScript types needed to cover everything that goes into the operation, abstracting this out to generic functions ends up being nearly as verbose as writing out the individual functions themselves.
For our User type, it makes sense to add getAll, getById, create, and update functions to cover the core functionality. I tend to avoid delete for Users, preferring to "deactivate" them rather than remove their rows in most applications.
Let's start with a couple of prerequisite types that I'll often add to a top-level Config module:
// GraphileRequest comes from the @graft/server package mentioned above -
// assuming a namespace import along these lines
import * as PostGraphileUtils from '@graft/server'

// MyUser is your application's own user type, populated by your auth middleware
export interface Context {
  user: MyUser
}

export type AppRequest = PostGraphileUtils.GraphileRequest<Context>
The Context type covers the results of the additionalGraphQLContextFromRequest function that we passed into the PostGraphile config in Part 1. This allows you to pass through things like authentication data from your middleware. I like to define an AppRequest type alias that passes my custom Context type to GraphileRequest automatically. This GraphileRequest is defined in my @graft/server module, and it bundles up some tools provided by PostGraphile and makes sure they are well-typed.
To perform a simple getAll operation, I define a function that requires a lot of TypeScript types:
import {query, mutation} from '@graft/server'
// Generated by GraphQL Code Generator (more on where these come from below) -
// adjust the relative path to wherever Schema.ts is generated
import {
  AllUsersDocument,
  AllUsersQuery,
  AllUsersQueryVariables,
  UserCondition,
  UserDataFragment,
} from '../Schema'
export const getAll = async (
request: AppRequest,
condition?: UserCondition
): Promise<UserDataFragment[]> => {
const users = await query<
AllUsersQuery,
AllUsersQueryVariables
>(request, {
query: AllUsersDocument,
variables: {
condition,
},
})
const allUsers = users && users.allUsers
return allUsers
? (allUsers.nodes.filter(Boolean) as UserDataFragment[])
: []
}
To start with, I pass in my custom AppRequest type. Then, I pull in a UserCondition type to cover the various options for filtering users that PostGraphile provides. I pull in a UserDataFragment to cover the response object that I want my function to drill down to once the Promise is resolved. I pull in AllUsersQuery to cover the shape of the full response payload, and AllUsersQueryVariables to cover the variables that the Query requires. Finally, AllUsersDocument is the actual GraphQL document (the return value of the gql utility that many of us use) we're operating on.
Whew! 😅 Where are all of these types supposed to come from?? Well, previously, I would have said "We have to actually define them somewhere", while choking back the tears. This has certainly been my experience in the past on the client-side, defining types to use with Apollo Client. Today, however, I joyfully rely on GraphQL Code Generator to do the work for me!
The Next Generation
First, I install some new dependencies:
yarn add --dev @graphql-codegen/add @graphql-codegen/cli @graphql-codegen/typescript @graphql-codegen/typescript-operations @graphql-codegen/typescript-react-apollo
I add a simple codegen.yml to the top-level folder of my project:
schema: 'http://localhost:8000/graphql'
documents: ./src/**/*.graphql
generates:
  src/Schema.ts:
    plugins:
      - add: '// WARNING: This file is automatically generated. Do not edit.'
      - typescript
      - typescript-operations
      - typescript-react-apollo
    config:
      namingConvention: change-case#pascalCase
      withHOC: false
      withComponent: false
      withMutationFn: false
When I run code generation, I need to have my PostGraphile API running at http://localhost:8000/graphql. The Code Generator uses that endpoint to generate the core typescript and typescript-operations content that gets saved to src/Schema.ts in the example above.
Alternatively, I could use PostGraphile's exportGqlSchemaPath option to automatically write the full schema to a .graphql file, and point Code Generator at that file instead of the URL. Some IDE tools can also use the file to provide in-editor validation of client-side .graphql documents, though some of them can point to a URL as well. Working from a file may make for a more efficient workflow, since your tools only have to re-read the schema when the file actually changes. I haven't implemented this strategy yet, myself, but I plan to give it a try in the days ahead.
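For reference, that option is just another property on the PostGraphile config - something along these lines, assuming the DATABASE_URL connection string and 'public' schema from the Part 1 setup:

// Sketch: write the schema to a local file on startup
postgraphile(DATABASE_URL, 'public', {
  // ...the options from Part 1...
  exportGqlSchemaPath: './schema.graphql',
})

With that in place, the schema property in codegen.yml could point at ./schema.graphql instead of the URL.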
You might see typescript-react-apollo up there and think that I'm making a typo, but I'm intentionally using it to match up with the types that I'm generating for my React clients, so that things are more consistent between the client and API code. I disable the withHOC, withComponent, and withMutationFn options so that I'm not actually pulling in Apollo - just generating the supporting types. This Code Generator plugin is where the AllUsersDocument, AllUsersQuery, AllUsersQueryVariables, and UserDataFragment types all come from.
The documents property with ./src/**/*.graphql as its value is important here - to make use of schema operations as a client, you need to define Documents that describe each usage of the schema. This is what the gql definitions in a typical Apollo Client project do. For my purposes, I define them together in .graphql files so that they can be easily ingested by Code Generator. Here's what my users.graphql file looks like:
fragment UserData on User {
id
isActive
inviteStatus
username
createdAt
updatedAt
}
query allUsers($condition: UserCondition) {
allUsers(condition: $condition) {
nodes {
...UserData
}
}
}
query userById($id: Int!) {
userById(id: $id) {
...UserData
}
}
mutation createUser($input: CreateUserInput!) {
createUser(input: $input) {
clientMutationId
user {
...UserData
}
}
}
mutation updateUserById($input: UpdateUserByIdInput!) {
updateUserById(input: $input) {
clientMutationId
user {
...UserData
}
}
}
The fragment UserData on User expression is what generates the UserDataFragment that we use as the result of the Promise. The allUsers expression generates the AllUsersDocument, AllUsersQuery, and AllUsersQueryVariables types. The UserCondition type comes from the core typescript plugin for Code Generator, as do the other input types.
To run code generation, I add a new package.json script:
"generate.schema": "graphql-codegen",
Now, I can run yarn generate.schema and watch Code Generator connect to my endpoint, scan my documents, and generate a wealth of code I didn't have to write by hand!
Instant Validation
I've found this to be a really effective strategy for easy type-safety and useful autocomplete that is consistently up to date. When the database schema changes, the GraphQL schema changes, and the resulting generated types change. All right away, automatically.
This ends up providing an unexpected service - typo prevention! For example, here's a real world error I encountered recently on a Code Generator project:
✖ src/Schema.ts
  AggregateError:
    GraphQLDocumentError: Cannot query field "createdByUserId" on type "Initiative". Did you mean "createdUserId", "primaryUserId", or "userByCreatedUserId"?
      at ./src/initiatives/initiatives.graphql:4:3
Within moments, I was able to easily identify that I meant createdUserId, not createdByUserId. This is the kind of typo that can take a deceptively long time to solve without similar validation.
More CRUD
To finish out my CRUD operations, I define the following functions:
export const getById = async (
request: AppRequest,
id: number
): Promise<UserDataFragment | undefined> => {
const user = await query<
UserByIdQuery,
UserByIdQueryVariables
>(request, {
query: UserByIdDocument,
variables: {id},
})
return (user && user.userById) || undefined
}
export const create = async (
request: AppRequest,
userInput: UserInput
): Promise<UserDataFragment> => {
const result = await mutation<
CreateUserMutation,
CreateUserMutationVariables
>(request, {
mutation: CreateUserDocument,
variables: {
input: {
user: userInput,
},
},
})
const createUser = result && result.createUser
const user = createUser && createUser.user
if (!user) {
throw new Error('Unable to create User')
}
return user
}
export const update = async (
request: AppRequest,
id: number,
userPatch: UserPatch
): Promise<UserDataFragment> => {
const result = await mutation<
UpdateUserByIdMutation,
UpdateUserByIdMutationVariables
>(request, {
mutation: UpdateUserByIdDocument,
variables: {
input: {id, userPatch},
},
})
const updateUser = result && result.updateUserById
const user = updateUser && updateUser.user
if (!user) {
throw new Error('Unable to update User')
}
return user
}
export default {getAll, getById, create, update}
As you can see, they follow the same general pattern as getAll. The create and update functions throw errors if the created or updated user is missing in the response, which indicates a problem with the operation.
I use a default export at the bottom so that I can easily import it like this: import User from './UserController' - without needing to use the asterisk syntax.
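As a quick usage sketch - given an AppRequest in scope, as we'll see in the resolver below - calling into the controller looks like this:

import User from './UserController'

// Fetch all active users - UserCondition fields mirror the table's columns
const activeUsers = await User.getAll(request, {isActive: true})

// Look up a single user, which may be undefined if the id doesn't exist
const user = await User.getById(request, 1)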
Extending the Schema
Now that we've covered the CRUD, we can implement our custom Mutation using the makeExtendSchemaPlugin function provided by the graphile-utils package. Let's look at the structure before we examine the contents:
import {gql, makeExtendSchemaPlugin} from 'graphile-utils'

export const inviteUser = makeExtendSchemaPlugin({
  typeDefs: /* ... */,
  resolvers: {
    Mutation: {
      inviteUser: /* ... */,
    },
  },
})
In this form, makeExtendSchemaPlugin accepts an object with properties for typeDefs and resolvers. The typeDefs property allows you to extend existing types (like Query or Mutation) and define any new types that you need to implement your custom operation. The resolvers property allows you to implement resolvers to handle any new properties or operations that you defined in the typeDefs.
For our inviteUser operation, I set up my typeDefs to provide a payload type just as PostGraphile does for its automatically generated schema. This allows me to extend that payload with additional types later if desired without changing the signature of my Mutation, and it complies with the Relay Input Object Mutation Specification.
typeDefs: gql`
input InviteInput {
username: String!
message: String
selfSignup: Boolean
}
input InviteUserInput {
clientMutationId: String
invite: InviteInput!
}
type InviteUserPayload {
clientMutationId: String
user: User @pgField
}
extend type Mutation {
inviteUser(input: InviteUserInput!): InviteUserPayload
}
`,
I add a username input, an optional message input that we can pass along to the email template, and a selfSignup input that helps us determine which email template to use. I provide clientMutationId to act as a marker that the client can use to track specific requests and responses if desired. I define a top-level input type that accepts the clientMutationId and the invite itself as named properties, and then I extend the Mutation type to define an operation that makes use of my new types - inviteUser.
The InviteUserPayload includes a user property tagged with a @pgField annotation. This is important - it tells PostGraphile that this is data it will need to efficiently retrieve from the database using its query planning features.
To handle my invite, I implement a resolver method:
resolvers: {
Mutation: {
inviteUser: async (_query, args, context: Context, resolveInfo) => {
const request = createRequest(context, resolveInfo)
const input: InviteUserInput = args.input
const user = await User.inviteUser(request, input.invite)
const {
build: {sql},
} = request
const [row] = await resolveInfo.graphile.selectGraphQLResultFromTable(
sql.fragment`"user"`,
(tableAlias, sqlBuilder) => {
sqlBuilder.where(
sql.fragment`${tableAlias}.id = ${sql.value(user.id)}`
)
sqlBuilder.limit(1)
}
)
return {
clientMutationId: input.clientMutationId,
data: row,
}
},
},
},
There's a lot going on here, but most of it is to support the efficient query and join planning that PostGraphile does behind the scenes to optimize performance and avoid the N+1 problem I mentioned in Part 1.
const request = createRequest(context, resolveInfo) takes the context and the resolveInfo and bundles them up into the convenient GraphileRequest type that my @graft/server package works with.
const input: InviteUserInput = args.input takes the loosely typed args parameter and applies the automatically generated InviteUserInput type that Code Generator provides. To make this type available, you will need to run PostGraphile with the typeDefs in your plugin defined, but with an empty resolvers: {} value. This allows PostGraphile to add your custom types to the schema for Code Generator to consume, without needing you to finish your resolver implementation yet.
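In other words, a stub version of the plugin is enough for generation purposes. As a sketch, with the typeDefs pulled out into a variable:

// The typeDefs register the new types so Code Generator can see them,
// while the resolver implementation comes later
export const inviteUserTypes = makeExtendSchemaPlugin({
  typeDefs,
  resolvers: {},
})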
const user = await User.inviteUser(request, input.invite) makes use of a UserController method that I haven't shared here yet. I use import User from './UserController' to pull it in. The result is a UserDataFragment object, typically one returned by the create method in the controller.
Fragmented SQL
To enable the efficient query planning, PostGraphile works with SQL "query fragments" that can later be combined together into as few requests as possible. Now that we've created a new User, we want to allow our users to select any related data they need from the User row that is returned in our GraphQL response.
To do this, we use resolveInfo.graphile.selectGraphQLResultFromTable() to describe the portion of the SQL query that PostGraphile needs to find the data associated with the type we added the @pgField annotation to in the typeDefs. PostGraphile takes the data property that is returned as part of the resolver response, and uses the fields that the user selected to plan an efficient set of joins on any related data. It then uses the results to populate the user response field.
sql.fragment"user"
tells PostGraphile which table it should select from, and the (tableAlias, sqlBuilder) => {}
callback allows me to add a WHERE and a LIMIT clause.
The final return gives PostGraphile the clientMutationId back so that the client can correlate the response if desired, and it passes the row result (destructured from an array, taking the first element) as the value of the data property.
For more detail on this, check out the excellent PostGraphile documentation on makeExtendSchemaPlugin.
Implementing the Invite
The actual controller code to implement the User.inviteUser() method will vary quite a bit depending on your particular application and the authentication solution you use. Let's take a look at a simple hypothetical implementation, however.
The first thing you might want to do is determine who is doing the inviting, rejecting requests that aren't "self-signup" if there's no currently logged-in user. Let's extend our UserController.ts module:
export const inviteUser = async (request: AppRequest, invite: InviteInput) => {
const {context: {user: requestedBy}} = request
const {selfSignup, username, message} = invite
if (!selfSignup && !requestedBy) {
throw new Error('You must be logged-in to invite a new user')
}
// ...
}
export default {getAll, getById, create, update, inviteUser}
I take the request and invite arguments that I defined in the resolver and passed into the controller function, and I pull out some details from them to check the selfSignup parameter and look for a user in the context. If things don't line up, I throw an Error - which is caught by PostGraphile and handled according to the config callback we defined in Part 1.
I might want to check for an existing User row next:
const users = await getAll(request, {username})
const existingUser: UserDataFragment | undefined = users[0]
I use the username parameter that I pulled out of invite using destructuring, and I pass it as my UserCondition when calling getAll. I grab the zero-index element, and override the type because I know that it may be undefined if no results were found.
Let's say that I want to send my email anyway, regardless of whether a user was found or created:
const user = existingUser || await create(request, {
username,
isActive: true,
inviteStatus: InviteStatus.Invited,
})
// sendInviteEmail is an external helper (not shown here) that calls your
// transactional email provider
await sendInviteEmail({
selfSignup,
requestedBy,
message,
email: username,
existing: !!existingUser,
})
return user
This code would only execute the create operation if there was no existingUser retrieved. The InviteStatus type is one that Code Generator automatically generates based on my Postgres enum. In Knex, I defined that enum in a migration like this:
await knex.raw(`
create type invite_status as enum (
'INVITED',
'ACCEPTED'
);
`)
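Given that enum and the pascalCase naming convention in codegen.yml, the generated Schema.ts ends up with something along these lines:

// Abridged sketch of the generated enum in src/Schema.ts
export enum InviteStatus {
  Invited = 'INVITED',
  Accepted = 'ACCEPTED',
}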
Later, in a separate acceptInvite mutation, I can update the User to set the inviteStatus to InviteStatus.Accepted.
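That mutation's controller method could be as simple as this sketch, reusing the update function from earlier (acceptInvite itself is hypothetical here):

// Hypothetical acceptInvite controller method, reusing update from above
export const acceptInvite = async (
  request: AppRequest,
  id: number
): Promise<UserDataFragment> =>
  update(request, id, {inviteStatus: InviteStatus.Accepted})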
After using an external function to actually send the invite email through a transactional email provider, I return my existing or created user back to the resolver, which uses the id to define the SQL fragment we talked about earlier.
A Natural Fit
The query language of GraphQL is fantastic, and with this structure I've found a delightful way to query my database from within my API business logic as if I were an external client. Using the internal, automatically generated schema in the same way your external clients use it often leads to code that is very similar between the API and JavaScript-based clients.
Front End engineers can jump into the backend code and follow along, understanding the operations and the flow much more easily than in an ORM or SQL based system that uses different abstractions. The easier it is for devs to flex their Full Stack muscles and make contributions to the backend, the more your dedicated Back End engineers can focus on the tools, the strategies, and the big picture.
With the easy validation that Code Generator provides, and freedom from the burden of writing potentially hundreds or thousands of TypeScript types by hand to match your schema, this has proven to be a smoothly scalable solution that supports large teams and large API surface areas while providing outstanding performance.
Stay tuned for Part 3, where I'll walk through a simple client-side implementation making use of the same Code Generator types, wiring them into Apollo Client components for easy consumption from a React application in the browser or on a mobile device.
In the meantime, you can find me on Twitter @bkonkle, or on the Denver Devs Slack community!