Async GraphQL with Rust: Part Two
- Part 1: Introduction
- Part 2: Data and Graphs (this article)
- Part 3: AuthN & AuthZ
- Part 4: Unit & Integration Testing
(This article has been cross-posted to the Formidable Blog.)
Part 2: Data and Graphs
Welcome back to my series covering Async GraphQL in Rust! This series is aimed at intermediate Rust developers who have grasped the basics and are ready to build production-ready server applications. Today's entry will cover database access with SeaORM, related data, entity services to make use of the data access code, GraphQL resolvers to connect those services to outside requests, the request context provided by async-graphql, and pulling it all together with Warp to handle HTTP requests.
I'll be approaching the application from the inside out, introducing the different pieces that work together to create a full-featured GraphQL API that is high performance and easily testable. These pieces are intended to work in a multithreaded context, because Warp and Hyper spawn threads to handle multiple requests in parallel. Because of this, I'll touch on Rust's shared ownership and data synchronization tools. The full code for the example project can be found on Github here.
Let's get started!
Database Models
As I mentioned in my previous article, I originally started out with SQLx - a pure Rust SQL toolkit made from the start to support async. It includes a unique compile-time checker based on Rust macros, which are a powerful feature that enables compile-time code generation that is highly flexible and tightly integrated with the build system. The code generated by macros is type-checked along with the rest of your code, and it can even do things like execute build-time routines and make network calls. SQLx uses it to connect to an active SQL database and run a variety of checks based on the SQL queries you write. It is able to check syntax, infer types, and a whole lot more - all during a standard Rust build with warnings and errors that can be easily displayed in your IDE just like other warnings or lint checks. I'm quite comfortable with raw SQL, so this seemed like a great option without the overhead of learning a new ORM and accepting the limitations it would impose.
This build-time static checking is indeed powerful, but it comes with some drawbacks. The first one is that you need an active database connection in order to build. This can be disabled with an environment variable flag in order to play nicely with CI tests, but it definitely adds an additional hoop to jump through when developing locally. Beyond just that minor tradeoff, which wouldn't be a dealbreaker for me on its own, it also makes dynamic queries exceedingly hard. This became the most difficult thing for me to work around as I set out to implement a very flexible GraphQL schema which allows client applications to decide how they want to filter and order the data they are requesting.
For example, a query may want to filter the selected data on different properties depending on the request, finding rows with a particular value in a `user_id` field or with a particular `status`. Because of the way SQLx queries are statically checked at build time, it wasn't easy to include "where" clauses in some cases and exclude them in others.
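To illustrate the constraint, here's a hypothetical SQLx fragment (the table and column names are made up for this example, and `pool` and `user_id` are assumed to be in scope). The `query!` macro requires the SQL to be a string literal so it can be checked against a live database at build time, which means the WHERE clause can't be added or removed at runtime:

```rust
// Hypothetical example - `query!` needs a string literal, verified against
// the live database at build time, so this WHERE clause is always present.
let shows = sqlx::query!(
    "SELECT id, title FROM shows WHERE user_id = $1",
    user_id
)
.fetch_all(&pool)
.await?;
```

To filter conditionally, you'd have to fall back to building SQL strings by hand with the unchecked query functions, giving up the static guarantees that made SQLx attractive in the first place.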
SeaORM
To better support the kind of dynamic queries I want in my GraphQL API I took the plunge into the land of ORMs (object-relational mappers), selecting an up-and-coming library based on SQLx with strong unit testing functionality to make up for the lack of those static checks. The most popular ORM in the Rust ecosystem - Diesel - isn't the one that I chose, however. It doesn't provide the same kind of first-class async support that SQLx does, and it uses a domain-specific language for the model schema. I pulled in SeaORM instead, which is a pure Rust library that derives what it needs based on plain Rust structs.
To kick things off, I'll start with one of the simpler database entities that I model with SeaORM in my GraphQL API - "Shows". In the hypothetical Caster app, a Show has many Episodes that hosts and co-hosts participate in to discuss various topics. This is a simple Model with no foreign keys - Episodes are tied to a Show via the "show_id" foreign key field on the Episode model.
Database Migrations
To start with, there are a series of database schema migrations defined in the "migrations" folder at the top level of the project. These are SQL files generated by the SQLx CLI:
sqlx migrate add users_and_profiles
As I mentioned previously I am comfortable with SQL, so instead of using the built-in migrations that SeaORM provides I like to write migrations by hand. I typically use the excellent desktop tool DataGrip, and I like the aligned format that it uses when generating DDL statements:
create table users
(
    id         text         default gen_random_ulid() not null
        primary key,
    created_at timestamp(3) default CURRENT_TIMESTAMP not null,
    updated_at timestamp(3) default CURRENT_TIMESTAMP not null,
    username   text                                   not null,
    is_active  boolean      default true              not null
);

create unique index users__username__unique
    on users (username);
create trigger sync_users_updated_at before update on users for each row execute procedure sync_updated_at();
My ids are `text` fields, and I use a custom `gen_random_ulid()` function to generate Universally Unique Lexicographically Sortable Identifiers (ULIDs). A `sync_updated_at` function keeps my `updated_at` fields up to date via a Postgres trigger.
See the migrations directory for more examples.
To run these migrations, I run `cargo make db-migrate`, a build target defined in the Makefile.toml. This uses cargo-make, a great build tool that enhances Cargo with support for more workflows.
The Show Model
To define a SeaORM Model, I create a Rust file called show_model.rs, which is held within the `libs/shows` folder that contains the `caster_shows` crate. A "crate" is a Rust package with its own Cargo.toml file that defines dependencies. In the Caster application, there is a `caster_api` crate held in `apps/api` that uses Cargo's workspaces feature to pull in resolvers, services, models, and more from crates within the `libs/` folder.
At a minimum, SeaORM expects a struct called `Model` for each entity, as well as a `Relation` enum defining relation accessors and an `ActiveModelBehavior` trait implementation for the `ActiveModel` struct that is derived by `DeriveEntityModel`. The whole file can be viewed on Github here.
Here are the important parts to focus on now:
/// The Show GraphQL and Database Model
#[derive(DeriveEntityModel, Deserialize, Serialize)]
#[sea_orm(table_name = "shows")]
pub struct Model {
    /// The Show id
    #[sea_orm(primary_key, column_type = "Text")]
    pub id: String,
    /// The date the Show was created
    pub created_at: DateTime,
    /// The date the Show was last updated
    pub updated_at: DateTime,
    /// The Show title
    #[sea_orm(column_type = "Text")]
    pub title: String,
    /// An optional Show summary
    #[sea_orm(column_type = "Text", nullable)]
    pub summary: Option<String>,
    /// ...
}

/// The Show GraphQL type is the same as the database Model
pub type Show = Model;

/// Show entity relationships
#[derive(EnumIter)]
pub enum Relation {}

impl ActiveModelBehavior for ActiveModel {}
The above snippet has been abbreviated quite a bit from what you'll find when you view the entire file. There are a number of other `derive()` directives included, because in the final file on Github the `Model` struct performs triple duty as a GraphQL type and a Polar class for Oso authorization, as well as representing the database table. I'll skip over those other parts for now and focus just on the SeaORM table functionality.
The `DeriveEntityModel` input for `derive` tells Rust to use this struct to derive a variety of things that SeaORM needs in order to use it as an entity model. This includes things like enum variants for each column, an `ActiveModel` for mutations, and more.
You'll also see several properties decorated with attribute macros. For example, the `id` property is decorated with `#[sea_orm(primary_key, column_type = "Text")]`. This tells SeaORM that the `id` is the primary key, and that the underlying database type for it should be `text`. This is needed because `String` can map to a variety of different types in the database, so I need to specify which one I want. It isn't necessary for the `DateTime` fields because they are less ambiguous. On other properties you see `nullable` added as well, which tells SeaORM that the given property is optional.
Below the `Model` definition you'll also see `pub type Show = Model`. Since the Model here also serves as the GraphQL type, this is just a convenience alias. As you'll see in the "Profile" model below, this won't always be the case. Sometimes your GraphQL model will differ from your database Model, and in that case `Show` would be defined as a different struct entirely.
You'll notice that the `Relation` enum is currently empty. The "Episode" model is related to a Show via a `show_id` property, but in its current iteration there is no reverse relationship accessor to retrieve all Episodes for a particular Show. To add this feature, I can add to the `Relation` enum and use the `DeriveRelation` input like this:
#[derive(EnumIter, DeriveRelation)]
pub enum Relation {
    #[sea_orm(has_many = "episode_model::Entity")]
    Episode,
}

impl Related<episode_model::Entity> for Entity {
    fn to() -> RelationDef {
        Relation::Episode.def()
    }
}
The `EnumIter` input for `derive` that was already there implements `Iterable` to allow iteration over all enum variants, which can be useful in different circumstances. The `Related` implementation tells SeaORM that my Show (the `Entity` above) is related to the Episode Entity through the `Relation::Episode` relationship.
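With the `Related` implementation in place, SeaORM's `find_related` method becomes available on Show instances. A minimal sketch of how it might be used, assuming a `show` Model fetched earlier and a `db: &DatabaseConnection` in scope:

```rust
use sea_orm::ModelTrait;

// Enabled by the Related impl above: fetch all Episodes for this Show
let episodes: Vec<episode_model::Model> = show
    .find_related(episode_model::Entity)
    .all(db)
    .await?;
```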
On the other side of this relationship for the Episode model, you would define the relationship like this:
#[derive(EnumIter, DeriveRelation)]
pub enum Relation {
    #[sea_orm(
        belongs_to = "show_model::Entity",
        from = "Column::ShowId",
        to = "show_model::Column::Id"
    )]
    Show,
}
The Profile Model
The next model that I'll take a look at is the Profile model, which is more complex because the GraphQL representation differs somewhat from the Database representation. This is because the "email" and "user_id" fields should be censored when shown to users who are not authorized. The email field in particular differs between GraphQL where it is optional and the Database where it is required.
In the `profile_model.rs` file, the GraphQL model is defined first:
/// The `Profile` GraphQL model
#[derive(Debug, Clone, Eq, PartialEq, Deserialize, PolarClass, Serialize, SimpleObject)]
pub struct Profile {
    /// The `Profile` id
    #[polar(attribute)]
    pub id: String,
    /// The date the `Profile` was created
    pub created_at: DateTime,
    /// The date the `Profile` was last updated
    pub updated_at: DateTime,
    /// The `Profile`'s email address
    // (differs from DB)
    // Optional because this field may be censored for unauthorized users
    #[polar(attribute)]
    pub email: Option<String>,
    // ...
}
The "email" field is defined here as an Option<String>
because it can be omitted on censored responses for unauthorized users.
The `censor()` method implementation comes next, followed by the database Model definition:
/// The `Profile` Database model
#[derive(Clone, Debug, PartialEq, DeriveEntityModel, Deserialize, Serialize)]
#[sea_orm(table_name = "profiles")]
pub struct Model {
    #[sea_orm(primary_key, column_type = "Text")]
    pub id: String,
    pub created_at: DateTime,
    pub updated_at: DateTime,
    #[sea_orm(column_type = "Text")]
    pub email: String,
    // ...
}
Here, the "email" field is defined as a non-nullable String
, because that column is required in the database table.
When the struct is all-in-one - acting as as the database Model, the GraphQL type, and a Polar class - the derive inputs are all together. When splitting up the GraphQL type and the database Model the inputs are split up, as you can see above. In this case, the GraphQL type acts as the Polar class as well, while the database Model stands alone.
Profiles are set up with a nullable `user_id` field. This allows a User to potentially have multiple Profiles if needed, which could be helpful in situations where they may be a member of more than one Show and may want to present a customized profile with different context for each Show they co-host on. This is a feature that isn't implemented yet, but it is possible given the data model.
The relationship is implemented with the `DeriveRelation` derive input:
#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {
    #[sea_orm(
        belongs_to = "user_model::Entity",
        from = "Column::UserId",
        to = "user_model::Column::Id"
    )]
    User,
}

impl Related<user_model::Entity> for Entity {
    fn to() -> RelationDef {
        Relation::User.def()
    }
}
You'll also see some other things going on further down in the file. First, there is an `impl From` definition:
impl From<Model> for Profile {
    fn from(model: Model) -> Self {
        Self {
            id: model.id,
            created_at: model.created_at,
            updated_at: model.updated_at,
            email: Some(model.email),
            // ...
        }
    }
}
This makes it easy to convert the database Model that is returned by SeaORM into the GraphQL type that I return with async-graphql. Implementing the `From` trait enables both `Profile::from()` calls and `.into()` calls on `Model` instances. Since the "email" is always present on records returned from the database, I wrap it in `Some()` to turn it into an `Option` with a present value.
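Both call styles end up in the same place; a quick sketch of what consuming code can do with this implementation, given two Model values `model` and `other_model`:

```rust
// Explicit conversion using the From implementation directly...
let profile = Profile::from(model);

// ...or the equivalent .into() call, with an annotation to guide inference
let profile: Profile = other_model.into();
```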
Next, you see a new wrapper type, `ProfileList`:
/// A wrapper around a `Vec<Profile>` to enable trait implementations
pub struct ProfileList(Vec<Profile>);

impl ProfileList {
    /// Proxy to the `Vec` `len` method
    pub fn len(&self) -> usize {
        self.0.len()
    }

    /// Proxy to the `Vec` `is_empty` method
    pub fn is_empty(&self) -> bool {
        self.0.is_empty()
    }
}

impl From<Vec<Model>> for ProfileList {
    fn from(data: Vec<Model>) -> ProfileList {
        ProfileList(data.into_iter().map(|p| p.into()).collect())
    }
}
This exists specifically to work around Rust's orphan rule. If I didn't have the `ProfileList` wrapper type, I would be implementing the `From` trait with `Vec<Model>` as the "from" type and `Vec<Profile>` as the "into" type. Since the `Vec` type is defined in a completely different crate, this would be an implementation of a foreign trait for a foreign type, which the orphan rule forbids. If you try to write this:
impl From<Vec<Model>> for Vec<Profile> {
    // ...
}
You'll hit a compile error:
error[E0117]: only traits defined in the current crate can be implemented for arbitrary types
Since `Vec` is defined in a different crate, `Vec<Profile>` is considered an "arbitrary" foreign type. To implement a trait for it, you need to wrap it in a "newtype" defined within the current crate. This is one of the downsides of separating your GraphQL type from your database Model, since that separation is the reason I need to call `.into()` in the first place.
The next implementation you see is for a special case that you'll encounter when I talk about Services:
impl From<Vec<(Model, Option<User>)>> for ProfileList {
    fn from(data: Vec<(Model, Option<User>)>) -> ProfileList {
        ProfileList(
            data.into_iter()
                .map(|(profile, user)| Profile {
                    user,
                    ..profile.into()
                })
                .collect(),
        )
    }
}
This takes the tuples that are returned by SeaORM when selecting related data via automatic JOINs. You'll see an example of this in the Services section of this article, but for now just know that each result is a tuple with two elements - the Profile database Model, and an optional related User. It then uses `.into_iter()` to iterate over those tuples, turning each one into a Profile GraphQL type instance with the `Option<User>` property supplied by the results of the JOIN. The `..` notation "spreads" properties that aren't explicitly defined above it into the new struct. The `.collect()` call finalizes and executes the iteration.
The final implementation for `ProfileList` converts in the other direction - from `ProfileList` to `Vec<Profile>`. This uses the fact that newtype-wrapped values are stored in a `0` field:
impl From<ProfileList> for Vec<Profile> {
    fn from(profiles: ProfileList) -> Vec<Profile> {
        profiles.0
    }
}
Below these definitions you see identical ones for `ProfileOption`, which covers the foreign `Option` type similarly to how the `Vec` type is handled.
Entity Services
The Services for each entity are responsible for business logic. They use the Models I defined above to access the data needed for each operation. A top-level Trait is defined, which allows me to decouple each Service from the data-access Models, make it easy to provide alternative implementations based on different data sources, and make it easy to replace with a mock service for unit testing. The signature of the trait establishes a "contract" with consumers of the service, defining the external API of the Service regardless of what data source the actual implementation uses.
#[cfg_attr(test, automock)]
#[async_trait]
pub trait ShowsService: Sync + Send {
    /// Get an individual `Show` by id, returning the Model instance for updating
    async fn get_model(&self, id: &str) -> Result<Option<show_model::Model>>;

    // ...
}
You see the standard "get", "get_many", "create", "update", and "delete" methods that you would expect for an entity service. These are all implemented further below, as part of the DefaultShowsService
. These methods make use of the &self
argument, which is a special optional argument at the beginning of a method that contains a borrowed reference to the struct instance itself. This automatically controls whether the method behaves like a "static method" or an "instance method" in traditional OOP languages.
If `&self` or `self` is present, the method can be called using the dot syntax on an instance of the struct the method is implemented for. If neither is present, the method is treated as a static method which you call using the struct name and the double-colon syntax. If you omit the `&` reference symbol, the instance of the struct isn't borrowed - ownership is transferred to the method itself. This indicates that calling the method "consumes" the struct instance, and you can't continue using the instance elsewhere.
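A small, self-contained illustration of the three receiver styles (a toy example, not from the Caster codebase):

```rust
struct Counter {
    count: usize,
}

impl Counter {
    // No self argument: an associated ("static") function, called as Counter::new()
    fn new() -> Self {
        Counter { count: 1 }
    }

    // &self: borrows the instance, called with dot syntax on an instance
    fn value(&self) -> usize {
        self.count
    }

    // self: takes ownership, consuming the instance
    fn into_inner(self) -> usize {
        self.count
    }
}

fn main() {
    let counter = Counter::new();
    println!("{}", counter.value()); // borrow - counter is still usable
    println!("{}", counter.into_inner()); // consume - counter is gone now
}
```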
You also see some attribute macros: "cfg_attr" and "async_trait". The "cfg_attr" attribute enables conditional compilation in Rust, indicating that the "automock" macro should be applied when the project is compiled for "test". This makes use of the mockall crate to generate automatic mocks of this service for use in unit testing, while excluding those mocks when building for Production. The "async_trait" macro allows you to use `async` methods in your traits, which is not currently supported by default in the latest edition of Rust (2021).
The `DefaultShowsService` struct is the default implementation of the `ShowsService` trait. It's what the main GraphQL application uses in Production, as opposed to any test mocks or alternative implementations used elsewhere. There is one property defined - "db":
pub struct DefaultShowsService {
    /// The SeaOrm database connection
    db: Arc<DatabaseConnection>,
}
I'm using a special wrapper here: `Arc`. My application is multi-threaded because of how Hyper works, but I want to share a single database connection pool between all of my threads. The `Arc` type provided by the Rust standard library allows for shared ownership of a value with a thread-safe atomic reference counter. Calling `.clone()` on an Arc produces a new pointer to the same allocation on the memory heap while increasing the reference count. When the last pointer to a given allocation is dropped, the value stored in that allocation is dropped as well.
Since my SeaORM `DatabaseConnection` is immutable, I can wrap it in an Arc without using any synchronization structures like "Mutex" or "RwLock". If I wanted to wrap something mutable, I'd need to bring something else in to manage locking.
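A toy example (again, not from the Caster codebase) of sharing immutable data across threads with Arc alone; a mutable value would need an `Arc<Mutex<T>>` or `Arc<RwLock<T>>` instead:

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    let shared = Arc::new(String::from("immutable shared data"));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            // Each clone increments the atomic reference count;
            // all of the clones point at the same heap allocation
            let shared = Arc::clone(&shared);
            thread::spawn(move || println!("{}", shared))
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }
    // The allocation is freed when the last Arc is dropped
}
```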
Finally, you come to the implementation of `ShowsService` for the `DefaultShowsService` struct, which defines the actual methods for the default SeaORM data source:
#[async_trait]
impl ShowsService for DefaultShowsService {
    async fn get(&self, id: &str) -> Result<Option<show_model::Model>> {
        let query = show_model::Entity::find_by_id(id.to_owned());

        let show = query.one(&*self.db).await?;

        Ok(show)
    }

    // ...
}
As with the definition of the trait above, this implementation uses the async_trait attribute macro to enable `async` methods. The implementation of the first method - "get" - is quite simple. It uses the SeaORM Show `Entity` to build a SQL query that finds a single record by id. There's a tricky part going on here that I should explain:
query.one(&*self.db)
The asterisk character (`*`) denotes a "dereference". When you dereference an Arc - which is what the `DatabaseConnection` is wrapped in - you get the underlying instance of what the Arc is wrapping. This is tracked by Rust's ownership system to make sure it isn't used in an unsafe way, and it can lead to some tricky compilation errors if mishandled. In this case, I immediately allow the `.one()` method to borrow the value by reference, which is a safe thing to do. The `.one()` method never has to know that you are keeping the `DatabaseConnection` inside an Arc.
The `.await` keyword waits for the "Future" returned by the async method call to complete before proceeding. The question mark (`?`) at the end resolves the "Result" that the Future returns: if it's an `Err`, the error is converted and returned from the enclosing function immediately; if it's an `Ok`, the inner value is unwrapped. This makes it very clean to handle recoverable error cases in Rust, as opposed to unrecoverable `panic` states that typically crash the program immediately. This syntactic sugar makes it very ergonomic in most cases to work with Rust Result types and Errors.
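A rough desugaring of what `?` does in that line (the real expansion goes through the `Try` trait, but this captures the behavior):

```rust
// Approximately what `query.one(&*self.db).await?` expands to:
let show = match query.one(&*self.db).await {
    Ok(value) => value,
    // The error is converted with From::from and returned early
    Err(err) => return Err(err.into()),
};
```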
Finally, I return `Ok(show)` at the end of the method to indicate that the operation completed successfully (rather than hitting an error case after awaiting `.one()`). Since it's the last expression in the method, I can omit the `return` keyword and the trailing semicolon.
The rich SeaORM tools allow you to model many different kinds of SQL queries in a type-safe way that is easy to test with their built-in `DatabaseConnection` mock. Dynamic queries are easy using the enums derived by SeaORM's `DeriveEntityModel` macro. Look at the `get_many` method for an example:
let mut query = show_model::Entity::find();

if let Some(condition) = condition {
    if let Some(title) = condition.title {
        query = query.filter(show_model::Column::Title.eq(title));
    }
}
The block above first checks to see if any conditions were provided when the method was called. Then, it checks specifically for the "title" condition. If found, it adds a WHERE clause to the SQL query using the `.filter()` method and the automatically derived `Column::Title`. For more information, peruse the SeaORM documentation.
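When more than one optional filter is in play, SeaORM's `Condition` builder keeps the same pattern tidy. A hypothetical extension of the snippet above - the `summary` filter is made up for this example, since `ShowCondition` only defines `title`:

```rust
use sea_orm::{ColumnTrait, Condition, EntityTrait, QueryFilter};

// Start with an empty AND group and add clauses only when they're provided
let mut where_clause = Condition::all();

if let Some(condition) = condition {
    if let Some(title) = condition.title {
        where_clause = where_clause.add(show_model::Column::Title.eq(title));
    }

    // A hypothetical second filter, to show how the clauses accumulate
    if let Some(summary) = condition.summary {
        where_clause = where_clause.add(show_model::Column::Summary.eq(summary));
    }
}

let query = show_model::Entity::find().filter(where_clause);
```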
For ordering, I use a special `ShowsOrderBy` enum that defines ascending or descending ordering for each column:
#[derive(Enum, Copy, Clone, Eq, PartialEq)]
pub enum ShowsOrderBy {
    /// Order ascending by "id"
    IdAsc,
    /// Order descending by "id"
    IdDesc,
    /// Order ascending by "title"
    TitleAsc,
    /// Order descending by "title"
    TitleDesc,
    // ...
}
This structure is easy to translate into SeaORM's ordering calls:
if let Some(order_by) = order_by {
    for order in order_by {
        let ordering: Ordering<show_model::Column> = order.into();

        match ordering {
            Ordering::Asc(column) => {
                query = query.order_by_asc(column);
            }
            Ordering::Desc(column) => {
                query = query.order_by_desc(column);
            }
        }
    }
}
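The `order.into()` call relies on a `From` conversion from `ShowsOrderBy` to the `Ordering` wrapper. That implementation isn't shown in this article, so here is a sketch of what it plausibly looks like, assuming `Ordering` is a simple two-variant enum:

```rust
/// A generic wrapper distinguishing ascending from descending order
pub enum Ordering<T> {
    Asc(T),
    Desc(T),
}

impl From<ShowsOrderBy> for Ordering<show_model::Column> {
    fn from(order_by: ShowsOrderBy) -> Self {
        match order_by {
            ShowsOrderBy::IdAsc => Ordering::Asc(show_model::Column::Id),
            ShowsOrderBy::IdDesc => Ordering::Desc(show_model::Column::Id),
            ShowsOrderBy::TitleAsc => Ordering::Asc(show_model::Column::Title),
            ShowsOrderBy::TitleDesc => Ordering::Desc(show_model::Column::Title),
            ShowsOrderBy::CreatedAtAsc => Ordering::Asc(show_model::Column::CreatedAt),
            ShowsOrderBy::CreatedAtDesc => Ordering::Desc(show_model::Column::CreatedAt),
            ShowsOrderBy::UpdatedAtAsc => Ordering::Asc(show_model::Column::UpdatedAt),
            ShowsOrderBy::UpdatedAtDesc => Ordering::Desc(show_model::Column::UpdatedAt),
        }
    }
}
```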
For handling newtype structures like the `ProfileList` and `ProfileOption` types mentioned above, the `.into()` method is used - sometimes with a required type annotation to clue the compiler in to your intentions:
async fn get(&self, id: &str, with_user: &bool) -> Result<Option<Profile>> {
    let query = profile_model::Entity::find_by_id(id.to_owned());

    let profile = if *with_user {
        query
            .find_also_related(user_model::Entity)
            .one(&*self.db)
            .await?
    } else {
        query.one(&*self.db).await?.map(|u| (u, None))
    };

    let profile: ProfileOption = profile.into();

    Ok(profile.into())
}
This example from the profiles_service.rs file shows a dynamic situation where the caller may want to request a Profile with or without the related User. In the case where the related user is requested, the return value is `Option<(Model, Option<User>)>`. That is the initial type of `let profile` the first time you encounter it above. In the case where the related user isn't needed, the type is converted to match the other branch using the `.map()` function and `None`.
The first `profile.into()` call, with an explicit `ProfileOption` annotation, converts the value from the initial type that SeaORM returns to a `ProfileOption` using the trait implementation I covered earlier in profile_model.rs. This pulls the related User from the second element of the tuple and puts it into the `user` property on the GraphQL type. The second `.into()` call converts the `ProfileOption` to an `Option<Profile>`, discarding the wrapper before returning the underlying value to the caller.
Finally, to support pagination, the `.paginate()` method is called on the `query`:
let (data, total) = if let Some(page_size) = page_size {
    let paginator = query.paginate(&*self.db, page_size);
    let total = paginator.num_items().await?;
    let data: Vec<Show> = paginator.fetch_page(page_num - 1).await?;

    (data, total)
} else {
    let data: Vec<Show> = query.all(&*self.db).await?;
    let total = data.len();

    (data, total)
};

Ok(ManyResponse::new(data, total, page_num, page_size))
In the case where pagination was requested via the `page_size` argument, a paginator is used and the current page is fetched with `.fetch_page()`. A COUNT query obtains the total number of items available, as opposed to just the number of items returned for the requested page. In the case where no pagination was requested, the total is simply the length of the list returned by the query. A `ManyResponse` is returned at the end of the operation, which handles calculating things like the number of pages available. It can be found in the pagination.rs file.
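The page-count calculation inside `ManyResponse::new` isn't shown here; a sketch of the arithmetic it plausibly performs (see pagination.rs for the real implementation):

```rust
/// Hypothetical sketch of the page-count math in `ManyResponse::new`
fn page_count(total: usize, page_size: Option<usize>) -> usize {
    match page_size {
        // Integer ceiling division: 45 items at 10 per page -> 5 pages
        Some(size) => (total + size - 1) / size,
        // Without pagination, everything fits on a single page
        None => 1,
    }
}
```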
GraphQL Resolvers
Next we come to the GraphQL resolver layer. This is similar to the controller layer in a traditional REST application. Its job is to translate an entry-point-specific request into something that the generic Service can handle, and then return an entry-point-specific response back to the caller. Errors are typically caught in this layer and translated into something that makes sense given the current entry point - including things like "extensions" for GraphQL errors or HTTP status codes for REST errors.
This is also the layer where authorization is performed, because the mechanism for authorization typically varies at least somewhat between different types of entry points. For example - a GraphQL operation may use claims from a decoded JWT token, while a serverless function handler may use context from the event that it was invoked with.
To start off, take a look at the show_queries.rs file. It first defines the ordering available to the GraphQL caller:
use ShowsOrderBy::{
    CreatedAtAsc, CreatedAtDesc, IdAsc, IdDesc, TitleAsc, TitleDesc, UpdatedAtAsc, UpdatedAtDesc,
};
Then it defines a `ShowsPage`, which follows the structure of the `ManyResponse` mentioned above:
#[derive(Clone, Eq, PartialEq, SimpleObject)]
pub struct ShowsPage {
    /// The list of `Shows` returned for the current page
    data: Vec<Show>,
    /// The number of `Shows` returned for the current page
    count: usize,
    /// The total number of `Shows` available
    total: usize,
    /// The current page
    page: usize,
    /// The number of pages available
    page_count: usize,
}
The `SimpleObject` derive input uses an async-graphql macro to automatically generate the GraphQL object type for this struct. You'll find an `impl From<ManyResponse<Show>>` trait implementation right below this. Then we come to the first GraphQL input type:
#[derive(Clone, Eq, PartialEq, InputObject)]
pub struct ShowCondition {
    /// The `Show`'s title
    pub title: Option<String>,
}
This provides an optional `title` to filter results with. This is how WHERE conditions are applied to GraphQL requests. The `InputObject` derive input here uses another macro to derive the GraphQL input type for this struct.
These automatically derived GraphQL types show up in the documentation retrieved by popular tools like Playground or Insomnia. You can also use tools like spectaql to automatically generate developer documentation to share with other teams at your organization. This built-in functionality is similar to what you would use OpenAPI and Swagger for on a REST API.
If you open up the show_mutations.rs file, you see the input types and response types used for Mutations. The `MutateShowResult` result type allows you to add additional metadata to your response, such as the `mutationId` used by Relay. That feature isn't implemented here yet, but this result type gives us an easy extension point to add things like this.
Now, take a look at the shows_resolver.rs file itself. It defines two main structs:
/// The Query segment owned by the Shows library
#[derive(Default)]
pub struct ShowsQuery {}

/// The Mutation segment for Shows
#[derive(Default)]
pub struct ShowsMutation {}
These both use the `Default` derive macro, giving them a default empty implementation that is used later on during the initialization of the application. The `ShowsQuery` struct is defined first:
#[Object]
impl ShowsQuery {
    async fn get_show(
        &self,
        ctx: &Context<'_>,
        #[graphql(desc = "The Show id")] id: String,
    ) -> Result<Option<Show>> {
        let shows = ctx.data_unchecked::<Arc<dyn ShowsService>>();

        Ok(shows.get(&id).await?)
    }

    // ...
}
The `Object` macro from async-graphql is used to indicate that this implements a GraphQL Object type. In this case, the struct implements a portion of the top-level `Query` object, while other entity resolvers fill in other operations. These are all merged together upon initializing the application. The `#[graphql]` macro allows me to add GraphQL-specific detail to my fields, including the description that I added above.
The `ctx: &Context<'_>` argument is a special async-graphql feature allowing you to make use of global data attached to the Schema or related to the current request. Inside this Context you can put things like application config, database connections, and instances of the Services I've been building. This is how the `getShow` GraphQL query obtains the instance of `ShowsService` that it needs to get things done. See the async-graphql docs for more information.
The `dyn` keyword indicates that this is a trait object: any implementation of `ShowsService` will do! This could be the `DefaultShowsService`, an automatically mocked Service, or any other alternative implementation you provide. Rust uses dynamic dispatch behind the scenes to call the right methods. This sidesteps any issues with tracking and annotating lifetimes, which can be tricky - especially in situations like this where you have a handler for external requests that could live for an indeterminate amount of time.
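A self-contained toy example of how dynamic dispatch through a trait object works (simplified names, not from the Caster codebase):

```rust
use std::sync::Arc;

trait Greeter: Sync + Send {
    fn greet(&self) -> String;
}

struct English;
impl Greeter for English {
    fn greet(&self) -> String {
        "hello".to_string()
    }
}

struct Spanish;
impl Greeter for Spanish {
    fn greet(&self) -> String {
        "hola".to_string()
    }
}

fn main() {
    // The concrete type behind each `Arc<dyn Greeter>` can vary at runtime;
    // Rust routes each call through a vtable to the right implementation.
    let greeters: Vec<Arc<dyn Greeter>> = vec![Arc::new(English), Arc::new(Spanish)];

    for greeter in greeters {
        println!("{}", greeter.greet());
    }
}
```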
Once I have an instance of `ShowsService` that I can use, I call it with `shows.get(&id).await?`. This allows the `.get()` method to borrow the `id` variable, awaits the result of the async method, and then resolves the Result that is returned.
Request Context
To put all of this together, I use a top-level graphql.rs file within the `caster_api` crate. This is the entry point that lives in the `apps/` folder. The file first pulls in all of the Query and Mutation resolvers and merges them into their top-level GraphQL Object types:
#[derive(MergedObject, Default)]
pub struct Query(UsersQuery, ProfilesQuery, ShowsQuery, EpisodesQuery);

#[derive(MergedObject, Default)]
pub struct Mutation(
    UsersMutation,
    ProfilesMutation,
    ShowsMutation,
    EpisodesMutation,
);
The `MergedObject` derive macro from async-graphql is used to bring them all together. Then, I use the async-graphql `Schema` type to compose them into a usable GraphQL schema:
pub type GraphQLSchema = Schema<Query, Mutation, EmptySubscription>;
I'm using `EmptySubscription` because I haven't defined any Subscription resolvers for this application yet.
Next, I provide a `create_schema()` function that is used by the application bootstrapping and initialization routine to get the schema ready for use within the Hyper/Warp handler.
pub fn create_schema(ctx: Arc<Context>) -> Result<GraphQLSchema> {
    Ok(
        Schema::build(Query::default(), Mutation::default(), EmptySubscription)
            .data(ctx.config)
            .data(ctx.oso.clone())
            .data(ctx.users.clone())
            .data(ctx.profiles.clone())
            .data(ctx.role_grants.clone())
            .data(ctx.shows.clone())
            .data(ctx.episodes.clone())
            .finish(),
    )
}
This function takes an Arc with a `Context` inside. I'll talk more about this in a moment, but this application context fills in instances of all the Services and supporting structs that this application needs. Here I add the Users, Profiles, RoleGrants, Shows, and Episodes Services to the GraphQL schema context, as well as the Oso authorization helper and the application config.
Application Initialization
The application `Context` is created inside the lib.rs file within the `caster_api` crate, which lives in the `apps/` folder:
impl Context {
    /// Create a new set of dependencies based on the given shared resources
    pub async fn init(config: &'static Config) -> Result<Self> {
        let db = Arc::new(sea_orm::Database::connect(&config.database.url).await?);

        // Set up authorization
        let mut oso = Oso::new();

        oso.register_class(User::get_polar_class_builder().name("User").build())?;
        // ...

        oso.load_str(&[PROFILES_AUTHZ, SHOWS_AUTHZ].join("\n"))?;

        Ok(Self {
            config,
            users: Arc::new(DefaultUsersService::new(db.clone())),
            profiles: Arc::new(DefaultProfilesService::new(db.clone())),
            role_grants: Arc::new(DefaultRoleGrantsService::new(db.clone())),
            shows: Arc::new(DefaultShowsService::new(db.clone())),
            episodes: Arc::new(DefaultEpisodesService::new(db.clone())),
            oso,
            db,
        })
    }
}
It first creates a new `DatabaseConnection` and wraps it in an `Arc` so that it can be shared across threads. It then creates a new Oso instance and sets it up for evaluating authorization rules, which I'll talk about in a later blog post. Finally, it initializes the Default implementation of each Service with a copy of the `DatabaseConnection` Arc, making them available for use within the GraphQL schema as seen above.
A Config reference with a `'static` lifetime is used because this application's Config is immutable and can be shared across threads - and even across invocations of the Tokio async runtime - without any issues. The database connection is not static because it cannot be reused when the Tokio async runtime is stopped and started. This is a quirk that is typically not encountered in Production, but can be hit during integration testing where the Tokio runtime is reset between each test.
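The article doesn't show how `get_config()` (used in main.rs below) produces a `&'static Config`; one plausible sketch uses a lazily initialized static via the once_cell crate - the actual implementation in the project may differ, and `Config::load()` is a hypothetical constructor:

```rust
use once_cell::sync::OnceCell;

static CONFIG: OnceCell<Config> = OnceCell::new();

/// Hypothetical sketch: initialize the Config once, then hand out
/// `'static` references to the same instance afterward
pub fn get_config() -> &'static Config {
    CONFIG.get_or_init(|| Config::load().expect("Unable to load the Config"))
}
```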
Below that, you see a `run()` function:
pub async fn run(context: Arc<Context>) -> Result<(SocketAddr, impl Future<Output = ()>)> {
    let port = context.config.port;
    let jwks = get_jwks(context.config).await;

    let schema = create_schema(context.clone())?;
    let router = create_routes(context, schema, jwks);

    Ok(warp::serve(
        router
            .with(warp::log("caster_api"))
            .recover(errors::handle_rejection),
    )
    .bind_ephemeral(([0, 0, 0, 0], port)))
}
This is separated out from the main.rs file - which I'll talk about in a moment - because it is also used by the integration test utils that I'll cover in a later blog post. It sets up the JWKS utilities needed for JWT token validation, and then creates instances of the GraphQL schema and the Warp router that serves it. It uses the `.bind_ephemeral()` method so that the caller can inspect the bound socket address and obtain the port when it is randomly assigned, as it is during integration testing.
The `create_routes()` function comes from the router.rs file within `caster_api`:
pub fn create_routes(
    context: Arc<Context>,
    schema: Schema<Query, Mutation, EmptySubscription>,
    jwks: &'static JWKS,
) -> impl Filter<Extract = (impl Reply,), Error = Rejection> + Clone {
    let graphql_post = graphql(schema)
        // Add the Subject to the request handler
        .and(with_auth(jwks))
        // Add the UsersService to the request handler
        .and(warp::any().map(move || context.users.clone()))
        // Add details to the GraphQL request context
        .and_then(with_context);

    let graphql_playground = warp::path::end().and(warp::get()).map(|| {
        HttpResponse::builder()
            .header("content-type", "text/html")
            .body(playground_source(GraphQLPlaygroundConfig::new("/")))
    });

    graphql_playground.or(graphql_post)
}
It takes the application `Context` and the GraphQL schema and uses the `async_graphql_warp` library to "mount" them within Warp. This makes the GraphQL schema and the GraphQL Playground available for HTTP requests.
The `.and(with_auth(jwks))` call adds the JWT token `Subject` to the Warp request context, where it is used to retrieve the associated User if one exists. This uses a function from the authorization.rs file within the `caster_auth` lib in the `libs/` folder, which pulls the "Authorization" header from Warp and uses the biscuit library to decode and validate it.
The next line, `.and(warp::any().map(move || context.users.clone()))`, adds the `UsersService` instance provided within the `Context` so that it can be used within the Warp filters.
Finally, the `.and_then(with_context)` line calls the `with_context()` function that is also defined within router.rs:
async fn with_context(
    (schema, request): (
        Schema<Query, Mutation, EmptySubscription>,
        async_graphql::Request,
    ),
    sub: Subject,
    users: Arc<dyn UsersService>,
) -> Result<GraphQLResponse, Infallible> {
    // Retrieve the request User, if username is present
    let user = if let Subject(Some(ref username)) = sub {
        users.get_by_username(username, &true).await.unwrap_or(None)
    } else {
        None
    };

    // Add the Subject and optional User to the context
    let request = request.data(sub).data(user);

    let response = schema.execute(request).await;

    Ok::<_, Infallible>(GraphQLResponse::from(response))
}
This function takes the Warp context - including the GraphQL schema - along with the JWT token Subject and the `UsersService`, and uses them to retrieve the related User if available, adding it to the async-graphql request context.
Finally, I have the main.rs file, which is the binary entry point for the `caster_api` crate. When this Rust project is built, the `main()` function here is what is executed.
#[tokio::main]
async fn main() -> Result<()> {
    // Load variables from .env, failing silently
    dotenv().ok();

    // Set RUST_LOG=info (or your desired loglevel) to see logging
    pretty_env_logger::init();

    let config = get_config();
    let context = Arc::new(Context::init(config).await?);

    let (addr, server) = run(context).await?;

    if config.is_dev() {
        info!("Started at: http://localhost:{port}", port = addr.port());
        info!(
            "GraphQL at: http://localhost:{port}/graphql",
            port = addr.port()
        );
    } else {
        info!("Started on port: {port}", port = addr.port());
    };

    server.await;

    Ok(())
}
It is wrapped in the `#[tokio::main]` macro to kick off the Tokio event loop. It uses the `dotenv` library to load values from `.env` files, pulls together the Config with the figment library, initializes the application `Context`, and starts up the application with some dev-aware logging. To kick off the core application loop, it calls `server.await`. Tada! I now have a running GraphQL API!
Next Time
Whew! I've gone through quite a journey in a small amount of time here. I've defined database models and GraphQL types, created resolvers to handle requests, and attached those resolvers to a Warp HTTP server.
Next time, I'll talk through how authorization with Oso works. Then, I'll cover unit and integration testing. After that I'll cover GraphQL Subscriptions and WebSocket events. Finally, I'll finish up by talking about building for deployment with containers with Github actions.
I hope you're enjoying my series and are excited to begin building high-performance GraphQL applications with the sound type safety and memory protection that Rust is famous for! If you want to chat, you can find me on Discord at `bkonkle#0217` - I lurk around several Rust and TypeScript Discord servers, and I'd love to hear from you! You can also find me on Twitter @bkonkle. Thanks for reading!