Master Complex Redux Workflows with Sagas
In recent weeks a fascinating new library has been getting attention in the React community - redux-saga. It's billed as an "alternative side-effect model" for Redux that uses ES2015 generators and competes with the standard redux-thunk middleware. As part of my in-progress quest for a new execution environment for React, I've been exploring ways to use Redux to manage state for a multi-process CLI application. I've settled on Sagas as the best way to get control of the rather complex workflow of spawning processes and coordinating events between them, and I think they're a really compelling example of the innovative design patterns that generator functions are opening up to JavaScript developers today.
What are they?
The concept of "sagas" (or "process managers") is not new - it actually dates back to a 1987 paper by Hector Garcia-Molina and Kenneth Salem. There's a detailed article about sagas on MSDN, and they are widely used in the CQRS and Event Sourcing community.
In the context of redux-saga, however, they allow you to use generators to set up long-running async routines that watch events dispatched to a Redux store. This allows you to establish watchers that react to events in ways that can be quite difficult with just store.subscribe().
I've found sagas to be very effective for managing complex workflows for two big reasons - the ability to organize the workflow into composable watchers that respond to events, and the brilliant decision to abstract each operation into "Effects" so that they are easily testable without resorting to mocking and dependency injection.
What's wrong with subscribe?
The standard store.subscribe() provided by Redux works very well with React because of the ability to make the UI a pure function of the state - the HTML is a predictable reflection of the state at a given moment. When other parts of your application (or applications that use Redux without React) need to respond to events - such as kicking off a process or generating a thumbnail in the background - it becomes difficult, because subscribe() doesn't provide any details about the event that triggered the update.
One way to work around this is by adding extra flags to your state. Need to fire off a background routine to validate new objects when they are added to a key in your Redux store? You could add a needsValidation array and push ids to it each time a new object is added. You could then add a callback using store.subscribe() that checks the needsValidation array after each event is dispatched and kicks off routines as needed. You could establish another array called validationsInProgress that you push the ids to so that you don't trigger duplicate validator routines if the same type of event fires again. It could then dispatch validation error events when the routines complete.
What if this workflow starts to get more complex, though? What if you need to keep track of failed calls to the validator and retry them? Now you have to add a failedValidations key. What if you need to cancel and re-start validation if the same object is updated while validation is in progress? What if you need to flag the objects that failed validation and move them to a review queue? Your state and your subscribe callback begin to inflate with more and more logic branches and control flow flags, hiding the real meaning inside. You end up writing complex decision trees to check those myriad flags and determine what to do next.
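To make the pain concrete, here's a rough sketch of what that flag-based subscribe() approach might look like - the VALIDATION_* action types and the validateObject() routine are hypothetical, for illustration only:

// Hypothetical flag-based workaround using store.subscribe()
// validateObject() and the action types below are placeholders
let validationsInProgress = []

store.subscribe(() => {
  const {needsValidation} = store.getState()

  needsValidation
    .filter(id => validationsInProgress.indexOf(id) === -1)
    .forEach(id => {
      validationsInProgress.push(id)

      validateObject(id)
        .then(() => store.dispatch({type: 'VALIDATION_SUCCESS', id}))
        .catch(error => store.dispatch({type: 'VALIDATION_FAILURE', id, error}))
        .then(() => {
          validationsInProgress = validationsInProgress.filter(item => item !== id)
        })
    })
})

Every new requirement means another flag, another array, and another branch inside this callback.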
The logical next step is to write middleware that watches events as they are dispatched, and that's exactly what redux-saga provides.
Why redux-saga?
The way redux-saga uses generators to manage async control flow is really inventive. Like the goroutines in js-csp, redux-saga uses generators to fork subroutines that act as async mini-processes - sagas. These sagas "wake up" when the desired event type is dispatched. Rather than executing operations directly, sagas yield "effects", which are descriptors of the operation the saga wants to execute. That descriptor is passed to the redux-saga middleware via two-way generator communication, the operation is executed, and the result is passed back to the saga.
The advantage this gives is convenient, maintainable unit testing: your tests take over the role of the middleware. You call the generator directly, make sure the yielded effect - such as a function call or an action dispatch - is described correctly, and then you return a fake result. You're able to test your saga line-by-line without complicated mocks and dependency injection.
Why not redux-saga?
The biggest consideration when deciding whether to use redux-saga is generators. Browser support is getting better, but it's not yet widespread. Babel polyfills generators using Facebook's regenerator, but if you're running in the browser you'll need to include the runtime, which is 20kb+ minified and gzipped - nearly as big as React or jQuery. You'll want to make sure that doesn't break your front-end bandwidth budget.
Generators also introduce a relatively steep learning curve, so you'll want to make sure it's worth it before you commit your team to the effort of learning them. It's not a paradigm that's familiar to most JavaScript or Node developers, so it can take some time to get used to. The redux-thunk middleware is much simpler and potentially more obvious in many cases. You may end up using both - redux-thunk for simple operations and redux-saga for complex ones.
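For comparison, here's roughly what a simple async operation looks like as a thunk - the api.fetchUser() helper and the action types are made up for this example:

import api from './api' // hypothetical API client, just for illustration

// A simple redux-thunk action creator: the function receives dispatch
// and performs the side effect directly
export const loadUser = (id) => (dispatch) => {
  dispatch({type: 'USER_LOAD_REQUESTED', id})

  return api.fetchUser(id)
    .then(user => dispatch({type: 'USER_LOAD_SUCCEEDED', user}))
    .catch(error => dispatch({type: 'USER_LOAD_FAILED', error}))
}

This style is easy to write, but it gets harder to test and compose once retries, cancellation, or coordination between several operations come into play - which is where sagas start to earn their keep.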
You also want to make sure you don't rely on sagas too much. You can run into the same pitfalls as a plain run-of-the-mill event emitter if you let side effects run amok and introduce unpredictable behavior. Carefully consider what needs to be a side effect and what should really be a pure state transformation.
Getting started
So you've decided that sagas are just the thing to cure your complex workflow pains? Great! Let's get started. First, you'll want to add the middleware to your store (using the Redux 3.1.0 api):
import {applyMiddleware, combineReducers, createStore} from 'redux'
import {myReducer, otherReducer} from './my/reducers'
import * as sagas from './my/sagas'
import sagaMiddleware from 'redux-saga'

const store = createStore(
  combineReducers({myReducer, otherReducer}),
  initialState,
  applyMiddleware(sagaMiddleware(...sagas))
)
In your sagas.js file (which will likely split into several files over time) you export generator functions that are executed as soon as the store is ready. In the ship-yard open source project I'm working on, a "foreman" process spawns multiple "workers" that watch for "goal" events. Here's an abbreviated example:
import {apply, call, fork, put, take} from 'redux-saga'
export default function* startForeman() {
  yield fork(startTranspiler)
  yield fork(startLinter)
}
The "fork" effect allows me to make non-blocking calls which I can later "join" to read the result. Here I'm forking concurrent routines for starting the transpiler and linter.
What does the "fork" function actually return? A description of the function call that your saga wants the middleware to carry out without blocking. It looks like this:
{
  FORK: {
    context: null,
    fn: [Function: startTranspiler],
    args: []
  }
}
This is used by the middleware to make the function call, and the middleware pushes an object back to the saga that includes a promise. That object can be passed to the join effect later to wait for the return value of the forked saga.
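ship-yard's startForeman doesn't join its workers, but a saga that did might look something like this sketch (the function name is illustrative, and join needs to be imported alongside fork):

import {fork, join} from 'redux-saga'

export function* startAndJoinTranspiler() {
  // fork() starts startTranspiler without blocking this saga
  const task = yield fork(startTranspiler)

  // ...other work could happen here while the transpiler saga runs...

  // join() waits for the forked saga to finish and gives back its result
  const result = yield join(task)
  return result
}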
To test startForeman, I use generator.next() to move to the next yield statement and pass values back to the saga when needed:
describe('sagas/foreman', () => {
  describe('startForeman()', () => {
    const generator = startForeman()

    it('forks the transpiler process', () => {
      const result = generator.next()
      expect(result.value).to.deep.equal(fork(foreman.startTranspiler))
    })

    it('forks the linter process', () => {
      const result = generator.next()
      expect(result.value).to.deep.equal(fork(foreman.startLinter))
    })
  })
})
The generator.next() call returns a result object that includes metadata about the yield operation itself, including a value property that is set to the value that was yield-ed. This is standard ES2015 generator two-way communication.

Normally, I would use beforeEach and afterEach to sanitize my environment so that individual tests have no effect on each other. With sagas, however, I find it more convenient and maintainable to simply rely on the tests in a describe block being executed sequentially, which is the case if none of them are async tests. Each test carries the saga forward to the next yield statement.
Without redux-saga, I may have had to resort to dependency injection with proxyquire to replace startTranspiler and startLinter with sinon spies, verifying that they were called correctly without actually executing the functions and breaking unit isolation. Because of the "Effect" descriptor pattern, though, I'm able to merely describe the calls I want to make with fork without actually making them. I use fork in the tests as well to make sure the functions I want to call and the parameters I pass are correct.
const fakeStartTranspiler = function* () {}
const generator = startForeman()

it('forks the transpiler process', () => {
  const result = generator.next()
  expect(result.value).to.deep.equal(fork(foreman.startTranspiler))
})

it('then joins the transpiler process', () => {
  const result = generator.next(fakeStartTranspiler)
  expect(result.value).to.deep.equal(join(fakeStartTranspiler))
})
Passing a value as the first argument of generator.next() means it becomes the return value of the yield statement inside the saga.
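If that two-way communication is new to you, here's a tiny standalone example, independent of redux-saga:

function* greeter() {
  // Whatever is passed to the next() call that resumes this yield
  // becomes the value of `name`
  const name = yield 'What is your name?'
  console.log(`Hello, ${name}!`)
}

const gen = greeter()
console.log(gen.next().value) // 'What is your name?'
gen.next('Brandon')           // logs 'Hello, Brandon!'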
Waiting for events
In my startTranspiler saga, I wait for the sub-process workers to launch and send a ready event back to the foreman process. These events are dispatched as Redux actions, which I can watch for with my utility saga waitForReady:
import {apply, call} from 'redux-saga'
import {launchWorker, waitForReady, waitForGoal} from 'utils/sagas'
import {transpile} from 'state/transpiler'

export function* startTranspiler() {
  const transpiler = yield call(launchWorker, WORKER_TRANSPILER)
  yield call(waitForReady, WORKER_TRANSPILER)
Behind the scenes, the waitForReady and waitForGoal utility sagas from ship-yard yield the take effect from redux-saga, which tells the middleware to watch for the next Redux action of a given type and pass that action back to the saga. That action becomes the return value of the yield call, so you can then use the action as your saga proceeds. In this case, startTranspiler and startLinter don't care about the details of the actions, so they don't assign them to anything.
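The real implementations live in ship-yard's utils/sagas, but a waitForReady along these lines might look roughly like the sketch below - the WORKER_READY action type and its worker field are assumptions made for the example:

import {take} from 'redux-saga'

// Placeholder action type - ship-yard's real constant lives elsewhere
const WORKER_READY = 'WORKER_READY'

// Block until a ready action arrives for the given worker
// (the action's `worker` field is an assumed shape)
export function* waitForReady(worker) {
  while (true) {
    const action = yield take(WORKER_READY)
    if (action.worker === worker) {
      return action
    }
  }
}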
When the worker is ready, the saga moves on:
  while (true) {
    yield call(waitForGoal, GOAL_TRANSPILE)
    yield apply(transpiler, transpiler.send, [transpile()])
  }
}
This sets up a permanent watcher (by wrapping it in a while (true) loop) that will use waitForGoal (another utility saga that uses take behind the scenes) to respond to all GOAL_TRANSPILE events.
While call simply takes a function and calls it with the given arguments, apply binds the function to a context before calling it. This allows you to operate on and test calls to object-oriented libraries that need to access the object instance they are associated with.
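Side by side, the two effects look like this - sendMessage and worker are stand-ins for whatever function or object instance you're wrapping:

import {apply, call} from 'redux-saga'

// sendMessage and worker are illustrative stand-ins
function* example(worker, sendMessage) {
  // call() describes a plain function call: sendMessage('hello')
  yield call(sendMessage, 'hello')

  // apply() describes the same call bound to an instance: worker.send('hello')
  yield apply(worker, worker.send, ['hello'])
}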
Digging into the source
Whenever you introduce a library with a learning curve as steep as redux-saga's to your codebase, I'd encourage you to spend some time getting to know the source. The main source lives in the src directory of the redux-saga repository. Here are some highlights:
- The middleware kicks off sagas and sends actions to them from the Redux store.
- The io.js module provides the tools that help you describe your effects.
- Those effects are actually executed and the results returned by the proc.js module.
- The tests are run with tape and are predictably housed in the test directory.
- There are some illuminating examples in the examples directory that can help shed light on more involved procedures.
Tell your own tale
I hope I've been able to shed some light on the need that redux-saga meets, and how to get started with the library. I'd love to hear about your experience with it as you dig in! Find me on Twitter as @bkonkle, on GitHub as bkonkle, or on Facebook as brandon.konkle. I also frequent great open communities like Reactiflux and Denver Devs.
Thanks for reading!