Late last year, we began revamping our documentation by mapping it out with OpenAPI, a specification format that neatly describes the structure of a RESTful API such as Lob's. Using OpenAPI, we wrote detailed descriptions of each of our endpoints and their associated properties. Oftentimes, an OpenAPI description of a small API fits in a single YAML file, but for a large API like Lob's, which contains many endpoints, each with multiple operations tied to it, a single-file approach would be unwieldy to the point of impossibility. We therefore separated out each endpoint and its operations, creating models for our resources and referencing them where necessary.
Lob’s API specification is a multifile spec organized semantically, by resource, instead of syntactically, by OpenAPI element. Organizing the spec semantically reduces cognitive friction, helping developers reason from interaction (endpoints) to data (and process) design.
The more we added to the codebase, the more we realized that we needed a variety of tools besides OpenAPI in order to accomplish our goal.
## Why Spectral?
Working with a large set of YAML files can be quite headache-inducing for a few reasons. Serializing a large amount of data (say, an entire RESTful API) involves a high degree of complexity: many different YAML objects, usually spread across a series of files, since working in a single enormous YAML file quickly gets unwieldy. Misconfigurations and errors within this maze of objects and references are like needles in a haystack: nigh impossible to pick out manually. Missing just one of them could break the API interface being mapped out.
This is why linters play an integral role in many large codebases. A linter is a tool that, when run, picks out errors in code and displays them for easy fixing.
In Lob OpenAPI, that's where Spectral comes in. A good linter will find the needles for you, so you don't have to spend hours picking through a bale of erroring YAML, and Spectral, developed by Stoplight.io, is one such tool. Unlike some alternatives such as SwaggerHub, Spectral is open-source and free to use. It is also a CLI tool that yields comprehensive error messages with a single command, making it easy to ensure that your YAML files are clean and able to be bundled.
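For a quick local check, Spectral's CLI can be run directly against the spec (assuming the CLI package is installed and the spec sits at the repository root):

```sh
npm install --save-dev @stoplight/spectral-cli
npx spectral lint lob-api-public.yml
```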
## Why Prism?
Spectral catches syntax errors, but a second kind of testing is necessary: one which determines whether the API itself is specced out correctly. The best way to do that is, of course, to make requests to the API and see whether the right responses come back.
But there's an issue: APIs are how servers communicate with each other. To test, say, Lob's API, you need a server that will communicate with Lob's servers to send and receive requests and responses. That comes with its own set of issues: your server might suffer from downtime, timeouts, and the like. What you really need, then, is something that functions like a server, but without the nasty downsides of spinning up an actual one.
Enter Prism. Like Spectral, it's an open-source Stoplight.io product that can be used to mock a server for testing purposes. It's ridiculously easy to use, and ridiculously effective to boot.
## Spectral in Lob OpenAPI
In Lob OpenAPI, Spectral isn't referenced within the tests or any YAML files. Being a linter, it is instead called by a set of scripts described in our `package.json`:
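(A sketch of the relevant entries, assuming Spectral's standard CLI flags.)

```json
{
  "scripts": {
    "lint": "spectral lint lob-api-public.yml",
    "spectral": "spectral lint lob-api-public.yml --fail-severity=error --display-only-failures",
    "spectral-warn": "spectral lint lob-api-public.yml --fail-severity=warn --display-only-failures"
  }
}
```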
The `lint` script, the simplest one, calls on Spectral to determine whether the main spec file, `lob-api-public.yml`, is configured correctly. We do not actually need to specify every file name, because every file referenced in `lob-api-public.yml` will be checked (so too will the files they reference, and so on).
The `spectral` script includes two flags: `fail-severity` is set to `error`, and `display-only-failures` is enabled. This keeps the output clean, since Spectral can occasionally produce warnings which don't have much bearing on the quality of the YAML files.
`spectral-warn` changes the `fail-severity` to `warn`, providing a quick option for when it's important to check out warnings as well.
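Spectral also runs in CI. A minimal sketch of the workflow (file name, action versions, and Node setup steps are illustrative):

```yaml
# .github/workflows/spectral.yml (illustrative names)
name: Lint OpenAPI spec
on: push

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
      - run: npm ci
      # the "spectral" script from package.json above
      - run: npm run spectral
```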
As seen in the above excerpt from a GitHub Action, Spectral is also called on `push`, thus ensuring that files are checked for proper syntax before they can be merged to `main`.
## Prism in Lob OpenAPI
In `tests/setup.js` within Lob OpenAPI, we configure Prism as follows:
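(A condensed sketch rather than the file verbatim; it uses Prism's programmatic entry points, `getHttpOperationsFromSpec` and `createClientFromOperations`, and the exact option set shown is illustrative.)

```js
// tests/setup.js (condensed sketch)
const { getHttpOperationsFromSpec } = require("@stoplight/prism-cli/dist/operations");
const { createClientFromOperations } = require("@stoplight/prism-http/dist/client");

class Prism {
  constructor(specFile, lobUri, apiKey) {
    this.specFile = specFile;
    this.lobUri = lobUri;
    // Lob uses HTTP Basic auth, with the API key as the username
    this.authHeader = {
      Authorization: "Basic " + Buffer.from(`${apiKey}:`).toString("base64"),
    };
  }

  // Parse the spec and build a mock client from its operations
  async setup() {
    const operations = await getHttpOperationsFromSpec(this.specFile);
    return createClientFromOperations(operations, this.configurePrism());
  }

  configurePrism() {
    return {
      baseUrl: this.lobUri,     // the URL the mock server communicates with
      validateRequest: true,    // reject requests that don't match the spec
      validateResponse: true,   // flag responses that don't match the spec
      checkSecurity: true,
      errors: false,            // see "The downsides" below
      mock: { dynamic: false }, // serve static examples from the spec
    };
  }
}
```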
As shown here, Prism's mock server is given a URL with which to communicate. The `Prism` constructor sets the authentication header and the specification file from its parameter values, and `configurePrism` then sets the Prism options.
This in turn is exported from `setup`, to be used by the other test files with custom input:
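(Again a sketch; the export shape and the test-file wiring are illustrative.)

```js
// tests/setup.js (end of file)
module.exports = { Prism };
```

```js
// in a test file: custom input for this suite
const { Prism } = require("./setup");

const prism = new Prism(
  "lob-api-public.yml",            // specFile
  "https://api.lob.com/v1",        // lobUri
  process.env.LOB_API_TEST_TOKEN   // test API key from the environment
);
```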
The `specFile` in question is, just as it was in the previous section, `lob-api-public.yml`. `lobUri` describes the API's target URL, in this case api.lob.com/v1. Most APIs, including Lob's, will require requests to carry some kind of authentication token, which is what the `LOB_API_TEST_TOKEN` (stored in the runtime environment) represents.
This is how Prism is used:
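(A sketch: Prism's client interface is axios-like, and the request body fields here are illustrative.)

```js
// sketch of a test body; assertions depend on your test framework
prism.setup().then((client) =>
  client
    .post(
      "/templates",
      // illustrative request body for Lob's template resource
      { description: "Test Template", html: "<html>{{name}}</html>" },
      { headers: prism.authHeader }
    )
    .then((response) => {
      // the mocked, spec-validated response arrives here
      console.assert(response.status === 200);
    })
);
```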
After setting up the mock server, the client is passed into a chained promise which sends a `POST` request to a specified resource endpoint (in this case, `templates`), with a properly configured request body and authentication header. The response is then captured and tested using the testing framework of your choice.
## The downsides
Being a free, open-source tool, Spectral is a great choice for API linting. It does, however, have one drawback: depending on the number of nested YAML descriptions in your API spec, some of Spectral's error messages can be rather opaque and difficult to trace. Quite often, I've found that the file and line number included in the error are not the actual source of the problem, but merely reference the file that is. Similarly, the reported problem may not always match the actual one, particularly in the case of indentation errors.
To be fair, given the nature of YAML linting, some of these issues are difficult to resolve. When I spoke to a representative of Stoplight.io, he pledged to look into it, so perhaps a future version of Spectral will improve even further.
I haven't actually noticed any real downsides to Prism. I will warn that it has built-in errors which surface when a request is badly formatted, so if you're testing for a failure response from the API and you receive an error generated by Prism instead of one from the API you're testing, make sure Prism is set up to mute its own errors:
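(In the options sketched earlier, that's the `errors` flag.)

```js
// when building the client, keep Prism's own violations out of the response path
const client = createClientFromOperations(operations, {
  mock: { dynamic: false },
  validateRequest: true,
  validateResponse: true,
  errors: false, // don't throw Prism's errors; let the API's spec'd responses through
});
```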
## Conclusion
Creating an API spec involves a lot more components than just writing YAML files. A well-formed spec includes a variety of comprehensive tests and checks to ensure that everything is properly formatted and describes the requests and responses correctly. Understanding tools that can be used to tackle these problems, such as Spectral and Prism, is a vital part of mapping out any API with OpenAPI.