During all the research I did for my CSS testing talk, I couldn't help but spot another gap where a testing tool could be useful.
Cucumber
Cucumber is a technology used widely in automated testing setups, mostly for acceptance testing - ensuring that the thing everybody agreed on at the beginning was the thing delivered at the end.
This is accomplished by having a set of plain text files containing descriptions of different scenarios or aspects of the application, usually with a description of the actions performed by an imaginary user. You describe the situation (known as a 'Given' step), describe the user's action ('When') and describe the expected outcome ('Then').
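To illustrate, here is a made-up scenario (nothing to do with CSS testing yet) showing the three step types together:

    Feature: Newsletter sign-up
      Scenario: Visitor signs up successfully
        Given I am on the home page
        When I submit a valid email address in the sign-up form
        Then I should see a confirmation message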
The language used in these files (properly known as Gherkin) is deliberately simple and jargon-free so that all the key stakeholders in the project - designers, developers, product owners - can understand it. The files are also written in a predictable, structured style so that, behind the scenes, they can be turned into executable test code.
What occurred to me when looking into this area was that there wasn't an agreed terminology for specifying the layout/colour/look and feel of a project in plain text. Surely this would be the perfect place to drop in some cucumber salad.
What we've got now is a project based on SpookyJS - a way of controlling CasperJS (and, therefore, PhantomJS) from NodeJS - which contains the GhostStory testing steps and their corresponding 'behind the scenes' test code. There are only two steps at the moment but they are the most fundamental ones, and they can be used to build up future steps.
Implemented Steps
Here, "Element descriptor" is a non-dev-readable description of the element you want to test - "Main title", "Left-hand navigation", "Hero area call-to-action". In the project, you keep a mapping file, selectors.json
, which translates between these descriptions and the CSS selector used to identify the element in tests.
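As a rough sketch - the descriptors and selectors here are invented for illustration - selectors.json might look something like this:

    {
        "Main title": "h1.page-title",
        "Left-hand navigation": "nav.primary",
        "Hero area call-to-action": ".hero .call-to-action"
    }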
Then the "Element descriptor" should have "property" of "value"
This step reads the computed styles on an element and checks that they are what you expect them to be. I talked about something similar in an earlier post. It is related to the 'Frozen DOM' approach used by cssert, my first attempt at a CSS testing tool, but it does not actually involve a DOM snapshot.
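Assuming a "Main title" entry in selectors.json, a feature file using this step might contain something like this (the property values are invented):

    Scenario: Main title typography
      Then the "Main title" should have "font-size" of "32px"
      And the "Main title" should have "color" of "rgb(51, 51, 51)"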
Then the "Element descriptor" should look the same as before
This uses the 'Image Diff' approach. You specify an element and render the browser output of that element to an image. The next time you run the test, you do the same and check to see if the two images differ. As mentioned many times before, this technique is 'content-fragile' but can be useful for a specific subset of tests or when you have mocked content. It can also be particularly useful if you have a 'living styleguide' as described by Nico Hagenburger. I've got some ideas about CSS testing on living styleguides that I'll need to write up in a later post.
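As a sketch of how that might fit into a scenario - the 'Given' step here is hypothetical, only the 'Then' step comes from GhostStory:

    Scenario: Hero area has not changed visually
      Given I am on the styleguide page with mocked content
      Then the "Hero area call-to-action" should look the same as before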
Future Steps
Off the top of my head, there are a couple of other generic steps that I think would be useful in this project.
Then the "Element descriptor" should have a "property" of "value1", "value2", ..., or "valueN"
This variation on the computed style measurement allows an arbitrary-length list of values. As long as the element's computed value matches at least one of the values in the list, the step counts as a pass. This could be used to ensure that all text on a site uses one of a set number of font sizes or that all links take their colours from a predefined palette.
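A palette check along those lines might read like this (the descriptor and colour values are invented):

    Scenario: Links stick to the colour palette
      Then the "Article link" should have a "color" of "rgb(51, 102, 153)", "rgb(153, 51, 102)", or "rgb(102, 153, 51)"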
Then the "Element descriptor" should look the same across different browsers.
This would build on the existing image diff step but include multiple browser runners. Just now, the image diffs are performed using PhantomCSS, which is built on top of the WebKit-based PhantomJS. This would ideally integrate a Gecko or Trident renderer process so that the images generated from one could be checked against another. I still feel that image diff testing is extremely fragile and doesn't cover the majority of what CSS testing needs to do, but it can be a useful additional check.
The aim
I'm hoping this can sit alongside the other testing tools gathering on csste.st, where it can help people get a head start on their CSS testing practices. What I'm particularly keen on with the GhostStory project is that it can pull in other tools and abstract them into testing steps. That way, we can take advantage of the best tools out there and stuff them into easily digested Cucumber sandwiches.
Try it
The GhostStory project is, naturally, available on GitHub. More usefully, however, I've been working on a fork of SpookyJS that integrates GhostStory into an immediately usable tool.
Please check out this project and let me know what you think. I might rename it to distinguish it from the original SpookyJS if I can figure out exactly how to do that and maintain upstream relationships on GitHub.