When GhostStory first came out, it grabbed a fair bit of interest. Unfortunately, despite the number of downloads, I got the impression actual usage was low. There were quite a few stars on the project, a couple of forks and no issues. That's not a good sign. If your project has no issues, it doesn't mean your code is perfect, it means nobody's using it.
After asking on Twitter and generally 'around', it emerged that, although people liked the idea:
1. initial setup was too tricky
2. test maintenance was a hassle
3. it's not WebKit that needs testing the most
Number 3 might seem odd but it has become a fact that, thanks to the excellent tooling, most web developers use a WebKit variant (Chrome, Chromium, Safari, etc.) as their main browser when building. This means they generally spot the problems there but can miss the ones in IE. This isn't to say WebKit shouldn't also be tested, but GhostStory was built on PhantomJS - a WebKit variant - and therefore only picked up problems that occurred there.
I've been working evenings and weekends for the last several months to improve the CSS testing setup on here.com and I think we've gotten somewhere. For a start, the name has changed...
This is the bulk of the original GhostStory project but, instead of running through PhantomJS, it now uses Selenium via the WebDriver protocol. This means you can run the same CSS tests - still written in Cucumber - against any WebDriver-capable browser: Chrome and Firefox and Opera and PhantomJS and Internet Explorer and Mobile Safari on iOS and the Android Browser and and and… you get the idea.
Installation
It’s a simple npm-based install:
npm install -g hardy
You can then call it by passing a folder of .feature files and step definitions.
hardy testfolder/
Any relevant files it finds will be automatically parsed.
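For illustration, a minimal feature file might look something like the following. The Then step uses the GhostStory-style step format covered below; the Given step wording, the URL and the values are invented for the example:

Feature: Homepage styling

  Scenario: Main title
    Given I am on "http://localhost:8000/"
    Then the "Main title" should have "font-size" of "32px"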
Okay, that's most of the difficulty out of the way. A simple install, a simple test run and, hopefully, a simple integration into your build system. The only thing left to solve is how to make it easier to create the Cucumber files in the first place.
Hardy Chrome Extension
Making your initial test cases is now as simple as opening Chrome DevTools and clicking around a bit. Well, almost. Still under development, the Chrome extension helps you make feature files by navigating around the site you want to test and capturing the styles you want to test for. A Hardy-compatible Cucumber file and the accompanying element selector map are generated, ready to be dropped into your features folder.
What's missing?
Is there anything missing from this flow that you'd like added? A Grunt task to set it up? Prefer something else over Cucumber? Want a Maven plugin instead of a Grunt one? Just let me know. I'm not promising I'll do it, just that I'd like to know.
The image diff code is heavily borrowed from PhantomCSS. In fact, if PhantomCSS could be run through Selenium, I'd switch over to it straight away.
Project structure and the automatic way Cucumber files, steps and suchlike all play nicely together mostly comes from WebDriverJS and CucumberJS.
Note: It's just Hardy, not Hardy.JS. The idea here is just to show how to set up basic CSS tests and provide one (JS) solution to the problem. It'd be nice if Hardy could include common JBehave steps for CSS testing, too. And whatever it is Ruby people use.
I've been working on a new little side project recently (more info soon) which involves a Chrome DevTools extension. These are Chrome extensions that aren't aimed at changing your usual browsing experience like a normal extension but at modifying or providing extra behaviour for the Chrome DevTools panel. It's a little bit meta.
The theory behind them isn't that tricky and, if you have any experience with standard Chrome extensions, you'd probably 'get' DevTools extensions quite easily. Unfortunately, I was coming in with no prior knowledge at all and got quite lost for a few days. The docs are very helpful and the Google Group is handy but it still took me a while.
The main area I had difficulty with was the message passing. Sending info from my panel in the DevTools window to the page being inspected and vice versa proved to be... complicated. I've used message events before but it still took me a while to figure out what was going where. It seems there are three separate execution contexts:
The Panel
The Background
The Page
They each have access to different bits of the Chrome extension API and the inspected page. I won't go into the details of that here as others have explained it better. What I will do, though, is share this project - Devtools Extension Starter Kit. This does nothing more than create a panel containing three buttons. The first executes a snippet of JS in the context of the inspected page, the second attaches a complete JS file to the inspected page (allowing execution) and the third replaces the entire inspected page with a button. When you click on this button, it sends data back from the inspected page to the DevTools panel.
Essentially, it's all the communication channels I found I needed for my extension and should be a useful starting point for anyone wanting to do something similar.
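As a taste of the simplest of those channels - the panel running a snippet in the inspected page - here's a minimal sketch. It assumes a button with the id 'run-snippet' in the panel's HTML; the expression being evaluated is just an example:

// panel.js - runs in the DevTools panel context
document.getElementById('run-snippet').addEventListener('click', function () {
    // Evaluates the expression in the context of the inspected page
    // and passes the result back to the panel
    chrome.devtools.inspectedWindow.eval('document.title', function (result, exceptionInfo) {
        if (exceptionInfo) {
            console.log('Eval failed', exceptionInfo);
        } else {
            console.log('Inspected page title: ' + result);
        }
    });
});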
I also heard today about a new Yeoman generator that makes building Chrome Extensions easier. It would probably have made the first bit of my investigations a lot easier if I'd known about that beforehand.
During all the research I did for my CSS testing talk, I couldn't help but spot another gap where a testing tool could be useful.
Cucumber
Cucumber is a technology used widely in automated testing setups, mostly for acceptance testing - ensuring that the thing everybody agreed on at the beginning was the thing delivered at the end.
This is accomplished by having a set of plain text files containing descriptions of different scenarios or aspects of the application, usually with a description of the actions performed by an imaginary user. You describe the situation (known as a 'Given' step), describe the user's action ('When') and describe the expected outcome ('Then').
The language used in these files (properly known as Gherkin) is deliberately simple and jargon-free so that all the key stakeholders in the project - designers, developers, product owners - can understand it. The files are also written in a predictable, structured style so that, behind the scenes, they can be turned into testable code.
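For example, a scenario might read like this (an invented example, not tied to any particular step library):

Feature: Account withdrawal

  Scenario: Withdraw cash with a sufficient balance
    Given my account has a balance of £100
    When I withdraw £20
    Then my balance should be £80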
What occurred to me when looking into this area was that there wasn't an agreed terminology for specifying the layout/colour/look and feel of a project in plain text. Surely this would be the perfect place to drop in some cucumber salad.
What we've got now is a project based on SpookyJS - a way of controlling CasperJS (and, therefore, PhantomJS) from NodeJS - which contains the GhostStory testing steps and their corresponding 'behind the scenes' test code. There are only two steps at the moment but they are the most fundamental ones, which can be used to build up future steps.
Implemented Steps
Here, "Element descriptor" is a non-dev-readable description of the element you want to test - "Main title", "Left-hand navigation", "Hero area call-to-action". In the project, you keep a mapping file, selectors.json, which translates between these descriptions and the CSS selector used to identify the element in tests.
Then the "Element descriptor" should have "property" of "value"
This uses the computed styles on an element, checking whether they are what you expect them to be. I talked about something similar in an earlier post. It's related to the 'Frozen DOM' approach used by my first attempt at a CSS testing tool, cssert, but this way doesn't actually involve a DOM snapshot.
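At its core, the check boils down to a comparison against window.getComputedStyle. A simplified sketch of the idea - not GhostStory's actual implementation:

// Given a selector from selectors.json, a CSS property and an expected
// value, compare against the browser's computed style for that element
function assertComputedStyle(selector, property, expected) {
    var element = document.querySelector(selector);
    var actual = window.getComputedStyle(element).getPropertyValue(property);
    if (actual !== expected) {
        throw new Error(property + ' is "' + actual + '", expected "' + expected + '"');
    }
}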
Then the "Element descriptor" should look the same as before
This uses the 'Image Diff' approach. You specify an element and render the browser output of that element to an image. The next time you run the test, you do the same and check to see if the two images differ. As mentioned many times before, this technique is 'content-fragile' but can be useful for a specific subset of tests or when you have mocked content. It can also be particularly useful if you have a 'living styleguide' as described by Nico Hagenburger. I've got some ideas about CSS testing on living styleguides that I'll need to write up in a later post.
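Since the project drives CasperJS, the capture half of that step amounts to something like CasperJS's captureSelector - again, a sketch of the idea rather than GhostStory's actual code:

// Render only the element under test to an image file
casper.then(function () {
    this.captureSelector('screenshots/main-title.png', '#content h1');
});
// A separate diff step then compares this capture against the baseline
// image saved on a previous run and fails if the two images differ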
Future Steps
Off the top of my head, there are a couple of other generic steps that I think would be useful in this project.
Then the "Element descriptor" should have a "property" of "value1", "value2", ..., or "valueN"
This variation on the computed style measurement allows an arbitrary-length list of values. As long as the element being tested matches at least one of the values, the step counts as a pass. This could be used to ensure that all text on a site is one of a certain number of font-sizes or that all links use colours from the predefined palette.
Then the "Element descriptor" should look the same across different browsers.
This would build on the existing image diff step but include multiple browser runners. Just now, the image diffs are performed using PhantomCSS, which is built on top of PhantomJS, which is WebKit-based. This would ideally integrate a Gecko or Trident renderer process so that the images generated by one could be checked against another. I still feel that image diff testing is extremely fragile and doesn't cover the majority of what CSS testing needs to do but it can be a useful additional check.
The aim
I'm hoping this can sit alongside the other testing tools gathering on csste.st where it can help people get a head-start on their CSS testing practices. What I'm particularly keen on with the GhostStory project is that it can pull in other tools and abstract them into testing steps. That way, we can take advantage of the best tools out there and stuff them into easily digested Cucumber sandwiches.
Try it
The GhostStory project is, naturally, available on GitHub. More usefully, however, I've been working on a fork of SpookyJS that integrates GhostStory into an immediately usable tool.
Please check out this project and let me know what you think. I might rename it to distinguish it from the original SpookyJS if I can figure out exactly how to do that and maintain upstream relationships on GitHub.
This is a collection of code snippets for various common tasks you might need to accomplish with the App.net API. Most of these are focused on creating or reading geo-tagged posts. They require a developer account on app.net and at least one of an App ID, App Code, App Access Token or User Access Token. The calls here are implemented using jQuery but that's just to make it easier to copy-paste into the console to test them out (so long as you fill in the blanks).
An important thing to bear in mind is the possibility for confusion between a 'stream' and 'streams'. By default, a 'stream' is a discrete chunk of the 20 latest posts served at a number of endpoints. This is the open, public, global stream:
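https://alpha-api.app.net/stream/0/posts/stream/global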
On the other hand, 'streams' are long-poll connections that serve up any matching posts as soon as they are created. The connection stays open as long as there is something there to receive the response. Streams are available under:
https://alpha-api.app.net/stream/0/streams
Totally not confusing. Not at all.
Creating a user access token
Required for any user-specific data retrieval. The only tricky thing you'll need to think about here is the scope you require.
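Since this is the standard OAuth flow rather than an API call, there's not much jQuery to show - you send the user to the authorisation page and read the token off the redirect. A sketch of the client-side (implicit) flow; the URL parameters are as I remember them from the App.net OAuth docs, so double-check them, and {CLIENT_ID} and {REDIRECT_URI} are your own registered values:

// Send the user off to authorise your app. When they approve, they
// return to {REDIRECT_URI} with the token in the URL fragment:
// #access_token={USER_ACCESS_TOKEN}
window.location = "https://account.app.net/oauth/authenticate" +
    "?client_id={CLIENT_ID}" +
    "&response_type=token" +
    "&redirect_uri={REDIRECT_URI}" +
    "&scope=write_post";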
Using a user access token to create a post (with annotations)
Requires
User Access Token
text to post
The text is essential if you don't mark a post as 'machine_only'. The annotations here are optional. Annotations don't appear in the global stream unless the requesting client asks for them.
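Roughly, with jQuery - the post text and coordinates are placeholders, and net.app.core.geolocation is the core geolocation annotation type:

$.ajax({
    url: "https://alpha-api.app.net/stream/0/posts?include_annotations=1",
    type: "POST",
    contentType: "application/json",
    headers: { "Authorization": "Bearer {USER_ACCESS_TOKEN}" },
    data: JSON.stringify({
        "text": "Testing a geo-tagged post",
        "annotations": [{
            "type": "net.app.core.geolocation",
            "value": {
                "latitude": 52.5,
                "longitude": 13.4
            }
        }]
    }),
    success: function (response) {
        console.log(response);
    }
});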
Retrieve the global stream, including geo-annotated posts if there are any
Requires
User Access Token
This is a very basic call to retrieve the global stream but it also instructs the endpoint to return us all annotations and include machine-only posts.
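Something along these lines - include_annotations and include_machine are the query flags as I remember them, so treat the exact names as assumptions:

$.ajax({
    url: "https://alpha-api.app.net/stream/0/posts/stream/global?include_annotations=1&include_machine=1",
    headers: { "Authorization": "Bearer {USER_ACCESS_TOKEN}" },
    success: function (response) {
        // response.data is the array of the 20 latest posts
        console.log(response.data);
    }
});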
Creating an app access token
client_credentials is one of the four types of grant_type specified in the OAuth 2.0 specification. I had difficulty getting this to work when using a data object:
var data = {
    "client_id": "{CLIENT_ID}",
    "client_secret": "{CLIENT_SECRET}",
    "grant_type": "client_credentials"
};
The client_credentials kept throwing an error. Instead, sending this as a string worked fine:
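// The same values as a single form-encoded string
var data = "client_id={CLIENT_ID}&client_secret={CLIENT_SECRET}&grant_type=client_credentials";

// A sketch of the POST itself, against App.net's standard OAuth token endpoint
$.post("https://account.app.net/oauth/access_token", data, function (response) {
    console.log(response.access_token);
});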
One other thing to note is that this bit should be done server-side. This will throw a bunch of "…not allowed by Access-Control-Allow-Origin…" errors if you do it via jQuery.
Returns
{
    "access_token": "{APP_ACCESS_TOKEN}"
}
Creating a streams format
Now you have your app access token, you can use it to tell the service what kind of data you want back. The streams offered in the API have two quite powerful aspects. Firstly, filters let you run many kinds of queries on the data before it is streamed to you, so you don't need to receive and process it all. Secondly, the decoupling of filters from streams means you can specify the data structure and requirements you want once, then just access that custom endpoint to get the data you want back any time.
Requires
App access token
This first example just creates an unfiltered stream endpoint.
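Something like the following, with jQuery - the request body parameters are as I remember them from the streams docs, so treat the exact names as assumptions:

$.ajax({
    url: "https://alpha-api.app.net/stream/0/streams",
    type: "POST",
    contentType: "application/json",
    headers: { "Authorization": "Bearer {APP_ACCESS_TOKEN}" },
    data: JSON.stringify({
        "object_types": ["post"],
        "type": "long_poll"
    }),
    success: function (response) {
        // response.data.endpoint is the URL to connect to for the
        // long-poll stream
        console.log(response.data.endpoint);
    }
});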
Using Filters to create a stream of geotagged posts
We'll specify some requirements for our filter now so that it only returns a subset of posts. The rules we're specifying here are:
At least one item in the "/data/annotations/*/type" field
must "match"
the value "net.app.core.geolocation"
Requires
User access token
The field is specified in 'JSON Pointer' format. Within the response, there is a 'data' object and a 'meta' object. The data contains an 'annotations' object which contains an array of annotations, each of which has a type. This is represented as /data/annotations/*/type.
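A sketch of the filter creation with jQuery - the clause schema (match_policy, clauses, operator and so on) is as I remember it from the filter docs, so verify the exact field names before relying on it:

$.ajax({
    url: "https://alpha-api.app.net/stream/0/filters",
    type: "POST",
    contentType: "application/json",
    headers: { "Authorization": "Bearer {USER_ACCESS_TOKEN}" },
    data: JSON.stringify({
        "name": "Geotagged posts",
        "match_policy": "include_any",
        "clauses": [{
            "object_type": "post",
            "operator": "matches",
            "field": "/data/annotations/*/type",
            "value": "net.app.core.geolocation"
        }]
    }),
    success: function (response) {
        // Pass response.data.id as the "filter_id" when creating a
        // stream (as in the earlier example) to get back an endpoint
        // URL that only serves geotagged posts
        console.log(response.data.id);
    }
});

Create a stream with that filter_id and you'll get back an endpoint URL.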
Open that URL up in your browser (seeing as we're testing) and, in a different tab, create a geo-tagged machine-only post (see above). Your post will appear almost instantly after you've submitted it.