Well, kinda. Before giving my Automated CSS Testing talk at CSS Summit in July, I recorded a video of it as a backup. If everything fell apart during my presentation, Ari could seamlessly switch over to the One I Made Earlier. Fortunately, the Internet did what it does best and just worked fine.
That means I have an Automated CSS Testing video all ready and waiting that nobody's seen!
Yes, it does get progressively darker as you watch. That's not your eyes playing tricks on you, that's me sitting on my balcony recording as the sun went down.
While developing the website for Hardy, it seemed obvious that I should be writing Hardy-based tests for it. What better way to figure out how to simplify testing than to see which bits of the process were the hardest?
The site is hosted on GitHub using the gh-pages branch of the hardy.io project. I've played with various toolchains on GitHub Pages in the past - csste.st uses Wintersmith, for example - but I wanted to follow my own best-practice suggestions and automate as much as possible of the process of getting an idea from inside my head, through the text editor, through a bunch of tests and out onto production. The main Hardy project uses Travis CI to run unit and acceptance tests on every commit, so it seemed obvious to use it for this too. What I've ended up with is, I think, a nice, simple process. It's designed for sites hosted on GitHub Pages but is applicable to any site run through Travis.
The manual steps of the process are:
Make a change in your website's master branch
Preview changes locally
Commit changes and push to GitHub
After this, the automation takes over:
Travis pulls the latest commit
Builds the website project into the dist folder on the master branch
Launches a web server on Travis
Runs the Hardy tests against the local web server
On a successful test run, pushes (via git) to the gh-pages branch
The full process is used to build hardy.io. The .travis.yml, package.json and post_build.sh files are the main ones to look at.
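To give a rough feel for how those pieces fit together, here's a sketch of the kind of .travis.yml that drives a setup like this. It's a minimal illustration rather than a copy of the real file: the task names, folder names, port and the contents of the helper script are all assumptions, so check the actual files in the repo for the working versions.

    language: node_js
    node_js:
      - "0.10"
    before_script:
      # install the Hardy test runner
      - npm install -g hardy
    script:
      # build the site into dist/ (placeholder build task)
      - grunt build
      # serve the built site locally in the background so there's something to test
      - (cd dist && python -m SimpleHTTPServer 8000 &) && sleep 3
      # run the Cucumber-based CSS tests against the local server
      - hardy features/
    after_success:
      # only reached if the tests pass: push the built site to gh-pages
      - ./post_build.sh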
When GhostStory first came out, it grabbed a fair bit of interest. Unfortunately, despite the number of downloads, I got the impression actual usage was low. There were quite a few stars on the project, a couple of forks and no issues. That's not a good sign. If your project has no issues, it doesn't mean your code is perfect, it means nobody's using it.
After asking on Twitter and generally 'around', it emerged that, although people liked the idea:
1. initial setup was too tricky
2. test maintenance was a hassle
3. it's not WebKit that needs testing most
Number 3 might seem odd but it's a fact that, thanks to the excellent tooling, most web developers use a WebKit variant (Chrome, Chromium, Safari, etc.) as their main browser when building. This means they generally see the problems there but might miss the ones in IE. That isn't to say WebKit shouldn't also be tested, but GhostStory was built on PhantomJS - a WebKit variant - and therefore only picked up problems that occurred there.
I've been working evenings and weekends for the last few months to improve the CSS testing setup on here.com and I think we've gotten somewhere. For a start, the name has changed...
This is the bulk of the original GhostStory project but, instead of running through PhantomJS, it now uses Selenium via the WebDriver protocol. This means you can run your same CSS tests - still written in Cucumber - against any WebDriver-capable browser. That means Chrome and Firefox and Opera and PhantomJS and Internet Explorer and Mobile Safari on iOS and the Android Browser and… you get the idea.
Installation
It’s a simple npm-based install:
npm install -g hardy
You can then call it by passing a folder of .feature files and step definitions:
hardy testfolder/
Any relevant files it finds will be automatically parsed.
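To give a rough idea of what goes in that folder, here's a sketch of a feature file. The step wording, URL and values below are illustrative only - the real step grammar comes from the steps Hardy ships with (and any you add yourself) - but the shape is plain Cucumber:

    Feature: Homepage styling

      Scenario: Main heading looks right
        Given I visit "http://localhost:8000/"
        Then the "Main heading" element should have "font-size" of "32px"
        And the "Main heading" element should have "color" of "rgb(51, 51, 51)"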
Okay, that's most of the difficulty out of the way. A simple install, a simple test run and, hopefully, a simple integration into your build system. The only thing left to solve is how to make it easier to create the Cucumber files in the first place.
Hardy Chrome Extension
It's now as simple to make your initial test cases as opening Chrome DevTools and clicking around a bit. Well, almost. Still under development, the Chrome extension helps you make feature files by navigating around the site you want to test and capturing the styles you want to check. A Hardy-compatible Cucumber file and the accompanying element selector map are generated, ready to be dropped into your features folder.
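The element selector map itself is nothing scary: conceptually, it's just a lookup from the friendly names used in the feature file to real CSS selectors. Something along these lines, where the names and selectors are purely illustrative - the extension generates the real thing for you:

    {
        "Main heading": "#content h1",
        "Navigation": "nav .primary"
    }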
What's missing?
Is there anything missing from this flow that you'd like added? A Grunt task to set it up? Prefer something else over Cucumber? Want a Maven plugin instead of a Grunt one? Just let me know. I'm not promising I'll do it, just that I'd like to know.
The image diff code is heavily borrowed from PhantomCSS. In fact, if PhantomCSS could be run through Selenium, I'd switch over to it straight away.
The project structure and the automatic way Cucumber files, steps and suchlike all play nicely together mostly come from WebDriverJS and CucumberJS.
Note: It's just Hardy, not Hardy.JS. The idea here is just to show how to set up basic CSS tests and provide one (JS) solution to the problem. It'd be nice if Hardy could include common JBehave steps for CSS testing, too. And whatever it is Ruby people use.
I've been working on a new little side project recently (more info soon) which involves a Chrome DevTools extension. These are Chrome extensions that aren't aimed at changing your usual browsing experience like a normal extension but at modifying or adding extra behaviour to the Chrome DevTools panel. It's a little bit meta.
The theory behind them isn't that tricky and if you have any experience with standard Chrome extensions, you'd probably 'get' devtools extensions quite easily. Unfortunately, I was coming in with no prior knowledge at all and got quite lost for a few days. The docs are very helpful and the Google Group is handy but it still took me a while.
The main area I had difficulty with was message passing. Sending info from my panel in the devtools window to the page being inspected, and vice versa, proved to be... complicated. I've used message events before but it still took me a while to figure out what was going where. It turns out there are three separate execution contexts:
The Panel
The Background
The Page
They each have access to different bits of the Chrome extension API and the inspected page. I won't go into the details here as others have explained it better. What I will do, though, is share this project - Devtools Extension Starter Kit. It does nothing more than create a panel containing three buttons. The first executes a snippet of JS in the context of the inspected page, the second attaches a complete JS file to the inspected page (allowing execution) and the third replaces the entire inspected page with a button. When you click on that button, it sends data back from the inspected page to the devtools panel.
Essentially, it's all the communication channels I found I needed for my extension and should be a useful starting point for anyone wanting to do something similar.
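To give a flavour of the wiring involved, here's a heavily trimmed sketch of the sort of code each context ends up holding. The file names and message shapes are my own for illustration rather than lifted from the Starter Kit, but the API calls are the standard Chrome extension ones:

    // devtools.js - runs in the devtools context and registers the panel
    chrome.devtools.panels.create("My Panel", "icon.png", "panel.html", function (panel) {
        // the panel is now available in the devtools window
    });

    // panel.js - runs inside the panel page
    // Run a snippet of JS directly in the context of the inspected page
    chrome.devtools.inspectedWindow.eval("document.title", function (result, isException) {
        console.log("Inspected page title:", result);
    });
    // Open a long-lived connection so the background page can relay messages back
    var port = chrome.runtime.connect({ name: "panel" });
    port.onMessage.addListener(function (message) {
        console.log("From the inspected page:", message);
    });

    // content.js - content script injected into the inspected page
    // Send data back towards the panel (via the background page)
    chrome.runtime.sendMessage({ clicked: true });

    // background.js - glues the other two together
    var panelPort;
    chrome.runtime.onConnect.addListener(function (port) {
        if (port.name === "panel") { panelPort = port; }
    });
    chrome.runtime.onMessage.addListener(function (message) {
        if (panelPort) { panelPort.postMessage(message); }
    });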
I also heard today about a new Yeoman generator that makes building Chrome extensions easier. Knowing about that beforehand would probably have saved me a lot of time in the first bit of my investigations.