One of the interesting things about web development is the number of different layers involved. On a low-down technical level, there are bits clogging the tubes; higher up, you have TCP/IP, then HTTP. A little further up still, you have the HTML of the page and eventually, at the top, there's what it looks like when it arrives on the user's screen. A lot of debugging can be done at the source-code level but there is a level between that and the way it looks - the Document Object Model (DOM). This is the web page as it appears in the 'mind of the browser', and some of the most useful debugging tools allow you to look at it directly.
All the main browser platforms have tools either built-in or available as easy-to-install add-ons which allow you to play with the DOM, add elements to it, take them away, change behaviours or styles on-the-fly and generally feel like a brain surgeon sitting with an electrode in one hand and a nice, squishy brain in the other. Playing with your pages at this level is an extremely useful way to get a good feel for the structure of a page and how the browser 'thinks'.
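If you've never done this, open the console in one of these tools on any page and try something like the following. The element choices are just examples (it assumes the page has an `h1`), but the idea is the same whatever you poke at:

```js
// Grab an element, restyle it and bolt a brand-new element onto the
// DOM - none of this touches the original source code.
var heading = document.querySelector('h1');   // assumes the page has one
heading.style.color = 'crimson';              // restyle on the fly

var note = document.createElement('p');
note.textContent = 'I was never in the source code.';
heading.parentNode.insertBefore(note, heading.nextSibling);

// Behaviours can be changed just as easily as styles.
heading.addEventListener('click', function () {
    alert('The brain twitched.');
});
```

Watch the page change as each line runs - that's the DOM being edited live.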
Some of these DOM analysis tools allow access to another behind-the-scenes level of the process - network traffic. Every file - HTML, JavaScript, image, CSS, XML, etc - starts at the server at one end and finishes at the browser. Using a network analysis tool is like standing at the front door of a party watching everyone come in. You can see where they came from and what they looked like when they arrived so that if they look different later on, you can tell they've had a few too many trips to the punch bowl. Or something. I think the analogy went a bit wrong somewhere. You know what I mean.
There's a good, in-depth but understandable article on how browsers actually work over on the HTML5 Rocks site.
I had an idea around Christmas for a new game design. It was basically to be a single- or multi-player game where you would tie knots in a piece of ‘string’. The single-player game would focus on discovering new patterns, while the multi-player one would be a time-based challenge: each player would tie a knot, pass it over to their opponent, and the first to untie the one they received would win.
I built a basic prototype and had some fun inventing a few interesting knots (my favourite being ‘Yoda’) but didn't get round to doing anything with it. As I'm tidying up all my projects before emigrating to Germany, I figured I should just go ahead and put this out into the world.
The system is built using an invisible 3x3 grid and four quadratic curves. Each curve's start, control and end points sit at the centre of a tile. When you move a tile, it swaps position with the tile nearest to where you drop it; this works with multiple tiles simultaneously if you're on a multi-touch device. You can see the tiles if you enable debug mode, and you can also switch between the two colour schemes it has at the moment.
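To make that concrete, here's a rough sketch of the drawing side in canvas-flavoured JavaScript. This isn't the game's actual code - the tile size and the example pattern are invented - but it shows how three tile centres define each quadratic curve:

```js
// Sketch only: tiles are cells in an invisible 3x3 grid and a knot is
// four quadratic curves whose start, control and end points are tile
// centres. TILE and the example pattern below are made-up values.
var TILE = 100; // px

function tileCentre(index) {   // index 0-8, left-to-right, top-to-bottom
    return {
        x: (index % 3) * TILE + TILE / 2,
        y: Math.floor(index / 3) * TILE + TILE / 2
    };
}

// Each curve is [startTile, controlTile, endTile].
var knot = [[0, 4, 8], [2, 4, 6], [1, 3, 7], [5, 4, 3]];

function drawKnot(ctx, curves) {
    curves.forEach(function (c) {
        var s = tileCentre(c[0]), ctrl = tileCentre(c[1]), e = tileCentre(c[2]);
        ctx.beginPath();
        ctx.moveTo(s.x, s.y);
        ctx.quadraticCurveTo(ctrl.x, ctrl.y, e.x, e.y);
        ctx.stroke();
    });
}

// Assumes the page has a <canvas> element at least 300px square.
drawKnot(document.querySelector('canvas').getContext('2d'), knot);
```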
Yoda in Knot form
The only addition I've made since Christmas is a system that allows players to save knots back into it. I've only built in 22 patterns, so if you make a nice, interesting or funny one, give it a recognisable name (nothing too rude, please) and save it; it will then be available to everybody who plays. You can also set whether the pattern should count as ‘matched’ when rotated 90° or 180°, or flipped vertically or horizontally. Calculating the isomorphisms (when two knots look the same to the viewer but aren't necessarily the same to the computer) was probably the trickiest bit of the system.
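For the curious, the matching logic can be sketched like this (again, not the actual game code, just an illustration of the idea). Each tile index is pushed through a grid transformation, and the knot is reduced to a canonical form so that a curve traced in the opposite direction, or the same four curves stored in a different order, still compare as equal:

```js
// Map a tile index 0-8 through a 3x3 grid transformation.
function rotate90(i) { var r = Math.floor(i / 3), c = i % 3; return c * 3 + (2 - r); }
function flipH(i)    { var r = Math.floor(i / 3), c = i % 3; return r * 3 + (2 - c); }
function flipV(i)    { var r = Math.floor(i / 3), c = i % 3; return (2 - r) * 3 + c; }

function transform(knot, fn) {
    return knot.map(function (c) { return c.map(fn); });
}

// Reduce a knot to a canonical string: a quadratic curve looks the same
// drawn in either direction, and curve order doesn't matter visually.
function canonical(knot) {
    return knot.map(function (c) {
        return c[0] <= c[2] ? c : [c[2], c[1], c[0]];
    }).map(String).sort().join('|');
}

// Does `candidate` match `pattern`, given the transformations the
// pattern's author flagged as acceptable?
function matches(candidate, pattern, allow) {
    var variants = [pattern];
    if (allow.rot90)  variants.push(transform(pattern, rotate90));
    if (allow.rot180) variants.push(transform(transform(pattern, rotate90), rotate90));
    if (allow.flipH)  variants.push(transform(pattern, flipH));
    if (allow.flipV)  variants.push(transform(pattern, flipV));
    var target = canonical(candidate);
    return variants.some(function (v) { return canonical(v) === target; });
}
```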
Go, make.
If you're interested in taking the idea further, grab the code from GitHub and make something from it. The usual rules apply: if you make something from it that earns you a huge tonne of never-ending cash, a credit would be nice and, if you're feeling generous, you could always buy me a larger TV.
There are quite a few regulations to bear in mind and comply with when developing a website for a Government organisation or any large public body. This has led to a lot of sites being developed in a very defensive manner, ensuring safe compliance at the expense of a great and consistent user experience.
This video features a presentation about how large, publicly accountable institutions can and should embrace the latest in web technologies without sacrificing standards. By embracing them, in fact.
The content of this was developed while planning and building the National Museums Scotland website, which launched last November. The messages presented are applicable to museums, galleries, swimming pools, councils - anywhere, really.
If you're a techie person in the cultural or government sector, you might find this useful in convincing others to take advantage of the latest cool (and useful) technologies.
The source for the slides is available online, although it's mostly WebKit-friendly. I realise the irony of a presentation about cross-platform HTML5 not being a great example of it itself, but it does degrade adequately, and if I get the time in the future, I'll tidy it up. An actually good (and relevant) example of cross-platform web technologies is the National Museums Scotland website itself, which performs fantastically across all manner of platforms.
Twitter has an amazing amount of data, and there's no end to the number of ideas a coffee-fuelled mind can come up with to take advantage of it. One of the greatest advantages of Twitter – its immediacy, its currency, its of-the-moment-ness – is, however, also one of its drawbacks. It is very easy to find out what people are thinking about now, yesterday or a few days ago, but trying to find something from more than a week ago is surprisingly tricky. The standard, unauthenticated search only returns the last 7 days of results; beyond that, there's nothing. You can get around this by building a custom application which authenticates itself correctly with Twitter and provides a level of traceability, but that's quite tricky.
The easiest way to archive search results for your own personal use is actually surprisingly simple. Every search results page includes an RSS feed link, which is updated every time there are new results. Simply subscribe to this feed with a feed reader (Google Reader is simple and free) and your results will be archived permanently. This is great if you're searching for personal-interest stuff but doesn't work so well if you want to share the results with the public.
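If memory serves, the feed link on a results page follows this pattern - the hashtag here is just an example, so substitute your own URL-encoded search term and check the actual link on the page rather than trusting me:

```
http://search.twitter.com/search.atom?q=%23MusMem
```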
This was the problem I was presented with when I was asked to build a tweet-archiving mechanism for Museum140's Museum Memories (#MusMem) project. Jenni wanted some kind of system that would grab search results, keep them permanently and display them nicely. I didn't want to create a fully OAuth-enabled, authenticated search system, simply because that seemed like more work than should be necessary for such a simple job. Instead, I went down the RSS route, grabbing the results every 10 minutes and storing them in a database using RSSIngest by Daniel Iversen. The system stores the unique ID of each tweet along with the time it was tweeted and the search term used to find it. The first time a tweet is displayed, a request is made to the Twitter interface asking for the full details, not only of the tweet but also of the user who tweeted it, and these are stored in the database as well. This way, we don't make too many calls to Twitter and we don't get blocked.
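Sketched out, the flow looks something like this. The real implementation is PHP and MySQL, so this is JavaScript-flavoured pseudocode, and every helper named here (fetchFeed, twitterLookup, the db object, render) is made up for illustration:

```js
// Illustrative sketch only - the real system is PHP + MySQL using
// RSSIngest; fetchFeed, twitterLookup, db and render are hypothetical.

// Runs every 10 minutes: pull the search feed and store the bare
// minimum about each new result.
function ingest(searchTerm) {
    var entries = fetchFeed('http://search.twitter.com/search.atom?q=' +
                            encodeURIComponent(searchTerm));
    entries.forEach(function (entry) {
        // Only the ID, timestamp and search term are stored up front.
        db.insertIfNew('tweets', {
            tweet_id: entry.id,
            tweeted_at: entry.published,
            search_term: searchTerm
        });
    });
}

// The first time a tweet is displayed, fetch the full details - of the
// tweet and its author - from Twitter and cache them, so repeat views
// never cost another API call. That's what keeps us from being blocked.
function display(tweetId) {
    var tweet = db.find('tweets', tweetId);
    if (!tweet.text) {
        var full = twitterLookup(tweetId);   // one API call, first view only
        db.update('tweets', tweetId, {
            text: full.text,
            author_name: full.user.screen_name,
            author_avatar: full.user.profile_image_url
        });
        tweet = db.find('tweets', tweetId);
    }
    render(tweet);
}
```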
If you want your own Tweet Archive, I've put the code on GitHub for anyone to use. It requires PHP, MySQL and, ideally, a techie-type to set it up.
Archiving Tweets - Non-technical
With the technical side out of the way, we're left with the human issues to deal with. If you're automatically saving every tweet containing a particular phrase, all a malicious person needs to do is include that phrase in a spam-filled tweet or one with inappropriate language and, suddenly, it's on your site too. If you aren't going to approve each tweet individually before it is saved, you must keep a vigilant eye on things.
The other thing which has turned out to be a problem is the Signal-to-Noise ratio. When we initially decided on the hashtag #MusMem, nobody else was using it. To use Social Media parlance, there was no hashtag collision. The idea was to encourage people to use it when they wanted their memories stored in the MemoryBank. Unfortunately, it is now being used by anyone tweeting anything related to Museums and Memories. This is particularly troublesome at the moment as this month is International Museum month, one of the themes of which is ‘Memory’ (which is why we built the MemoryBank in the first place). This means that the permanent memories we want stored (the Signal) are getting lost in the morass of generic Museum Memories (the Noise). There is no way to solve this problem algorithmically. If we are to thin it down, we actually need to manually edit the several thousand entries stored.
If anyone can think of a solution to this issue, please let everybody know – the world needs you.