The minimal-ui meta tag in iOS 7.1

iOS 7.1 brings us yet another meta tag to use – minimal-ui. You might be able to guess what it does: it hides the majority of the browser chrome as soon as the page loads, much like when you scroll:

With and without minimal-ui
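
If you want to try it on your own pages, it’s just another value in the viewport meta tag – something like this (the width and initial-scale parts are whatever you’re already using):

    <meta name="viewport" content="width=device-width, initial-scale=1, minimal-ui">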

Try it out. It’s a welcome improvement, especially after Safari’s disastrous changes with the release of iOS7, and feels like a tacit confession that shutting off the ability to hide the navigation bar (as you could in iOS6) was a mistake.

It also fixes what is easily my least favourite part of the iOS7 Safari UI: stealing tap events that happen at the bottom of the screen to show the bottom navigation bar.


With minimal-ui, the only way to restore the bottom navigation bar is to tap the address bar at the top.

But…

This feels a little weird. Users are now going to have different UIs depending on what meta tag the site does or doesn’t have. If users can’t rely on tapping the bottom of the screen to bring up the bottom bar, why have that functionality at all? Surely it would be better to be consistent.

It’s also curious that Apple went to these lengths to create a new meta tag while still not supporting the JavaScript Fullscreen API. Many of the people looking to hide the browser UI are making interactive experiences like games, and being able to go full screen would be even better than minimal-ui – as well as being an actual cross-platform solution.

For now we’ll have to throw the Fullscreen API on the pile marked “Please, Apple. Please”, along with WebRTC and WebGL. But the minimal-ui meta tag is at least a start.

WebRTC DataChannels or: How I Learned To Stop Worrying And Flap My Arms

I had an idea. A stupid idea.

I had discovered that mobile browsers have JavaScript APIs for accessing accelerometer and gyroscope data and wanted to play around with the concept – I hadn’t seen them used in many places. So naturally my thoughts turned to creating a version of Flappy Bird where you paired your phone and computer together, then used the phone as a Wii-like controller to flap your arms.

Now, hear me out – it made sense. Sort of. After the meteoric rise and disappearance of Flappy Bird there were dozens of open-sourced clones out there. And a ‘flapping’ motion seemed like it would be relatively simple to detect. But there was a problem. How do I pair the computer and phone together? My first thought was to use WebSockets – so off I went. One npm install later I had socket.io installed on a server. Pretty quickly I had a system set up where the ‘host’ (that is, the big-screen device) is assigned a random number, and the ‘client’ (phone) prompts the user to enter it. Then it just takes a simple chunk of code to get them talking:
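
Roughly speaking – and this is a condensed sketch with made-up event names, not the real FlappyArms code – the server just relays messages between everyone who joined with the same code:

    // server.js – relay anything sent under a pairing code to the other side
    var io = require('socket.io')(8080);

    io.on('connection', function (socket) {
      // Host and phone both join a 'room' named after the pairing code
      socket.on('join', function (code) {
        socket.join(code);
      });

      // Forward flap events to everyone else in the room
      socket.on('flap', function (code, data) {
        socket.broadcast.to(code).emit('flap', data);
      });
    });

    // phone.js – pairingCode is whatever the user typed in
    var socket = io('http://my-server:8080');
    socket.emit('join', pairingCode);
    window.addEventListener('devicemotion', function (e) {
      socket.emit('flap', pairingCode, e.accelerationIncludingGravity.y);
    });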

So far, pretty simple. Try it out! …and get very, very frustrated very quickly.

The latency is too damn high

You flap your arms. Anything up to a second later the bird might do the same. So, why is it so slow? This is where WebSockets shows its weakness – it still relies on a central server to post messages between clients. And that central server is in Amazon’s data centre in Northern Virginia. Even from my desk in New York, the route there isn’t exactly simple:

[Screenshot: the route from my desk in New York to Amazon’s data centre in Northern Virginia]

I dread to think what it’s like in Europe. A better solution was needed. I started googling “peer to peer websockets” and discovered a Stack Overflow question that led me in the direction of WebRTC.

But WebRTC is for webcams

I’d read about WebRTC before, in the context of replacing Flash and Skype for video chats. In my ill-informed way, that’s what I thought the “communication” part in “real time communication” meant. But no – it also has a capability called DataChannels, which are exactly what they sound like: peer to peer data connections. You can do a variety of things with them, like share files or instant message, but let’s focus on the very important goal of making this arm flapping game more responsive.
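
For the curious, the raw DataChannel API is pleasingly small – although, as we’re about to see, this sketch on its own won’t actually connect you to anyone:

    // Create a peer connection and a named data channel on it
    var pc = new RTCPeerConnection();
    var channel = pc.createDataChannel('flaps');

    channel.onopen = function () {
      channel.send('flap!');   // strings (or binary) go straight to the other peer
    };
    channel.onmessage = function (e) {
      console.log('peer says:', e.data);
    };
    // ...all of the offer/answer signalling needed to actually reach a peer is missing here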

“Marco”

So, a utopian server-less future? Unfortunately not. WebRTC’s peer to peer communication can do many things – but finding other peers is not among them. So we still need a server to introduce the two peers to each other – just like we’re doing with WebSockets.

For this, I turned to PeerJS – a combination of node-powered server and client-side JS library that wraps around WebRTC functions. So, at this point I have both socket.io and PeerJS running on my server, and socket.io and PeerJS client libraries on the client. Feels like a waste. So we should get rid of socket.io now, right?

“Polo”

Wrong. For two reasons:

  1. WebRTC DataChannel browser support is not great. Especially on mobile – Apple deserves a lot of shaming for not supporting WebRTC at all.
  2. Peer to peer connections are tricky. Maybe you’re behind a firewall, or maybe there’s a NAT in the way. With WebRTC you can use a TURN server as a proxy between two clients, but in our case we’re already doing that with WebSockets.

So, we’ll keep WebSockets as a backup. Negotiate the pairing with WebSockets, then ‘elevate’ to WebRTC if both clients are capable and accessible – as demonstrated in this pretty awful set of diagrams:

There’s an added benefit here: the server is only used to create connections. Once that’s done, the clients disconnect, freeing up server resources for other players. If at any point in that process the WebRTC route fails, we just keep WebSockets open and communicate through that.

Our simplified code is only slightly more complex:
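
Here’s a sketch of that upgrade (illustrative names only – bird, pairingCode and friends don’t exist outside this snippet), using PeerJS on top of the existing socket.io pairing:

    // Host: create a Peer against our own PeerJS server and share its id
    // with the phone over the WebSocket we already have.
    var peer = new Peer({ host: 'my-server', port: 9000, path: '/' });
    peer.on('open', function (id) {
      socket.emit('peer-id', pairingCode, id);
    });

    // ...then wait for the phone to dial in directly.
    peer.on('connection', function (conn) {
      conn.on('data', function (flap) {
        bird.flap(flap);   // no round trip to Virginia required
      });
    });

    // Phone: try to connect, but keep the WebSocket as a fallback.
    socket.on('peer-id', function (hostId) {
      var phonePeer = new Peer({ host: 'my-server', port: 9000, path: '/' });
      phonePeer.on('open', function () {
        var conn = phonePeer.connect(hostId);
        conn.on('open', function () { useWebRTC = true; });
        conn.on('error', function () { useWebRTC = false; });   // stick with socket.io
      });
    });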

So, try it out. I think you’ll prefer it. Faster reactions, fewer server resources used, happier users. Unless they’re using an iPhone, of course.

Postscript: The Source Code

If you want to take a peek under the hood at FlappyArms, all the code is on GitHub. It’s really messy for the time being (being a two-day hack project and all), but I’m still adding features to it, and I hope to get it tidied up along the way.

How many people are using your site offline?

Back in the mists of the mid-to-late 2000s, life was simple. Users had a keyboard and a mouse. They had a screen that was wider than it was tall and usually had a resolution of at least this by that. And they were online. Of course they were – how else would they load your page?

Nowadays nothing is sacred. People browse on their phones, turn them sideways, shake them, swipe, pinch, tap and drop them in the toilet. You have to accommodate it all (well, perhaps not the last one), but nothing is trickier than connectivity. Most of the time users are online. But sometimes they’re offline. I saw a guy on the (underground, offline) subway today, paging through some tabs he’d preloaded onto his iPhone. His experience wasn’t great – he tried loading up a photo from its thumbnail, but the full-size version wasn’t cached offline. He was looking at some related links, but couldn’t tap any of them. He closed the tab and moved onto the next one.

Now, I know what you’re thinking (apart from “wow, you creepily stare at people’s phones on the subway?”) – he should just download an app! Offline storage, background syncing – it’s the ideal solution. Except he wasn’t using one. Maybe it’s ignorance. Maybe he doesn’t like apps. Either way, we’re in the business of catering to users, not telling them that they’re wrong, so it got me thinking – how many people do this? Should I worry about this? I have no idea.

Getting an idea: the OfflineTracker

So, let’s track this. Where to start? Well, the good news is that browsers have implemented some new events on the window object – online and offline. They’re far from perfect. The only truly reliable method is to regularly poll a remote resource to see if you can reach it – like offline.js does. However, firing up a cell data connection every 30 seconds will drain a phone’s battery, and is A Bad Thing. So we’ll make do. I made a quick gist:

https://gist.github.com/alastaircoote/9350466

(in CoffeeScript, see it in JS if you like) with a concept for this tracking. It’s more of a proof of concept than production-ready code, but the general flow is this:

  1. User goes offline. We record the current date and time as the ‘start’ of our track, and store it in localStorage.
  2. User comes back online. We stop tracking, update the end time to be the actual end time, and run the function provided to send this data wherever we want it to go.
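
In plain JS (the gist itself is CoffeeScript), the core of that flow looks something like this – saveSession is a stand-in for whatever you use to ship the data off:

    function OfflineTracker(saveSession) {
      var KEY = 'offline-session';

      window.addEventListener('offline', function () {
        // Gone offline: record the start (and a provisional end) time
        localStorage.setItem(KEY, JSON.stringify({ start: Date.now(), end: Date.now() }));
      });

      window.addEventListener('online', function () {
        var saved = localStorage.getItem(KEY);
        if (!saved) { return; }
        var session = JSON.parse(saved);
        session.end = Date.now();
        // Only throw the record away once we know it was saved successfully
        saveSession(session, function () {
          localStorage.removeItem(KEY);
        });
      });
    }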

Now, there are a few holes in this. So, we also do the following:

  • Update the ‘end of tracking’ time every 30 seconds. In theory we should be able to catch any and all events that would signify the end of tracking, but we can’t (what if the browser crashes? What if the user turns off their phone?). So every 30 seconds we update the ‘end’ of the tracking, and save to localStorage. If all hell breaks loose, our numbers will only be up to 30 seconds off.
  • Hook into the Page Visibility API. This gives us an event that tells us when the user moves away from our page (usually by changing tabs). This is pretty crucial because it’s going to stop us tracking time when we’re offline and in an inactive tab.
  • Provide a callback to the save function. We’re dealing with bad connectivity – we can’t guarantee that our data will get saved correctly. So we don’t delete our tracking data until the function runs the callback provided.
  • Check on page load for any saved data. So when subway guy views your page offline, closes the tab and moves on to the next thing, you can still recapture that data the next time he visits your site.

So, now you have the start and end times of user offline browsing sessions. What you do with it is up to you – maybe only 0.5% of your users access your site this way and you shouldn’t care at all. But maybe your user base consists entirely of cave-dwellers. Either way it would be good to know, right?

The undecided fate of local news

Henry Blodget bought a newspaper today. He bought it because it contained a story he couldn’t read elsewhere- as he quite rightly states, this whole “write original content and get people to pay for it” concept is an integral part of the future of the news industry. The problem is, how often does that paying part happen? Henry is not going to buy a subscription to the Inquirer and Mirror- he only wanted to read one story. Local newspapers can’t survive on people buying one copy every three months.

So, a counter example, of sorts. A few years ago I lived in Victoria, British Columbia and would frequently walk past the offices of the Times Colonist newspaper, dreaming of a day when I could work there. So, when I visited the city again recently I was disappointed to find that my former dream desk probably isn’t even owned by the Times Colonist any more- their ongoing difficulties have forced them to lease out half of their office.

Like many struggling media organisations, the Times Colonist has opted to put a paywall on their online content. While there aren’t any official figures to gauge success, my conversations with friends in the city (a demographic that admittedly skews younger than Victoria’s average age) suggest that people aren’t buying. Put simply, they don’t think that the Times Colonist provides $9.95-worth of content each month. And indeed, how can it? Newspapers like the New York Times, Washington Post and the Globe and Mail offer unparalleled coverage from international news desks, recipes, book reviews and much more- for just a few dollars extra.

But they tend not to cover current events on Vancouver Island, Canada. People want and need that coverage. How can local newspapers stay alive? Indulge me while I describe an entirely outlandish and unrealistic concept.

Few local newspapers write their own international (or even national) coverage- for instance, the Times Colonist gets its international stories from the AP and national ones from The Canadian Press. But that’s never really highlighted, because you don’t really want to show your readers that the paywalled content they are reading is available for free elsewhere.

So let’s turn that on its head. Don’t syndicate content that is available for free. Let’s have the paywalled big guys- the New York Times, the National Post, the Washington Post, whoever- make their content available for syndication. Have them host it if you like, but more importantly, keep the branding. Show your readers that they’re getting Pulitzer-winning coverage as part of their subscription to the local newspaper. In this model few people outside of the Beltway have a subscription to the Washington Post, and few outside of New York have a subscription to the Times. People in Victoria subscribe to the Times Colonist, with national news presented by The National Post (by a cruel twist of fate, the owners of the recently-paywalled National Post used to also own the Times Colonist- you have to imagine it would have been a very straightforward syndication deal if they still did).

The change to the local newspapers would be relatively minimal- they retain their existing relationships with customers and the city that surrounds them. They would have to charge more for subscriptions, but would be providing far more value along with it. The bigger organisations, on the other hand, would be facing a sea change. Suddenly the majority of their money would come from business to business transactions, not business to consumer- the impact of such a change can’t be overstated. And while they would still sell a complete newspaper/online offering to local customers, the vast majority of readers would consume content in a much more piecemeal fashion (though I’d argue this is already happening online today).

Doing something like this would involve a large newspaper risking everything by going all-in on an utterly unproven model – and I don’t think we’re at that level yet. And any newcomer faces a chicken and egg scenario- no-one is going to buy your content until you have a brand to back it. But you can’t build a brand without people buying your content.

So this post is really just an idle thought exercise. But the question remains- local newspapers are a vital part of the news industry, so how do they reinvent themselves and stay with us?

Charts can say anything you want them to

I just read an article on Business Insider that charted the downfall of MSNBC. It was a fascinating read, but I couldn’t help but get distracted by the charts they used, especially given that the article title was:

The Stunning Downfall Of MSNBC In Five Charts

Powerful stuff. The problem is that the charts are deceptive. For example, the first one:

[Screenshot: Business Insider’s chart of MSNBC viewership, with a y-axis starting at 48,000]

Oof. MSNBC has lost over half its audience. Except, wait, it hasn’t. The chart starts at 48,000, and only scales another 18,000. Here’s what that chart should look like:

Every chart on the page is the same (though to a lesser extent) - the only exception was the chart showing Fox News’ huge lead in audience, which correctly starts at zero:

[Screenshot: the Fox News chart, with its y-axis correctly starting at zero]

I’m not trying to attribute malice here- as I was putting that example chart together, I was surprised to see that Google Docs does this automatically, so it could well be a simple mistake. But consider this a reminder: always check your axes, otherwise you might end up misleading your readers.


Crunching subway data- a New Yorker’s busiest stations

There are many reasons to complain about the subway system here in New York. It’s underfunded, the air conditioning breaks, and if you’ve ever tried relying on the G line you’ve probably ended up with a deep, serious commitment-phobia. But there are many bright spots of the subway system, and as a tech-head developer I’d like to draw your attention to one in particular- data.

The MTA makes a ton of data available. The entire subway and bus systems are available as GTFS feeds, allowing you to set up your own instance of OpenTripPlanner for all your subway routing needs- something I used in the aftermath of Sandy to set up an emergency trip planner (and OpenPlans then used to create some great heatmaps). It has data on each and every Metrocard swipe in the city, and, er, Pantone colours for each of the subway lines. It also has a crazy amount of data on each subway turnstile in the city, which is what I’ve been playing around with lately.

The most popular stations in New York are already known- the MTA itself has them listed. Unsurprisingly, Times Square tops the list by a large margin, with every other large station following closely behind. They don’t share how they came to that number, but I assume that it includes everyone who travels on the subway- commuters, tourists, even the Mariachi guys traveling from train to train. For a heatmap side project I’m currently working on, I wanted to know what the most popular weekday commuter stations are.

So I downloaded some turnstile data from mid-January to late April of this year to use as my sample set. The format the MTA uses is… weird, to say the least. The basics are there- it doesn’t record every turnstile turn, but rather keeps cumulative totals during the day- 3am, 7am, 12pm, and so on. For some reason it has eight repeating sets of columns that should really be rows, so I threw together a quick and dirty node.js script to flatten these out (and merge all my CSV files into one) and imported the data into a Postgres database.
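
Flattening it is nothing fancy – a rough sketch of the idea, assuming the old layout of three ID columns followed by eight repeating (date, time, description, entries, exits) groups per line:

    // flatten.js – turn the MTA's repeating column groups into one row per reading
    var fs = require('fs');

    var out = [];
    fs.readFileSync('turnstile_data.csv', 'utf8').split('\n').forEach(function (line) {
      var cols = line.trim().split(',');
      if (cols.length < 8) { return; }

      var id = cols.slice(0, 3);   // booth, unit and turnstile identifiers
      for (var i = 3; i + 5 <= cols.length; i += 5) {
        // One flattened row: the ID columns plus a single reading
        out.push(id.concat(cols.slice(i, i + 5)).join(','));
      }
    });

    fs.writeFileSync('flattened.csv', out.join('\n'));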

First off, I needed to get my commuter totals. I did this by creating a view that ran an exceptionally messy SELECT statement which selected the first row of exit data available after or on 3am, then matched it up with the first result after 11am- while excluding all weekend results:
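
Reconstructed roughly – this is the shape of the idea rather than the original view, and the table and column names are placeholders for whatever your import produces:

    -- For each turnstile and weekday: the first exit reading at/after 3am,
    -- matched with the first at/after 11am, and the difference between them
    CREATE VIEW commuter_exits AS
    SELECT morning.turnstile_id,
           morning.reading_date,
           late.exits - morning.exits AS rush_hour_exits
    FROM (SELECT DISTINCT ON (turnstile_id, reading_date)
                 turnstile_id, reading_date, exits
          FROM readings
          WHERE reading_time >= '03:00'
            AND extract(dow FROM reading_date) NOT IN (0, 6)   -- skip weekends
          ORDER BY turnstile_id, reading_date, reading_time) AS morning
    JOIN (SELECT DISTINCT ON (turnstile_id, reading_date)
                 turnstile_id, reading_date, exits
          FROM readings
          WHERE reading_time >= '11:00'
          ORDER BY turnstile_id, reading_date, reading_time) AS late
      ON late.turnstile_id = morning.turnstile_id
     AND late.reading_date = morning.reading_date;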

As I said, awful SQL. If anyone has any suggestions for improvements I’d love to hear them. But it worked. For each turnstile, I now have the number of exits taken during the morning rush hour(…ish). Unfortunately, I quickly realised that I’d need to clean the data up- it appears that at certain points, turnstiles just go absolutely haywire and you end up with -200000 exits on one day, which can really mess with your totals. I discovered that I could easily chop this data out just by calculating how far that day’s result deviated from the turnstile’s overall median. The anomalous results were so different that I could set the cut-off point at 10x the median and still exclude them.
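
That cleanup step is easy enough to sketch in a few lines of JS, if SQL isn’t your thing – readings here would be the list of daily exit counts for a single turnstile:

    // Throw away days where a turnstile has clearly gone haywire
    function median(values) {
      var sorted = values.slice().sort(function (a, b) { return a - b; });
      var mid = Math.floor(sorted.length / 2);
      return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
    }

    function removeHaywireDays(readings) {
      var cutoff = median(readings) * 10;
      return readings.filter(function (count) {
        return count >= 0 && count <= cutoff;
      });
    }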

With that done, it was only a short step to aggregate the data up to the station level, and discover the most popular commuter stations, or, The Stations With The Most Turnstile Exits During Peak-ish Hours on Weekdays:

  1. 14th St Union Square
  2. 42nd St Grand Central
  3. 42nd St Times Square
  4. 34th St Penn Station
  5. Fulton St
  6. 47-50th St Rockefeller Plaza
  7. 34th St Herald Square
  8. 23rd St (6)
  9. Chambers St
  10. 59th St – Columbus Circle

So while many of the results are similar to the overall station popularity, there are some definite differences- Union Square jumping to the top being one of the most noticeable. Be careful not to read too much into these numbers- as I said, it’s based on a limited dataset of a few months. And I’d welcome any corrections on my working from people smarter than myself!

Customizing your iOS webapp icon- per user

I threw together a little mobile subway-themed webapp hack last weekend, called Subwalkway. I had to make a quick icon for it, and my immediate thought was to make it look like an NYC subway route sign. Luckily for me, the W line was decommissioned a few years ago, so it’s free for me to steal. But what colour to use? Or should I make some hideous beach ball of all of them? No- that’ll just remind me how slow my Macbook is these days. But I got to thinking- maybe I don’t need to choose. The icon is specified with a <link/> tag after all, why don’t I just randomise it? So I did:

Gotta catch 'em all!

The logic is very simple/stupid. On page load, I run a Math.random(), and use that number to choose one of the items in an array of file names. Set the <link rel="apple-touch-icon-precomposed" /> tag, and we’re done.
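
The whole thing boils down to a handful of lines – the file names here are stand-ins rather than the actual Subwalkway assets:

    // Pick one of the line colours at random and point the home screen icon at it
    var icons = ['icon-red.png', 'icon-orange.png', 'icon-green.png', 'icon-blue.png'];
    var chosen = icons[Math.floor(Math.random() * icons.length)];

    var link = document.createElement('link');
    link.rel = 'apple-touch-icon-precomposed';
    link.href = chosen;
    document.getElementsByTagName('head')[0].appendChild(link);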

Obviously my example is a little pointless, but it serves as a proof of concept- iOS will respect whatever icon metadata changes you make after page load. So, there are some more reasonable applications out there- if I expanded the app to Boston, say, I could modify my app icon to better fit the Boston T style. Or if your app works in numerous countries/cities, you could make an Apple Maps-style icon- only with local landmarks.

Dear Google, let’s talk about webapps.

I threw together a little hack project last weekend, called Subwalkway. It’s mobile-only, and it’s a bit of a mess- part UI experiment, part subway navigation tool, and rough around the edges. But if you’re on an iPhone, try adding it to your home screen. It looks right, doesn’t it? An icon (with an easter egg!) that seamlessly blends with the phone interface, a splash screen when you launch it, and no navigation chrome when you’re using it. If you squint a little, you could almost imagine that it was a native app.

Now try doing that on an Android phone. Actually, don’t bother, I’ll save you the effort- it does precisely none of these things. If you’re using Google’s Chrome browser you can’t even add a site to your home screen*. So, today I ask: Google, what the hell? From email to entire office suites, you’ve spent years trying to convince us that the web is the future for software- you even went as far as to create an entire OS based around it. We could be making responsive webapps that work great on ChromeOS and adapt to being perfect first-class citizens in Android- if you let us. Instead we’re forced to provide sub par in-browser experiences, or wrap our apps up in a clunky WebView frame and lose all of the performance and automatic updating that HTML5 can provide.

Other phone manufacturers are catching up to the game- even though WebOS is dearly departed, BlackBerry 10 lets us make HTML-based apps. Firefox OS makes them a first class citizen. Even Microsoft allows you to write apps in JavaScript. Why has it been left to Apple- who have every incentive to trap me in the sickly embrace of their Objective C App Store- to be the pioneer in webapps?

I want to make cross platform apps using HTML technologies, and I want to make them great. How can it be that Google is the one standing in the way of me achieving that?

(Hat tip to Peter Nixey, whose blog post title I shamelessly ripped off)

* As pointed out in the comments, it actually is possible. But it’s little wonder that I never discovered it, given the steps required.

How I served 100k users without breaking the server- or a dollar bill.

Or, “The incredible, affordable S3”

I made a silly thing. The Associated Press’s Twitter account had just been hacked and sent out a fake tweet about an explosion in the White House. The Dow Jones immediately dropped 150 points. So I made a silly thing. It’s called “Is My Twitter Password Secure”, and the best description I could probably give it is a PSA on phishing sites. Try it, you might laugh. If you’re like many people, you’ll be so convinced by it that you’ll send me abuse on Twitter. That’s OK.

But anyway, it “went viral”. Tweet after tweet arrived in my “mentions” box as people enjoyed the joke and shared it with their friends. Despite being hammered with requests, the server never slowed and never crashed. Because there was no server. At least, not that I had to worry about, because I hosted the entire thing on Amazon S3. Not only is it super-reliable, it’s also super cheap- serving 758,509 requests cost me… thirty cents.

Of course, you can’t run any dynamic scripting on S3, but you’d be surprised the number of times you don’t have to. For example, the promo site for my taxi app does use dynamic content- the map tile images are generated by an EC2 instance I have running TileStream- but the vast majority of the page runs quite happily on S3. JSONP or CORS mean that you can quite effectively run an ‘API’ server on an EC2 instance, while leaving the majority of your static HTML on an S3 bucket.

While the steps to set this up aren’t complicated, I thought it might be worth creating The Definitive Guide To Hosting A Web Site On S3.

The Definitive Guide To Hosting A Web Site On S3

The steps are actually really quite simple- make a bucket, set up a CNAME, upload your files. But let’s go one by one:

Make a bucket

Every domain name you want to use has to be a different bucket. Make the name of the bucket the exact domain name you wish to use (including the subdomain, like www.):

[Screenshot: creating a bucket named after the domain]

Then you need to make this bucket publicly browsable. Select your bucket, and open up the Properties tab. Under ‘static hosting’, you just need to check “Enable website hosting”:

[Screenshot: the bucket’s static hosting properties, with “Enable website hosting” checked]

You’ll notice that there is also a box for ‘index document’ – you ought to be fine to leave this as it is (no point trying to host PHP files on here, folks) but if you’re one of those people you might need to change it to “index.htm”.

Ta-da! You now have a static web site up and running.

Upload your files

Unfortunately, S3 doesn’t offer anything so simple as FTP access. That said, there are many clients out there set up for S3 uploads- my personal favourite is Cyberduck: it’s donationware and supports just about every uploading scenario you could wish for. All it needs is your AWS API key and it’ll list all your buckets out for you, and let you drag and drop your files straight into your bucket.

Get a better domain name

That endpoint URL is pretty gross. Thankfully, it’s very simple to get a better one mapped to your S3 site. I’m using Namecheap in these screenshots, but any DNS provider ought to be able to do the same thing. Go to edit your DNS records, and add a CNAME record for your chosen subdomain that points to your gross endpoint URL:

[Screenshot: adding the CNAME record in Namecheap]

That ‘IP ADDRESS/ URL’ field’s full value is www.ismytwitterpasswordsecure.com.s3-website-us-east-1.amazonaws.com. – that last full stop on the end is important. And the URL redirect above simply directs all users to the www subdomain if they haven’t already entered it.

And that’s it. Your static site is up and running on your pretty domain name. Now you’re free to play horrible tricks on the world without worrying that they’ll crush your server and/or wallet in return. Use this power wisely.

So, you’re going to do the StartupBus.

Congratulations, you are one of the least rational people I know. That’s a good thing. But you need to prepare yourself for what’s coming- it’s an amazing experience and you get to celebrate at SXSW afterwards, but before that comes the most intense bus journey of your life. I should know- I did it last year, and I thought I’d pass on what little advice I have for those who set off in a few days.

You will be stressed. Your teammates will be stressed. You will argue. That’s OK.

After you get into teams, you’ll have a glorious three hours or so where you’re unstoppable. Then you’ll start getting down to details, and people will have different ideas about what direction to take. You’ll have a healthy debate about it. A few hours later, you’ll be sleep deprived and suddenly the smallest question will become an existential debate and you’re right, dammit, why won’t everyone else realise that? If it’s 3am and you’re yelling in each other’s faces about what colour gradient to use in your logo, go to bed. Even if you come to a final decision you’ll probably all wake up the next morning and think it’s wrong.

The important part isn’t trying to avoid arguing (because it will happen), it’s waking up the next morning, forgiving each other for being such assholes and just getting on with it. It’s no coincidence that one of the most successful teams on the NYC bus last year was the one making Happstr.

The internet connection is going to be terrible. No, worse than that.

I was told before I left that the internet would be patchy. “Ah, it’ll be fine”, I thought to my idiot self as I pitched an idea based around streaming music from the cloud. It was a huge, huge mistake- even when we did have a working connection, it could take up to thirty seconds just to start buffering a song. When we didn’t, well, I couldn’t do a thing. So, a few developer-specific tips:

  • Don’t work on an idea that needs to stream lots of data. If you do (don’t), then make sure you can complete the entire process locally. Don’t use, say, the Spotify, Rdio or YouTube APIs, to pick examples at random.
  • Whenever you do use an external API, download sample responses for every call you make, and ideally give yourself a switch between live data and locally cached stuff.
  • Download as much documentation as you can. I was extremely glad to have offline copies of both the jQuery and jQuery Mobile documentation, for example. If in doubt just clone entire GitHub repos at random. You’ve got the disk space.

Be sociable.

Yes, the StartupBus is a competition and you should be very focused on the project your team is creating. But the chances are that the real, long-term benefit you’re going to get from this trip is the people you’re going to meet and the connections you’ll make. So, you know, hang out. If it’s anything like last year, you’ll have numerous chances to meet people from other teams and other buses as you make your way to Austin. Take advantage of that. And although you’re competitive, the other teams aren’t your enemy. Talk to each other about your ideas and get opinions.

Then, when you’ve all arrived in Austin and the competition is over, you can all sit around and laugh about it, as if it was a weird, blurry, sleep-deprived dream. Because it kind of will be.