The disappointing reality of WKWebView

I’ve been playing around with embedding WebViews in native apps for some time now, but I’ve never managed to get them to a point where I’d be happy to use them in an app people would actually use. Their bad reputation is justified – aside from anything else, they’re just slow.

But all of that was supposed to change with iOS 8. Apple surprised everyone with the announcement that it had created WKWebView – a new UIWebView alternative that runs out-of-process, bridges JavaScript to native code, and brings the super-fast Nitro JavaScript engine along with it. Great! Except…

It isn’t finished yet.

At first look, it works great. Point it at a URL, it loads it, the JavaScript is fast… bliss. This will be great in the embedded WebViews in Facebook, Twitter, Chrome and the like. Now try loading a file you’ve stored locally. It won’t work. OK, read the HTML manually, then use loadHTMLString to put the HTML in the webview. That will work. But it still won’t load any external files, like JS or CSS.

You can, as I did, go further down the rabbit hole and embed your CSS and JS in-page, have everything work and then discover that remote requests have weird character set issues, or that WKWebView has no applicationCache functionality. As best I can tell, the more you work with it, the more issues you’ll discover that will hold you back.

So don’t use it for app views yet.

That’s my recommendation, at least. For opening web content in modal views, sure. To integrate with your app? No. There are too many unknown caveats and undocumented issues. You would think that the inability to load local files would have been a notable enough problem to be fixed in iOS 8.1, but no dice.

I’m still optimistic, and I hope that in time WKWebView will be the saviour we’ve all been waiting for. But it isn’t yet.

Actual real, actual one pixel borders in iOS Safari.

iOS 8 has finally fixed a mobile web annoyance that’s been kicking around for years. If you want to skip the history lesson, go here for the details.

When is a pixel not a pixel? Almost always, if you’re on mobile.

Here’s the problem: in the beginning, smartphone web development was based entirely around the iPhone. While it defaulted to rendering pages as if they were on a desktop (and, indeed, still does) we could change that with a quick meta tag:

<meta name="viewport" content="width=device-width">

As the tag might imply, it rendered the page at a native resolution for the iPhone – 320 pixels wide. Unfortunately, this was too easy. Everyone hard-coded the concept of 320px wide pages everywhere, so when Android phones with 480px wide screens arrived, they had to lie about their device-width – they said it was 320 pixels and scaled up accordingly. With the market (or, more accurately, mind share) still dominated by the iPhone, this went mostly unnoticed.

(This is actually a great example of why the mobile web is so broken compared to native development. Market considerations will always, always win over standards compliance – if you ever expect anything different you will be disappointed. Native environments only have one implementation, so they avoid this.)

Then the iPhone 4 came stomping in and ruined everything.

Retina! It’s great. Looks fantastic. Ruined pixel measurements forever. The iPhone 4 screen was 640 pixels wide, but Apple quickly discovered what the Android team had – if you make device-width equal 640 pixels, a lot of web sites are going to look awful. So Apple lied, too. They also introduced a new property, devicePixelRatio, accessible through JavaScript and CSS media queries. If you wanted to get the actual width of the screen, you could do this:

window.innerWidth * window.devicePixelRatio

Gross? Gross. But it worked. If you don’t care about the actual physical size (and mostly, you don’t) you can carry on using these make-believe jumbo pixels. A much better alternative is to abandon pixels altogether and start thinking in terms of percentages, or, even better, viewport widths (vw) and viewport heights (vh). There was one problem still left over, though – borders.

Although you measure borders the exact same way you do anything else in CSS, you don’t usually want to. For instance, when I say I want a <div> tag to be 80vw wide, I mean that I want it to take up exactly 8/10ths of the screen width. When I specify a border-width of 1px, what I really mean is “as thin a border as I can get” – it’s just that one pixel is the smallest measurement available.

Except, with device pixel ratios, it isn’t. On a retina iPhone a one pixel border is actually 1 * window.devicePixelRatio – i.e. two device pixels – so your borders never look as thin and sharp as the display could actually render them. The logical answer is to specify a border-width of 0.5px – but no browser was capable of interpreting that. Until now.

iOS8 saves, but it’s still kind of gross.

iOS 8 does understand 0.5px border widths, and they work exactly as you’d imagine. Except they still don’t work anywhere else. Even more frustratingly, the value is in CSS limbo. If 0.5px were considered an invalid CSS declaration, you could just do:
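
    div {
        border-width: 1px;   /* what you'd want browsers to fall back to... */
        border-width: 0.5px; /* ...if they treated this as invalid and skipped it */
    }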

The cascading nature of CSS would mean that browsers would ignore the 0.5px value and revert to the 1px one. But browsers do consider it valid – they just round it to zero and you get no border.

Alex Dieulot has come up with a quick hack that tests for 0.5 pixel width support, and applies a CSS class accordingly. If you’re using a CSS preprocessor you can make a pretty simple function that will tidy up your source code, but it still feels ugly, doesn’t it? Here’s hoping we see wider support soon – Chromium has a ticket open for it, at least.
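
The test boils down to a few lines. Roughly speaking – this is a sketch of the approach rather than Alex’s exact code, and the hairlines class name is made up for the example:

    if (window.devicePixelRatio && window.devicePixelRatio >= 2) {
        var testElement = document.createElement('div');
        testElement.style.border = '0.5px solid transparent';
        document.body.appendChild(testElement);

        // With no content, the element's height is just its top and bottom borders:
        // 1 CSS pixel if the browser honoured 0.5px, 0 if it rounded them down.
        if (testElement.offsetHeight === 1) {
            document.documentElement.classList.add('hairlines');
        }

        document.body.removeChild(testElement);
    }

Your stylesheet can then scope the 0.5px rules to .hairlines and keep the 1px versions as the default.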

Unfortunately, the saga isn’t over – the iPhone 6 Plus reports a device pixel ratio of 3 (even though that isn’t accurate) and the Android side of the world now has a dizzying array of device pixel ratios – some all the way up to 4. Is it too much to ask for a border-width: as-thin-as-you-can-please; property?

Guaranteeing a CSS transition will actually happen

Let’s say you have a <div> on your page, and you want to slide another <div> on top of it. And, because you’re a good developer, you want to use hardware accelerated CSS to do it. Pretty simple – make a <div> that’s hidden off to the right, then apply CSS that will slide it in. Like so:
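
Something along these lines, say – class names invented for this sketch, with .slider starting off-screen to the right and carrying a transition on transform, and .slide-in translating it back into view:

    // Assumes styles along these lines:
    //   .slider   { transform: translateX(100%); transition: transform 0.5s ease-out; }
    //   .slide-in { transform: translateX(0); }

    var slider = document.createElement('div');
    slider.className = 'slider';
    document.body.appendChild(slider);

    // We want this to trigger a slide in from the right...
    slider.classList.add('slide-in');
    // ...but the element just appears in its final position instead.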

Except, when you run it, you’ll see that the new element appears instantly, rather than sliding in. Why? Well, because although appendChild adds the element to the DOM there and then, the browser doesn’t get around to styling and painting it until its next repaint – which doesn’t happen until after all your JS has run. By the time your <div> is actually painted, the transform CSS has already been removed, so there’s no off-screen starting position to transition from.

So, what to do? Wrapping it in a window.requestAnimationFrame seems to work. Most of the time! Unless there’s a lot of other stuff happening!

As the Mozilla documentation states, requestAnimationFrame actually “requests that the browser call a specified function to update an animation before the next repaint”. So it does the opposite of what we want – when it works, it’s only because a repaint happened to sneak in between anyway. Using a 0ms setTimeout has the same effect. The best chance we have of guaranteeing our slide transition happens is to use a 10 or 20ms delay, and just hope the browser has repainted in the interim.
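
In code, that hope looks something like this, continuing the invented example from above:

    document.body.appendChild(slider);

    // Give the browser a moment to (hopefully) paint the off-screen starting
    // state, so the transition has something to animate from.
    setTimeout(function () {
        slider.classList.add('slide-in');
    }, 20);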

Gross. Mozilla has a MozAfterPaint event that sounds like it would do what we want – except it is only available to add-ons. So, how do we fix this?

The minimal-ui meta tag in iOS 7.1

Update: iOS 8 removed the minimal-ui tag. And no, it didn’t implement the Fullscreen API either. Lovely.

iOS 7.1 brings us yet another viewport meta tag value to use – minimal-ui. You might be able to guess what it does – it hides the majority of the browser chrome when the page loads, much like when you scroll:

With and without minimal-ui

Try it out. It’s a welcome improvement, especially after Safari’s disastrous changes with the release of iOS7, and feels like a tacit confession that shutting off the ability to hide the navigation bar (as you could in iOS6) was a mistake.
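
To opt in, you add minimal-ui as an extra value in the existing viewport meta tag – something like this:

<meta name="viewport" content="width=device-width, minimal-ui">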

It also fixes what is easily my least favourite part of the iOS 7 Safari UI: stealing tap events that happen at the bottom of the screen to show the bottom navigation bar.

With minimal-ui, the only way to restore the bottom navigation bar is to tap the address bar at the top.

But…

This feels a little weird. Users are now going to have different UIs depending on what meta tag the site does or doesn’t have. If users can’t rely on tapping the bottom of the screen to bring up the bottom bar, why have that functionality at all? Surely it would be better to be consistent.

It’s also curious that Apple went to these lengths to create a new meta tag while still not supporting the JavaScript Fullscreen API. Many of the people looking to hide the browser UI are making interactive experiences like games, and being able to go full screen would be even better than minimal-ui – as well as being an actual cross-platform solution.

For now we’ll have to throw the Fullscreen API on the pile marked “Please, Apple. Please”, along with WebRTC and WebGL. But the minimal-ui meta tag is at least a start.

WebRTC DataChannels or: How I Learned To Stop Worrying And Flap My Arms

I had an idea. A stupid idea.

I had discovered that mobile browsers have JavaScript APIs for accessing accelerometer and gyroscope data and wanted to play around with the concept – I hadn’t seen them used in many places. So naturally my thoughts turned to creating a version of Flappy Bird where you paired your phone and computer together, then used the phone as a Wii-like controller to flap your arms.

Now, hear me out – it made sense. Sort of. After the meteoric rise and disappearance of Flappy Bird there were dozens of open-sourced clones out there. And a ‘flapping’ motion seemed like it would be relatively simple to detect. But there was a problem. How do I pair the computer and phone together? My first thought was to use WebSockets – so off I went. One npm install later I had socket.io installed on a server. Pretty quickly I had a system set up where the ‘host’ (that is, the big-screen device) is assigned a random number, and the ‘client’ (phone) prompts the user to enter it. Then it just takes a simple chunk of code to get them talking:
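
Stripped down to its essentials – with the room codes and event names invented for this sketch rather than lifted from the real project – the server side of that pairing looks something like this:

    // server.js – socket.io matchmaking between the big-screen host and the phone
    var io = require('socket.io')(8000);

    io.on('connection', function (socket) {

        // The host asks for a code to show the user.
        socket.on('host', function () {
            var code = String(Math.floor(Math.random() * 90000) + 10000);
            socket.join(code);
            socket.emit('room-code', code);
        });

        // The phone joins the same room by typing that code in.
        socket.on('join', function (code) {
            socket.join(code);
        });

        // Relay accelerometer readings from the phone to the host.
        socket.on('flap', function (code, reading) {
            socket.broadcast.to(code).emit('flap', reading);
        });
    });

The phone side just emits ‘flap’ from inside a devicemotion handler, and the host listens for the relayed event and flaps the bird.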

So far, pretty simple. Try it out! …and get very, very frustrated very quickly.

The latency is too damn high

You flap your arms. Anything up to a second later, the bird might do the same. So, why is it so slow? This is where WebSockets shows its weakness – it still relies on a central server to post messages between clients. And that central server is in Amazon’s data centre in Northern Virginia. Even from my desk in New York, the route there isn’t exactly simple:

I dread to think what it’s like in Europe. A better solution was needed. I started googling “peer to peer websockets” and discovered a Stack Overflow question that led me in the direction of WebRTC.

But WebRTC is for webcams

I’d read about WebRTC before, in the context of replacing Flash and Skype for video chats. In my ill-informed way, that’s what I thought the “communication” part in “real time communication” meant. But no – it also has a capability called DataChannels, which are exactly what they sound like: peer to peer data connections. You can do a variety of things with them, like sharing files or instant messaging, but let’s focus on the very important goal of making this arm flapping game more responsive.

“Marco”

So, a utopian server-less future? Unfortunately not. WebRTC’s peer to peer communication can do many things – but finding other peers is not among them. So we still need a server to match the client with the host – just like we’re doing with WebSockets.

For this, I turned to PeerJS – a combination of a node-powered server and a client-side JS library that wraps the WebRTC APIs. So, at this point I have both socket.io and PeerJS running on my server, and the socket.io and PeerJS client libraries on the client. Feels like a waste. So we should get rid of socket.io now, right?

“Polo”

Wrong. For two reasons:

  1. WebRTC DataChannel browser support is not great. Especially on mobile – Apple deserves a lot of shaming for not supporting WebRTC at all.
  2. Peer to peer connections are tricky. Maybe you’re behind a firewall, or maybe there’s a NAT in the way. With WebRTC you can use a TURN server as a proxy between two clients, but in our case we’re already doing that with WebSockets.

So, we’ll keep WebSockets as a backup. Negotiate the pairing with WebSockets, then ‘elevate’ to WebRTC if both clients are capable and accessible – as demonstrated in a pretty awful set of diagrams.

There’s an added benefit here: the server is only used to create connections. Once that’s done, the clients disconnect, freeing up server resources for other players. If at any point in that process the WebRTC route fails, we just keep WebSockets open and communicate through that.

Our code ends up only slightly more complex than the WebSocket-only version:
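
Here’s a sketch of the host side of that ‘elevation’ – the names are again invented for the example, with socket, roomCode and flapBird standing in for whatever your existing WebSocket code already has:

    // Pair over socket.io first, then try to upgrade to a WebRTC DataChannel.
    var peer = new Peer({ host: 'example.com', port: 9000, path: '/peerjs' });

    peer.on('open', function (peerId) {
        // Tell the phone our PeerJS id over the existing WebSocket channel.
        socket.emit('peer-id', roomCode, peerId);
    });

    peer.on('connection', function (dataConnection) {
        // The phone managed to open a DataChannel – switch to it and free up the server.
        dataConnection.on('data', flapBird);
        socket.disconnect();
    });

    // Until (and unless) that happens, keep listening over WebSockets.
    socket.on('flap', flapBird);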

So, try it out. I think you’ll prefer it. Faster reactions, fewer server resources used, happier users. Unless they’re using an iPhone, of course.

Postscript: The Source Code

If you want to take a peek under the hood at FlappyArms, all the code is on GitHub. It’s really messy for the time being (being a two-day hack project and all), but I’m still adding features to it, and I hope to get it tidied up along the way.

How many people are using your site offline?

Back in the mists of the mid-to-late 2000s, life was simple. Users had a keyboard and a mouse. They had a screen that was wider than it was tall and usually had a resolution of at least this by that. And they were online. Of course they were – how else would they load your page?

Nowadays nothing is sacred. People browse on their phones, turn them sideways, shake them, swipe, pinch, tap and drop them in the toilet. You have to accommodate it all (well, perhaps not the last one), but nothing is trickier than connectivity. Most of the time users are online. But sometimes they’re offline. I saw a guy on the (underground, offline) subway today, paging through some tabs he’d preloaded onto his iPhone. His experience wasn’t great – he tried loading up a photo from its thumbnail, but the full-size version wasn’t cached offline. He was looking at some related links, but couldn’t tap any of them. He closed the tab and moved onto the next one.

Now, I know what you’re thinking (apart from “wow, you creepily stare at people’s phones on the subway?”) – he should just download an app! Offline storage, background syncing – it’s the ideal solution. Except he wasn’t using one. Maybe it’s ignorance. Maybe he doesn’t like apps. Either way, we’re in the business of catering to users, not telling them that they’re wrong, so it got me thinking – how many people do this? Should I worry about this? I have no idea.

Getting an idea: the OfflineTracker

So, let’s track this. Where to start? Well, the good news is that browsers have implemented some new events on the window object – online and offline. They’re far from perfect. The only truly reliable method is to regularly poll a remote resource to see if you can reach it – like offline.js does. However, firing up a cell data connection every 30 seconds will drain a phone’s battery, and is A Bad Thing. So we’ll make do. I made a quick gist:

https://gist.github.com/alastaircoote/9350466

(in CoffeeScript, see it in JS if you like) with a concept for this tracking. It’s more of a proof of concept than production-ready code, but the general flow is this:

  1. User goes offline. We record the current date and time as the ‘start’ of our track, and store it in localStorage.
  2. User comes back online. We stop tracking, update the end time to be the actual end time, and run the function provided to send this data wherever we want it to go.

Now, there are a few holes in this. So, we also do the following (there’s a rough sketch of the whole thing after this list):

  • Update the ‘end of tracking’ time every 30 seconds. In theory we should be able to catch any and all events that would signify the end of tracking, but we can’t (what if the browser crashes? What if the user turns off their phone?). So every 30 seconds we update the ‘end’ of the tracking, and save to localStorage. If all hell breaks loose, our numbers will only be up to 30 seconds off.
  • Hook into the Page Visibility API. This gives us an event that tells us when the user moves away from our page (usually by changing tabs). This is pretty crucial because it’s going to stop us tracking time when we’re offline and in an inactive tab.
  • Provide a callback to the save function. We’re dealing with bad connectivity – we can’t guarantee that our data will get saved correctly. So we don’t delete our tracking data until the function runs the callback provided.
  • Check on page load for any saved data. So when subway guy views your page offline, closes the tab and moves onto the next thing you can still recapture that data the next time he visits your site.
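
Putting all of that together, a rough sketch of the whole thing – not the gist itself, just the general shape of it in plain JavaScript – looks something like this:

    function offlineTracker(save) {
        var session = null;

        function persist() {
            localStorage.setItem('offline-session', JSON.stringify(session));
        }

        function start() {
            if (session || navigator.onLine) { return; }
            session = { start: Date.now(), end: Date.now() };
            persist();
        }

        function stop() {
            if (!session) { return; }
            session.end = Date.now();
            persist();
            var finished = session;
            session = null;
            // Don't delete the stored copy until the caller confirms the data
            // actually made it somewhere - the connection may still be flaky.
            save(finished, function () {
                localStorage.removeItem('offline-session');
            });
        }

        window.addEventListener('offline', start);
        window.addEventListener('online', stop);

        // Heartbeat: if the browser crashes or the phone is switched off,
        // the recorded end time is at most 30 seconds out.
        setInterval(function () {
            if (session) {
                session.end = Date.now();
                persist();
            }
        }, 30000);

        // Page Visibility API: stop counting while the user is off in another tab.
        document.addEventListener('visibilitychange', function () {
            if (document.hidden) { stop(); } else { start(); }
        });

        // Pick up anything left over from a previous, interrupted visit.
        var leftover = localStorage.getItem('offline-session');
        if (leftover) {
            save(JSON.parse(leftover), function () {
                localStorage.removeItem('offline-session');
            });
        }
    }

You’d call offlineTracker with whatever save function suits you – an XHR to your analytics endpoint, say – and only fire its callback once you know the data actually arrived.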

So, now you have the start and end times of user offline browsing sessions. What you do with it is up to you – maybe only 0.5% of your users access your site this way and you shouldn’t care at all. But maybe your user base consists entirely of cave-dwellers. Either way it would be good to know, right?

The undecided fate of local news

Henry Blodget bought a newspaper today. He bought it because it contained a story he couldn’t read elsewhere- as he quite rightly states, this whole “write original content and get people to pay for it” concept is an integral part of the future of the news industry. The problem is, how often does that paying part happen? Henry is not going to buy a subscription to the Inquirer and Mirror- he only wanted to read one story. Local newspapers can’t survive on people buying one copy every three months.

So, a counter example, of sorts. A few years ago I lived in Victoria, British Columbia and would frequently walk past the offices of the Times Colonist newspaper, dreaming of a day when I could work there. So, when I visited the city again recently I was disappointed to find that my former dream desk probably isn’t even owned by the Times Colonist any more- their ongoing difficulties have forced them to lease out half of their office.

Like many struggling media organisations, the Times Colonist has opted to put a paywall on their online content. While there aren’t any official figures to gauge success, my conversations with friends in the city (a demographic that admittedly skews younger than Victoria’s average age) suggest that people aren’t buying. Put simply, they don’t think that the Times Colonist provides $9.95-worth of content each month. And indeed, how can it? Newspapers like the New York Times, Washington Post and the Globe and Mail offer unparalleled coverage from international news desks, recipes, book reviews and much more- for just a few dollars extra.

But they tend not to cover current events on Vancouver Island, Canada. People want and need that coverage. How can local newspapers stay alive? Indulge me while I describe an entirely outlandish and unrealistic concept.

Few local newspapers write their own international (or even national) coverage- for instance, the Times Colonist gets its international stories from the AP and national ones from The Canadian Press. But that’s never really highlighted, because you don’t really want to show your readers that the paywalled content they are reading is available for free elsewhere.

So let’s turn that on its head. Don’t syndicate content that is available for free. Let’s have the paywalled big guys- the New York Times, the National Post, the Washington Post, whoever- make their content available for syndication. Have them host it if you like, but more importantly, keep the branding. Show your readers that they’re getting Pulitzer-winning coverage as part of their subscription to the local newspaper. In this model few people outside of the Beltway have a subscription to the Washington Post, and few outside of New York have a subscription to the Times. People in Victoria subscribe to the Times Colonist, with national news presented by The National Post (by a cruel twist of fate, the owners of the recently-paywalled National Post used to also own the Times Colonist- you have to imagine it would have been a very straightforward syndication deal if they still did).

The change to the local newspapers would be relatively minimal- they retain their existing relationships with customers and the city that surrounds them. They would have to charge more for subscriptions, but would be providing far more value along with it. The bigger organisations, on the other hand, would be facing a sea change. Suddenly the majority of their money would come from business to business transactions, not business to consumer- the impact of such a change can’t be overstated. And while they would still sell a complete newspaper/online offering to local customers, the vast majority of readers would consume content in a much more piecemeal fashion (though I’d argue this is already happening online today).

Doing something like this would involve a large newspaper risking everything by going all-in on an utterly unproven model – and I don’t think we’re at that level yet. And any newcomer faces a chicken and egg scenario- no-one is going to buy your content until you have a brand to back it. But you can’t build a brand without people buying your content.

So this post is really just an idle thought exercise. But the question remains- local newspapers are a vital part of the news industry, so how do they reinvent themselves and stay with us?

Charts can say anything you want them to

I just read an article on Business Insider that charted the downfall of MSNBC. It was a fascinating read, but I couldn’t help but get distracted by the charts they used, especially given that the article title was:

The Stunning Downfall Of MSNBC In Five Charts

Powerful stuff. The problem is that the charts are deceptive. For example, the first one:

Oof. MSNBC has lost over half its audience. Except, wait, it hasn’t. The chart’s vertical axis starts at 48,000 and only covers another 18,000 above that. Here’s what that chart should look like:

Every chart on the page is the same (though to a lesser extent) – the only exception was the chart showing Fox News’ huge lead in audience, which correctly starts at zero:

I’m not trying to attribute malice here- as I was putting that example chart together, I was surprised to see that Google Docs does this automatically, so it could well be a simple mistake. But consider this a reminder: always check your axes, otherwise you might end up misleading your readers.

Crunching subway data- a New Yorker’s busiest stations

There are many reasons to complain about the subway system here in New York. It’s underfunded, the air conditioning breaks, and if you’ve ever tried relying on the G line you’ve probably ended up with a deep, serious commitment-phobia. But there are many bright spots of the subway system, and as a tech-head developer I’d like to draw your attention to one in particular- data.

The MTA makes a ton of data available. The entire subway and bus systems are available as GTFS feeds, allowing you to set up your own instance of OpenTripPlanner for all your subway routing needs- something I used in the aftermath of Sandy to set up an emergency trip planner (and OpenPlans then used to create some great heatmaps). It has data on each and every Metrocard swipe in the city, and, er, Pantone colours for each of the subway lines. It also has a crazy amount of data on each subway turnstile in the city, which is what I’ve been playing around with lately.

The most popular stations in New York are already known- the MTA itself has them listed. Unsurprisingly, Times Square tops the list by a large margin, with every other large station following closely behind. They don’t share how they came to that number, but I assume that it includes everyone who travels on the subway- commuters, tourists, even the Mariachi guys traveling from train to train. For a heatmap side project I’m currently working on, I wanted to know what the most popular weekday commuter stations are.

So I downloaded some turnstile data from mid-January to late April of this year to use as my sample set. The format the MTA uses is… weird, to say the least. The basics are there- it doesn’t record every turnstile turn, but rather keeps cumulative totals during the day- 3am, 7am, 12pm, and so on. For some reason it has eight repeating sets of columns that should really be rows, so I threw together a quick and dirty node.js script to flatten these out (and merge all my CSV files into one) and imported the data into a Postgres database.
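
The script itself is nothing special – something in this spirit, assuming (from memory, so treat the column offsets as illustrative) a few identifying columns followed by repeating groups of date, time, description, entry count and exit count:

    // flatten.js - merge the turnstile CSVs and unroll the repeating column groups
    var fs = require('fs');

    var rows = [];

    process.argv.slice(2).forEach(function (file) {
        fs.readFileSync(file, 'utf8').split('\n').forEach(function (line) {
            var cols = line.trim().split(',');
            if (cols.length < 8) { return; }

            // The first three columns identify the turnstile; the rest repeat
            // in groups of five, one group per reading.
            var turnstile = cols.slice(0, 3);
            for (var i = 3; i + 5 <= cols.length; i += 5) {
                rows.push(turnstile.concat(cols.slice(i, i + 5)).join(','));
            }
        });
    });

    fs.writeFileSync('flattened.csv', rows.join('\n'));

Running node flatten.js turnstile-*.csv leaves you with one big flattened.csv to import into Postgres.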

First off, I needed to get my commuter totals. I did this by creating a view around an exceptionally messy SELECT statement, which took the first row of exit data available at or after 3am, matched it up with the first result after 11am, and excluded all weekend results.

As I said, awful SQL. If anyone has any suggestions for improvements I’d love them. But it worked. For each turnstile, I now have the number of exits taken during the morning rush hour(…ish). Unfortunately, I quickly realised that I’d need to clean the data up- it appears that at certain points, turnstiles just go absolutely haywire and you end up with -200000 exits on one day, which can really mess with your totals. I discovered that I could easily chop this data out just by calculating how far that day’s result deviated from the turnstile’s overall median. The anomalous results were so different that I could set the cut off point at 10x the median and still exclude them.
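
The cleanup logic itself fits in a few lines – sketched here in JavaScript, though the same idea translates into a WHERE clause easily enough:

    // Drop any day that strays more than ten times the turnstile's median exit
    // count away from that median - which catches the -200000 days too.
    function median(values) {
        var sorted = values.slice().sort(function (a, b) { return a - b; });
        var mid = Math.floor(sorted.length / 2);
        return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
    }

    function removeHaywireDays(dailyExits) {
        var med = median(dailyExits);
        return dailyExits.filter(function (exits) {
            return Math.abs(exits - med) <= med * 10;
        });
    }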

With that done, it was only a short step to aggregate the data up to the station level, and discover the most popular commuter stations, or, The Stations With The Most Turnstile Exits During Peak-ish Hours on Weekdays:

  1. 14th St Union Square
  2. 42nd St Grand Central
  3. 42nd St Times Square
  4. 34th St Penn Station
  5. Fulton St
  6. 47-50th St Rockefeller Plaza
  7. 34th St Herald Square
  8. 23rd St (6)
  9. Chambers St
  10. 59th St – Columbus

So while many of the results are similar to the overall station popularity, there are some definite differences- Union Square jumping to the top being one of the most noticeable. Be careful not to read too much into these numbers- as I said, they’re based on a limited dataset of a few months. And I’d welcome any corrections on my working from people smarter than myself!

Customizing your iOS webapp icon- per user

I threw together a little mobile subway-themed webapp hack last weekend, called Subwalkway. I had to make a quick icon for it, and my immediate thought was to make it look like an NYC subway route sign. Luckily for me, the W line was decommissioned a few years ago, so it’s free for me to steal. But what colour to use? Or should I make some hideous beach ball of all of them? No- that’ll just remind me how slow my Macbook is these days. But I got to thinking- maybe I don’t need to choose. The icon is specified with a <link/> tag after all, so why don’t I just randomise it? So I did:

Gotta catch 'em all!

The logic is very simple/stupid. On page load, I run a Math.random(), and use that number to choose one of the items in an array of file names. Set the href on the <link rel="apple-touch-icon-precomposed" /> tag, and we’re done.
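
The whole thing, more or less – with the file names invented for the example:

    var icons = ['w-red.png', 'w-green.png', 'w-blue.png', 'w-orange.png'];
    var chosen = icons[Math.floor(Math.random() * icons.length)];

    var link = document.querySelector('link[rel="apple-touch-icon-precomposed"]');
    link.setAttribute('href', chosen);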

Obviously my example is a little pointless, but it serves as a proof of concept- iOS will respect whatever icon metadata changes you make after page load. So, there are some more reasonable applications out there- if I expanded the app to Boston, say, I could modify my app icon to better fit the Boston T style. Or if your app works in numerous countries/cities, you could make an Apple Maps-style icon- only with local landmarks.