The Store without Ember Data

Developing with Ember.js, we are used to dealing with the Store service, which many of us associate with Ember Data. It was while I was doing some research on how Ember integrates with data layers that I realised that the Store is part of the integration glue that exists in Ember even when Ember Data is not present. Moreover, you can create a light integration with a data layer by simply hooking into Ember's Store interface.

Here's a little guide for you to experiment with the possibilities.

Ember's minimal Store interface

First, to make sure we are not using Ember Data accidentally, let's remove it from our app. Simply remove the appropriate line from your package.json and rerun npm install (or yarn install):

diff --git a/package.json b/package.json
index 5ba44f8..c536b45 100644
--- a/package.json
+++ b/package.json
@@ -32,7 +32,6 @@
     "ember-cli-shims": "^1.2.0",
     "ember-cli-sri": "^2.1.0",
     "ember-cli-uglify": "^2.0.0",
-    "ember-data": "~3.1.0",
     "ember-export-application-global": "^2.0.0",
     "ember-load-initializers": "^1.0.0",
     "ember-maybe-import-regenerator": "^0.1.6",

So now we know there's no Ember Data. Next, let's say we have declared a route as follows:

/// app/router.js
this.route('users', { path: '/users/:user_id' });

As well as this template:

{{!--- app/templates/users.hbs ---}}
<h1>User #{{model.id}}: {{model.email}}</h1>

If we do not provide a route module, Ember will provide one for us, using a default implementation. This will see the :user_id parameter last in the path, and from there it will figure out that we are expecting to retrieve a model called user, whose ID is given in the URL. Roughly, it will work like this:

/// app/routes/users.js
import Route from '@ember/routing/route';

export default Route.extend({
  model(params) {
    return this.store.find('user', params.user_id);
  }
});

Note that this uses find instead of findRecord! Why is this? Well, there's a difference between Ember Data's interface and Ember's protocol for interacting with a data layer.

Ember can integrate automatically with a data layer in this small way, providing a default route module and figuring out the model name and record id from the path. From here, a clever data layer can provide the appropriate integration hooks, and Ember will use them. In this case the hook is a store service which provides a find method.

But why find and not findRecord? Because Ember and Ember Data are independent. A long time ago it was established that this method would be called find, and initially Ember Data used find instead of findRecord. Eventually Ember Data moved on to a new interface, bringing the methods that we use nowadays. However, Ember didn't need to follow suit; besides, other data layers were already implementing this interface, and there was no point in breaking their integrations. Ember Data does provide a find method which simply translates to findRecord internally, and all works as expected.

The default store

But anyway, as I was saying, somewhere in our app we have the following code: this.store.find('user', params.user_id);

When Ember Data is not present, Ember provides a default store. This store assumes that we have defined our models (in this case, a user model) in files living at app/models. This would be an example:

/// app/models/user.js
export default {
  find(id) {
    return new Promise(function(resolve) {
      resolve({
        id: 1,
        email: "[email protected]",
      });
    });
  },
};

(To be precise, Ember doesn't care about the location and name of this file, or even if there's a file. The important part is that the object above is registered as model:user in the dependency injection container, but that's a story for another day.)

This initial model is not very useful, returning a hard-coded object. This second approach would work with a simple REST API:

/// app/models/user.js
export default {
  find(id) {
    return fetch(`/users/${id}`);
  },
};

This approach is promising, but it has one big issue: we cannot use dependency injection in objects that we instantiate this way. For example, if instead of fetch we wanted to use the ajax service, or if we wanted to grab configuration details from another service, we'd be out of luck.

A custom store

To allow our custom models to play with the injection container, the simplest way might be to do just like Ember Data does, and provide our own store service.

This is an example of a custom store service:

/// app/services/store.js
import Service, { inject } from '@ember/service';
import { pluralize } from 'ember-inflector';

export default Service.extend({
  ajax: inject(),
  find(model, id) {
    const resourcePath = pluralize(model);
    return this.ajax.request(`/${resourcePath}/${id}`);
  },
});

Then, to get this custom store injected by default in our routes, we can register it with the container in an initializer:

/// app/instance-initializers/setup-store.js
export function initialize(application) {
  application.inject('route', 'store', 'service:store');
}

export default { initialize };

And voila: your routes, default or otherwise, will be using your custom store.

This is the same approach used by Ember Data which, at the time of writing these lines, does exactly this: it injects its own store into routes, controllers, etc., which we developers then use transparently. See addon/setup-container.js in the Ember Data source code to check for yourself.




Pluralization (and singularization)

I found myself yak-shaving pretty deeply into a problem today, wondering why Ember was refusing to pluralize the word "beta" as "betas".

Ultimately I wound up at the list of pluralization rules for ember-inflector. Among them, I found the culprit:

// ...
inflect.plural(/(buffal|tomat)o$/i, '\1oes')
inflect.plural(/([ti])um$/i, '\1a')
inflect.plural(/([ti])a$/i, '\1a') // <-- This bad fellow here is the problem
inflect.plural(/sis$/i, "ses")
inflect.plural(/(?:([^f])fe|([lr])f)$/i, '\1\2ves')
// ...

According to that rule, any word ending in "-ta" or "-ia" remains unchanged when pluralized. Initially I was a bit confused by that, but then I looked up examples of such words. Many of them are Latin plurals, such as "blastomata", "bacteria", "branchiata", "media", "data", "trivia", etc. I can see why one would not re-pluralize them in English.

However, there are many other words that we use in English that are caught in that rule and I would pluralize, such as "phobia", "fashionista" (and "emberista", "pythonista"), "magnolia", "pitta", "paranoia"… and of course "beta" and "delta". So annoying!

Fortunately, the library does offer a way to add your own pluralization rules, so I ended up just doing that for the words I needed.
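To illustrate how such rule lists behave, here is a toy pluralizer in the same spirit. This is a deliberate simplification for illustration, not the actual ember-inflector code:

```javascript
// Rules are tried in order; the first match wins.
const rules = [
  [/([ti])a$/i, '$1a'],   // the culprit: leaves "-ta"/"-ia" words unchanged
  [/([ti])um$/i, '$1a'],  // e.g. "medium" -> "media"
  [/$/, 's'],             // fallback: just append an "s"
];

function pluralize(word) {
  for (const [pattern, replacement] of rules) {
    if (pattern.test(word)) {
      return word.replace(pattern, replacement);
    }
  }
  return word;
}

console.log(pluralize('beta'));   // => "beta" (unchanged: caught by the first rule)
console.log(pluralize('medium')); // => "media"
console.log(pluralize('user'));   // => "users"

// Adding a more specific rule ahead of the generic ones fixes "beta":
rules.unshift([/beta$/i, 'betas']);
console.log(pluralize('beta'));   // => "betas"
```

Custom rules work the same way in the real library: they are consulted before the defaults, so the specific case wins.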

It's worth noting that this is not just an Ember gotcha. The ember-inflector package is a direct port from the Ruby ActiveSupport::Inflector, with the same default pluralization rules, so you get the same results if you try this in, say, a Ruby on Rails codebase:

include ActiveSupport::Inflector
pluralize("beta") # => "beta"

And it's equally solvable by configuring the library:

inflections do |i|
  i.plural "beta", "betas"
end
pluralize("beta") # => "betas"

If you are asking "what are these plural and inflection things?", I recommend that you read this article by Vaidehi Joshi, of basecs podcast fame: Inflections Everywhere: Using ActiveSupport Inflector. It explains why our frameworks need to know some grammar, how they go about it, and why I'm not going to get "betas" added as a special case in these default rule sets.


Is a number within a range?

The other day I saw this on a pull request:

start(appointment) <= now && end(appointment) >= now

I find this a bit jarring to read and avoid it in my code. Instead I prefer something like the following:

start(appointment) <= now <= end(appointment)

This form, closer to the mathematical notation, reads better to me. It's easier to see that now needs to be between two limits. Some languages, such as Python, allow this syntax.

But many other languages don't allow this, so I go for the next best thing:

start(appointment) <= now && now <= end(appointment)
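The repeated comparison can also be hidden behind a small, descriptively named helper (the name isBetween is mine):

```javascript
// Reads like the mathematical predicate: min <= value <= max.
const isBetween = (min, value, max) => min <= value && value <= max;

console.log(isBetween(9, 12, 17)); // => true
console.log(isBetween(9, 20, 17)); // => false
```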

And now… it was only while writing this that I realised: sometimes there is another great alternative, modelled after the following mathematical predicate:

now ∈ [start(appointment), end(appointment)]

This can be translated directly to some languages too, such as Ruby with its ranges (method names here are illustrative):

(appointment.start_time..appointment.end_time).cover?(now)

Map, then reduce

Added the Appendix section on .

I've been thinking about map and reduce lately. Nothing too erudite though. Something that I have realised is that it's sometimes too easy to conflate the two into just the reduce, when instead it might be clearer to have a very dumb map followed by a very dumb reduce.

Here's a simple example:

['a', 'b', 'c']
  .reduce((acc, k) => ({...acc, [k]: []}), {});
// => { a: [], b: [], c: [] }

It's not obvious at first, but that reduce is doing the job of a map as well as its own:

  1. Map: it generates new data for each element of the list (the empty arrays []).
  2. Reduce: it aggregates the data into a new piece of data (the resulting object).

These can be separated as follows:

['a', 'b', 'c']
  .map(k => [k, []])
  .reduce((acc, [k,v]) => ({...acc, [k]: v}), {});
// => { a: [], b: [], c: [] }

This is longer to write, but I think it makes the reduce more readable by turning it into a common idiom, a "pairs to object" reduction if you will. With that in mind you can focus separately on the mapping function and possibly understand better what's being produced.

Functional programming tools can help me explain better. For example, using Ramda I could implement the above as follows:

const createLists = R.compose(
  R.fromPairs, => [k, []]),
);

createLists(['a', 'b', 'c']);
// => { a: [], b: [], c: [] }

With the use of R.fromPairs in this example, we have turned the "pairs to object" idiom into a single, self-describing function invocation. Now that's easy to read. Compare to the Ramda version of the initial code:

const createLists = R.reduce((acc, k) => ({...acc, [k]: []}), {});
createLists(['a', 'b', 'c']);
// => { a: [], b: [], c: [] }

I think now the separation between map and reduce becomes more apparent, and so does the benefit of enforcing it.


After I first published this post, I discussed this topic with my friend Rosario. He reminded me that map, filter, forEach, and all those fellows are just special cases of reduce. For example:

const reduce = (f, init, coll) => coll.reduce(f, init);
const map = (f, coll) => reduce((a, b) => a.concat(f(b)), [], coll);
const filter = (f, coll) => reduce((a, b) => f(b) ? a.concat(b) : a, [], coll);

Thinking about this, I reached another conclusion: don't use reduce unless you must. Your library may already provide a function that implements what you need: some other special case of reduce, the same way Ramda's R.fromPairs filled my reduction needs above. Use that instead. It will be easier to read and understand, both by saving you a few braces and brackets and by providing a more descriptive name.

And if you don't have such a reducer at hand, create one. A descriptively named function, tailored to your use case. Wrap the reduce in a named piece of code and use that instead of sticking it in a longer chain that may be already difficult enough to follow. Like that long sentence I just wrote.


Ember Data: URIs for singular resources

: updated with some errors fixed. I did a terrible job the first time around.

NOTE: I use the term "URI" here as I consider it the correct one for the context, despite the API using the term "URL". Maybe I'm just being pedantic, dunno…

For info on the difference between URL and URI, check out

Coming from Ruby on Rails and other similar frameworks, I am used to the concept of "singular resources": resources that we can request from the API without providing a specific id, because it is implied in the request. So for example, these resource URIs:

Ember Data doesn't support singular resources by default: you cannot use findRecord or similar to retrieve one of these out of the box. Fortunately, the provided adapters are easy to override to support these singular resources. Let's see how to do this.

First, a refresher: Ember Data ships by default with two adapters, JSONAPIAdapter (the default) and RESTAdapter, which implement two common protocols for communicating with JSON APIs. If your API communicates differently, both adapters provide hooks that you can override for your own particular case.

If you are going to define your own adapters, you should start by defining an ApplicationAdapter. This will be a common adapter for all your models that you can then override for each specific case. It can just be empty. This is just to provide a common ground for adapters specific to each model. Something like this:

/// app/adapters/application.js
import JSONAPIAdapter from 'ember-data/adapters/json-api';

export default JSONAPIAdapter.extend({
});

This example is actually equivalent to the Ember Data default as it is simply an extension of JSONAPIAdapter. I could have dispensed with the extend call since it's empty, but I like to have it, as I know I'll be adding my custom code. Also, I could have used RESTAdapter instead, if appropriate for my specific API.

Now, let's say that we have an Ember Data model User, and we want to fetch a singular resource of this type from the API. To do this, we'll create a custom adapter so that it retrieves this resource when we do the following:

const singularUser = this.store.queryRecord('user', {singular: true});

The first thing to note is that I am using queryRecord instead of findRecord. You should use findRecord when you know the id of the resource you are requesting. This allows the store to retrieve this record from cache if appropriate. In this case, we do not know the id, and Ember Data's interface specifies that queryRecord is the correct API to use.

Also, we provide the {singular: true} argument. With this we signal to our custom adapter that we want the singular resource. Note that this is not something that Ember Data understands by default, but instead it is a convention between our custom adapter and its client code. I have seen codebases using other words instead of singular, such as me or current. Which one to use is up to you.

Now that we have clarified how we'll use our custom adapter, we'll build it. We can start by extending our ApplicationAdapter, overriding the buildURL hook. This is a method that the adapters use, and it is meant to be customised if we need to. It has several parameters, of which the following are useful to us:

  1. modelName: the name of the model being requested ('user' in our example).
  2. requestType: the type of request being performed, such as 'findRecord' or 'queryRecord'.
  3. query: the query object passed to queryRecord, if any.

With these three parameters, we can tell when our code is requesting a 'queryRecord' for a "singular" resource, for any model type. Here's an implementation:

/// app/adapters/application.js
import JSONAPIAdapter from 'ember-data/adapters/json-api';

export default JSONAPIAdapter.extend({
  buildURL(modelName, id, snapshot, requestType, query) {
    if (requestType === 'queryRecord' && query && query.singular) {
      return '/' + modelName;
    } else {
      return this._super(...arguments);
    }
  }
});

So when the client code is requesting a singular resource, return the URI for it. Otherwise, just call this._super and do as the adapter would normally do. Normally, this will return three types of URIs, depending on the request. For user resources, they will be these:

If you try the code, it should work now. There will be an interesting hitch though: your requests to the singular resource will also include the query parameter ?singular=true. This is the default behaviour of queryRecord: sending the keys and values of the second argument as query params. In most cases, this is probably ok with your API as it will just ignore it. However, if you would rather not see that unseemly addition to the request, you can just do some more adapter overriding.

To do this, I can think of a couple of methods that could be overridden. My preference is sortQueryParams, which is normally used to change the order of the query parameters (something that is rarely necessary). We'd be cheating a bit here, because this method is intended to change the order, and instead we are modifying the query altogether by removing a key/value pair. I think it's still acceptable.

Having said that, this is a possible implementation:

/// app/adapters/application.js
import JSONAPIAdapter from 'ember-data/adapters/json-api';

export default JSONAPIAdapter.extend({
  // ...

  sortQueryParams(query) {
    let newQuery = Object.assign({}, query);
    delete newQuery.singular;
    return newQuery;
  },
});

I'm using Object.assign to make a copy of the original simply because I favour the "immutable" style of programming. I could just delete the key from the input argument and return it again. Do it the way you like best.

Anyway, that's it. The adapters and serializers that Ember Data bundles by default have an extensive number of hooks that you can take advantage of, and their documentation has improved a lot over time. I recommend that you have a look at it.

Still, I actually learned all this by reading Ember Data's source code, before these hooks were so well documented. It's very easy to read, and a few console.log calls in the right places will show you what's actually going on when you interact with the library. Go try yourself, it's a better way to learn. If you find something that is not well documented, that's your chance to contribute ;-)





Keep it simple (express-session vs cookie-session)

If you are starting out a new webapp using Express on Node.js, do not use express-session unless you really know what you are doing. Like: really know what you are doing and why. In any other case, use cookie-session.

Original image by Robbgodshaw

CC BY-SA 3.0 License

The paragraph above may sound disparaging, but that's not where I'm going here. There are perfectly good use cases for express-session. However, it requires more setup than it may appear at first: you will have to integrate it with a database or similar. It may appear to work as a drop-in, but this is only because it defaults to a memory store that won't work in production.

I bring this up because, recently, I was helping out a person who was learning the ropes of web development and was using Express. Sessions were not working correctly, expiring at random. Eventually I realised that they were using express-session, thinking that it would just work after adding the package. Dropping it in favour of cookie-session solved the issue.

Keep it simple. If you are just going through the first few iterations of your new project, working towards an MVP, there's a lot of stuff you do not need. It is tempting to see packages such as express-session, which are very popular on GitHub and very flexible, and think that your project should use them too. However, you should be wary of adding anything that has more options than you need: the extra complexity will slow you down in the end.


Third party files on a custom Debian Live installer

Back in early 2016, I ran into a problem while customising a Debian Live CD that I maintain. I wanted it to include a piece of third-party, proprietary freeware, in a way that didn't break the terms of the license, while keeping the process automated. Ultimately, I found a solution that, while slightly overcomplicated, did the job pretty nicely. I documented it in an article titled Large files on a custom Debian installer.

Earlier this year I revisited this problem and found a much simpler solution that involves fewer moving parts. This is the summary:

  1. Create a directory config/includes.chroot/tmp and put there the files you require.
  2. Make it so that your VCS ignores this directory (eg: add it to .gitignore if you use Git).
  3. Write a config hook that accesses these files at /live-build/config/includes.chroot/tmp, running installers, copying them to the right location, or generally whatever it is you need to do with them.

And that's pretty much it. For a bonus, you could have a script that runs as part of the build process and downloads the files for you. That would help others who use your scripts later (or yourself, when you forget the details a few weeks down the line).

For an example of this, check out the pull request I created in my project. To run the build I use a Makefile, where I added a dependency to show a message with instructions when the third-party files are not in place:


Rebooting machines with Ansible

There are a few resources online explaining how to reboot a machine using Ansible which didn’t work for me. My task would always time out and I had no idea why. Finally I figured it out.

The tasks I was using looked roughly like these:

- name: Restart machine
  shell: shutdown -r now "Maintenance restart"
  async: 0
  poll: 0

- name: Wait for server to come back
  local_action:
    module: wait_for
    host: '{{ inventory_hostname }}'
    state: started
  become: false

The problem here is the use of inventory_hostname. In my inventory, I was referring to my machines by the names they have in my .ssh/config. This works well when invoking Ansible, whose CLI integrates well with OpenSSH. However, it doesn’t work for modules, or at least not for wait_for, which I use above.

After trying some alternatives, I eventually settled for having all the network information on my inventory. This is, declaring ansible_host (and possibly ansible_port) for each entry, instead of relying on .ssh/config. Then I would use ansible_host in the wait_for task to indicate the host.

After some additional tweaking, currently I have a reboot role whose main task looks like this:

- name: Restart machine
  shell: sleep 2 && shutdown -r now "Maintenance restart"
  async: 1
  poll: 0
  ignore_errors: true

- pause:
    seconds: 5

- name: Waiting for server to come back
  local_action:
    module: wait_for
    host: '{{ ansible_host }}'
    port: '{{ ansible_port }}'
    state: started
    delay: 10
    timeout: 60
  become: false # as this is a local operation

Why sleep 2, async: 1 and poll: 0? I have no idea. I have tried a few things and this is the one that appears to work reliably for me. For now, I’m sticking with it, until I understand all this a bit better.


Timetable for (most) any concert in London

I find it funny how concerts in London tend to run like clockwork. So much so that they rarely stray from this schedule:

19:00 Second support starts
19:30 Second support ends
19:45 Main support starts
20:30 Main support ends
21:00 Headliner starts
22:30 Headliner ends

Doors normally open at 7, so it's not unusual that you have missed most of the first act by the time you enter the venue. Fortunately, some venues do open earlier (6pm) if there's going to be a second support.

In any case: super handy to plan around.


WARNING: server ‘gpg-agent’ is older than us

I ran into the following error this morning, using the Pass utility:

$ pass -c foo/bar
gpg: starting migration from earlier GnuPG versions
gpg: WARNING: server 'gpg-agent' is older than us (2.0.30 < 2.1.21)
gpg: error: GnuPG agent version "2.0.30" is too old.
gpg: Please make sure that a recent gpg-agent is running.
gpg: (restarting the user session may achieve this.)
gpg: migration aborted
gpg: decryption failed: No secret key
There is no password to put on the clipboard at line 1.

From what I’ve seen online, you can also see this problem using GPG by means other than through Pass. In any case, I couldn’t find a fix, and I was starting to be worried I couldn’t access my passwords any more, or at least not easily.

Fortunately, I could figure out what was going on by reading the error closely. Looks like there was a version of gpg-agent running that was different from the one expected by GnuPG. Also, some migration was expected to occur between the old and the new version. Therefore I had to shut down the old version so that this migration could take place:

$ gpgconf --kill gpg-agent

Then I ran my command again. This time the underlying gpg command ran its migration successfully, allowing Pass to resume as if nothing had happened:

$ pass -c foo/bar
gpg: starting migration from earlier GnuPG versions
gpg: porting secret keys from '/Users/pablobm/.gnupg/secring.gpg' to gpg-agent
gpg: migration succeeded
Copied foo/bar to clipboard. Will clear in 45 seconds.


Detect if a number has decimals

You have a float variable, and you want a quick boolean expression to tell if the value has any decimals. To put it differently, to tell whether a float variable is currently holding an integer value or not.

Divide by 1, check if the remainder is 0:

number % 1 == 0 # => true if the value is an integer, false otherwise

Which makes perfect sense if you think about it. If you divide 9.5 apples among 5 children, the remainder is what you couldn't split: 4.5. If you divide the apples among 1 child, in a strict sense there's still 0.5 you can't "split". The versions with negative operands follow from there.
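In JavaScript, for instance (the helper name is mine):

```javascript
// True when the number has a fractional part.
const hasDecimals = (number) => number % 1 !== 0;

console.log(hasDecimals(9.5));  // => true
console.log(hasDecimals(4));    // => false
// In JavaScript, % takes the sign of the dividend, so negatives work too:
console.log(-2.5 % 1);          // => -0.5
console.log(hasDecimals(-2.5)); // => true
```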


Messing up at work

Many years ago, I wrote (bad) code that sent a single marketing email, repeatedly, to the addresses on a subscriber list. Imagine your inbox filling up with copies of the same email because some idiot got their code wrong. On the bright side, I realised what I had done quickly enough that most recipients only received 4 copies of the email (I think). At the time, I had 3 years of experience as a backend web developer.

How did you even manage that?

These were the ingredients:

So there you are. Email sending job runs, fails halfway through the list, tries again from the very beginning. As a result, all addresses that got an email will get another one. Neat (not).

What should have happened instead (technically)

Some techniques that would have avoided this spring to mind:

And of course, write specs/tests to make sure things are working the way you expect. All pretty reasonable, really.
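As an example of one such technique, here is a sketch of a resumable job that records each delivery as it goes, so a retry does not start from the beginning. All names here are hypothetical, not the actual system:

```javascript
// The job skips addresses already recorded as sent, so re-running it
// after a crash only emails the remaining subscribers.
async function sendCampaign(subscribers, mailer, sentLog) {
  for (const address of subscribers) {
    if (sentLog.has(address)) continue; // already done in a previous run
    await mailer.send(address);
    sentLog.add(address); // record *after* a successful send
  }
}
```

With this in place, a failure halfway through the list leaves the log populated up to the failure point, and the retry carries on from there instead of re-emailing everybody.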

Whose fault was it?

It’s natural to feel bad when you mess up building software. However, software engineering is fraught with difficulty and cannot be one person’s job. When this happened, I had written all code myself as I was pretty much the Software Development Department at that job. There were no standup meetings, pair programming or code review because there wasn’t anyone I could have them with. I was told what was needed, and I implemented and deployed it. This made me a single point of failure.

When individuals become a single point of failure, the mistake has already been made. Humans are not perfect and will make mistakes.


Fortunately for me, my line manager reacted pretty well and was understanding. Other people, directly affected by this, were less impressed, but I didn’t need to worry about that too much.

Now, it’s easy to invoke the ghost of Impostor Syndrome, and say that we needn’t worry about our ability and should simply keep going. It worked out for me at the time, but I wonder what could have happened, or what has and will still happen, to other people in similar circumstances but in a less favourable environment.

Also: a moment to check my privilege. I’m a white male. Consider how many of my lot out there may have screwed up in a similar fashion with no repercussions. Consider how many in a different demographic may have been penalised after a similar event, because of a bias unconscious or otherwise.

What this can teach us

It’s not your fault

If you get in trouble for something like this, start looking for a new job: that environment is not conducive to your growth as a professional or a person. Having said that, I understand this is easier said than done. Not everyone enjoys circumstances where making this jump is comfortable, or even possible, regardless of their ability.

You might beat yourself up about it. Don’t. Share it with other people, both your loved ones and peers in the industry (who may be one and the same!). A local tech meetup can be a good place to exchange experiences and find that other people also have their own botch-up stories.

Find out how others would deal with this

Experiences like this make for a good interview question, for both sides of the conversation:

Fix the process, not the people

If something like this happens on your watch, ask yourself: how did the development process fail? What should change to avoid a repeat? What can the team learn from this experience?

In closing

You could read this as “There ain’t no such thing as individual failure in a software development team”. TANSTAIFIASDT. Catchy!


Your VPN can be an attack vector

If you use a VPN, be aware that it may not be filtering inbound ports. Effectively, this opens up your computer to port scanning and attacks on vulnerable network services. In this scenario, it doesn’t matter if you are behind a NAT: the VPN virtually grants you a public IP address reachable by all.

A few months ago I had a simple HTTP server open on port 8000 of one of my machines. At some point, I noticed that my logs listed requests from machines outside my network. These are some examples: - - [21/Oct/2016 22:08:12] code 400, message Bad request syntax ('\xca\xe6f\x89\xc4\xa8\xbc\xc6\x8d^\x9b\x14\xa1X\xb3x\xa3\xf9o`9\x0c\xd6\xdcY_\xee\x1d\xec4\xe9\x8d4\xa5\xb7\x98{6\xb5\x18\xe0J\xee\x1d\xfcFWy\x1650\xa4H\x10\xe8\xb0\xa0\xc7RS \xd1\x1b\xe6\xbf2[\xa8\xb1\x9c$\xc5&4\xf4\x7f\x06\xa8x\xf0K\x17\xaf\xdbe\xf3M\xa9\xd5\x7f~\x9f_ \x0c\x92\r\xd5`\x97D"y\xb5\xf6"\x1f\x13:\t\x0b\x05*\xee\x0f\xd2\xab\xdf\xeb0\xa4\xa41\xf2\x9d\xdb%I\xbd\x8bh\x19\xf0M\xc0\x1b\xf5\x86E\x9eF\xcc\xed\xce1\xaa%"D\'\xf4\xad\xee\xc3\r\x8f\xa0\xb1\xe0Ji8\x0b\xf6\x999[71\xc0\xbf\xc4\xc0\xc4\xee\x9b\x8c\xae\x8bH3\xd1*\xa6T\x18\xd26NK\x8e\x94\xcc_\x95\xc9.\xfd\xa87\xe3\x1a\xb6\xed\x8b\xf0A\x83N\x0f\x1e?\t\xcd\x15\x08\x0bJ\x99\xd4\xfa\xbb\x18\xbc\x7f\x0fW\xccy\xdfG\xb6\x03\x03\x96\x8e\xcd\xab\xb0v2\xa3\x0f\xd9*q>\t\t\xb0\xac\xf3\x07\x80\x13E&\xa6\t')
23.125.107.154 - - [22/Oct/2016 20:37:09] code 404, message File not found
23.125.107.154 - - [22/Oct/2016 20:37:09] "GET / HTTP/1.1" 404 -
123.151.42.61 - - [23/Oct/2016 10:26:37] "GET HTTP/1.1" 404 -
218.93.206.27 - - [23/Oct/2016 16:41:20] code 400, message Bad request version ('0\xf6\xdb\x00\xbd\x00\x00p\xc00\xc0,\xc02\xc0.\xc0/\xc0+\xc01\xc0-\x00\xa3\x00\x9f\x00\xa2\x00\x9e\xc0(\xc0$\xc0\x14\xc0')

There were more than those, most of them looking very much like HTTP vulnerability probes of all sorts. All these requests had me baffled for a while. I was convinced that my network had been breached somehow; maybe a router misconfiguration. Eventually I realised that the source of these requests was my VPN.

I use IPredator often. They provide a pretty reliable service I’m happy with. However, there are details like this one that are not obvious and can bring trouble that you didn’t expect. Security is annoyingly difficult to get right!


A simple asset pipeline with Broccoli.js

Updated on to support Babel 6.

I've been doing some research on how to set up an asset pipeline using Broccoli, which is part of the toolset provided by Ember.js. The official website shows a good example of use, but I wanted to do something a bit more advanced. Here's the result.

At the end of this text, we'll have an asset pipeline able to read these inputs:

And generate these outputs:

I will be using Yarn instead of NPM, because it will create fewer headaches down the road. Also, it's 2017, happy new year!

Basic setup

Broccoli works as a series of filters that can be applied to directory trees. The pipeline is defined on a file named Brocfile.js, which at its minimum expression would look something like this:

module.exports = 'src/html';

A "Brocfile" is expected to export a Broccoli "node", which is a sequence of transforms over a directory tree. The simplest possible example would be just a string representing a filesystem path, so the above does the job. We could read it as "the output of this build is a copy of the contents of the src/html directory".

Note that I say Broccoli "nodes". There's a lot of literature out there referring to Broccoli nodes as Broccoli "trees". It's the same thing, but "node" seems to be the currently accepted nomenclature, while "tree" is deprecated.

Running a build

We have a very simple Brocfile. Let's run it and see its result. We need the Broccoli CLI and libraries for this, so let's first create a Node project, then add the required dependencies:

$ yarn init -y
$ yarn add broccoli broccoli-cli

Then we add the following entry to our package.json:

"scripts": {
  "build": "rm -rf dist/ && broccoli build dist"
}

Now we can run the build process any time with this command:

$ yarn run build

When we run this command, we run a Broccoli build. Since we are not doing much at the moment, it will simply copy the contents of the src/html directory into dist. If dist exists already, our build script deletes it first, as Broccoli would refuse to write into an existing one.

Did you get an error? No problem: that's probably because you didn't have a src/html directory to read from. Create one and put some files in it. Then you'll be able to confirm that the build process is doing what it is expected to do.

NOTE: working with Node/NPM, it's common to see examples that install a CLI tool (broccoli-cli in this case) globally using npm install -g PACKAGE_NAME. Here we avoid this by installing it locally to the project and then specifying a command that uses it in the scripts section of package.json. These commands are aware of CLI tools in our local node_modules, allowing us to keep everything tidier, and locking the package version of the CLI tool along with those of other packages.

Using plugins

Most transforms we can think of will be possible using Broccoli plugins. These are modules published on NPM that allow us to transpile code, generate checksums, concatenate files, and generally do all the sort of things we need to produce production-grade code.

Now, in the first example above we referred to a Broccoli node using the string src/html, meant to represent the contents of the directory of the same name. While this will work, using a string this way is now discouraged. Current advice is to instead use broccoli-source, which is the first of the plugins that we will use in this walkthrough. Let's install it:

$ yarn add broccoli-source

Now we can require it into our Brocfile and use it. I'm going to use variables in this example to start giving this pipeline some structure:

var source = require('broccoli-source');
var WatchedDir = source.WatchedDir;
var inputHtml = new WatchedDir('src/html');
var outputHtml = inputHtml;

module.exports = outputHtml;

If we run the build, we'll get exactly the same result as before. We needed more code to get the same thing, but this prepares us for things to come, and follows best practices.

The development server

In the previous example, we referred to the input HTML as a WatchedDir. This suggests that, similarly to other build tools, Broccoli includes a development server that will "watch" the input files, running a build automatically when we save any changes. Let's create a command for this in our package.json file, adding a new entry to the scripts section:

"scripts": {
  "build": "rm -rf dist/ && broccoli build dist",
  "serve": "broccoli serve"
}

Now we can start the development server with:

$ yarn run serve

Assuming you have a file called index.html in your src/html directory, you should see it at the URL http://localhost:4200. If the file changes, you can simply refresh the page and the changes will appear without you having to explicitly run the build.

Adding a CSS pre-processor

So far this isn't very exciting. The development server is just showing copies of the HTML files in our project. Let's add a proper transform.

For this we can use a CSS pre-processor. For example, we can install the Sass plugin:

yarn add broccoli-sass

Require it at the start of our Brocfile:

var sass = require('broccoli-sass');

And add it to our pipeline on the same file:

var inputStyles = new WatchedDir('src/styles');
var outputCss = sass([inputStyles], 'index.scss', 'index.css', {});

This example will:

  - take the contents of src/styles as its input
  - look in there for a file called index.scss
  - compile it with the Sass pre-processor, producing a file called index.css

There's a problem now. We have an HTML pipeline and a Sass pipeline. We have to merge the two into a single result.

Merging Broccoli nodes

When you have several sources of code, to be treated in different ways, you get separate Broccoli nodes. Let's merge the ones we have into a single one. Of course for this we need a new plugin:

$ yarn add broccoli-merge-trees

Now we can perform the merge and export the result:

var MergeTrees = require('broccoli-merge-trees');

// ...process nodes...

module.exports = new MergeTrees(
  outputCss,
  outputHtml,
);
Now ensure that your HTML points to the produced CSS, which in the above example we have called index.css. Reload the development server and check the results.

From modern JS to one that browsers understand

All that was quite easy. Dealing with JavaScript took some more figuring out for me, but eventually I got there. Here's my take on it.

We are going to transform some ES6 files into a more browser-friendly flavour of JavaScript. For this, we need Babel, and there's a Broccoli plugin that provides it for us. We start by installing the appropriate package, as well as a Babel preset that provides the transforms we need:

$ yarn add broccoli-babel-transpiler babel-preset-env

And now we alter our Brocfile.js to look like this:

var babelTranspiler = require('broccoli-babel-transpiler');

// ...etc...

var BABEL_OPTIONS = {
  presets: [
    ['env', {
      targets: {
        browsers: ['last 2 versions'],
      },
    }],
  ],
};

var inputJs = new WatchedDir('src/js');
var outputJs = babelTranspiler(inputJs, BABEL_OPTIONS);

// ...etc...

module.exports = new MergeTrees(
  outputCss,
  outputHtml,
  outputJs,
);

The BABEL_OPTIONS argument can be used to tell Babel what platforms its output should target. In this case, we specify that we want code compatible with the last 2 versions of current browsers. The browser names and queries that babel-preset-env understands come from the browserslist project.

Write some JavaScript that uses modern features of the language, and put it in src/js, then check the results. Remember to restart the dev server and reference the JS files from your HTML. The output will consist of files of the same name as those in the input, but converted to JavaScript compatible with current browsers.

NOTE: in previous versions of this guide, we didn't need BABEL_OPTIONS, as Babel's default behaviour was good enough for us. Since version 6 of Babel, we need to be more explicit as to what exactly we want, and this new argument is now required.

Local JavaScript modules

The one thing Babel is not doing there is handling module imports. If your project is split into several modules, and you use import in them, those lines will have been transpiled into require calls, but these won't actually work in a browser. Browsers can't handle JavaScript modules natively, so we need a new step that concatenates all files into a single one, while respecting these module dependencies.

I have figured out a couple of ways of doing this, so I'll explain the one I like best. First we are going to need a new Broccoli plugin:

$ yarn add broccoli-watchify

Watchify is a wrapper around Browserify. In turn, Browserify reads JavaScript inputs, parses them, finds any require calls, and concatenates all dependencies into larger files as necessary.

Let's update the lines of our Brocfile that dealt with JS to look as follows:

var babelTranspiler = require('broccoli-babel-transpiler');
var watchify        = require('broccoli-watchify');

// ...

var inputJs = new WatchedDir('src/js');
var transpiledJs = babelTranspiler(inputJs, BABEL_OPTIONS);
var outputJs = watchify(transpiledJs);

The watchify transform assumes that you will have a file index.js that is the entry point of your JavaScript code. This will be its starting point when figuring out all dependencies across modules. The final product, a single JavaScript file with all required dependencies concatenated, will be produced with the name browserify.js.
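In other words, calling the plugin with no options is assumed to behave as if we had spelled out these defaults (a sketch of a Brocfile fragment; transpiledJs is the node built earlier):

```javascript
// Assumed equivalent of the defaults described above: entry point
// index.js, concatenated output named browserify.js.
var watchify = require('broccoli-watchify');

var outputJs = watchify(transpiledJs, {
  browserify: {
    entries: ['index.js'],
  },
  outputFile: 'browserify.js',
});
```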

Note that imports are expected to use relative paths by default. That is, the following won't work, as it uses a bare module name rather than a relative path:

import utils from 'utils';

But this will (assuming the module utils lives in the same directory as the one doing the import):

import utils from './utils';
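The distinction Browserify draws can be sketched as a simple classification rule (a simplification, with a hypothetical helper name):

```javascript
// Simplified sketch of module specifier classification:
// relative specifiers resolve against the importing file, while
// bare names are searched in configured paths (e.g. node_modules).
function isRelativeSpecifier(specifier) {
  return specifier.indexOf('./') === 0 || specifier.indexOf('../') === 0;
}

console.log(isRelativeSpecifier('./utils')); // → true
console.log(isRelativeSpecifier('utils'));   // → false
```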

That is the default behaviour. If you use different settings, you can pass some options in. For example, say that you want Browserify to:

  - use app.js as the entry point, instead of index.js
  - name the output file index.js, instead of browserify.js

To achieve this, you invoke it with these options:

var outputJs = watchify(transpiledJs, {
  browserify: {
    entries: ['app.js']
  },
  outputFile: 'index.js',
});

Using modules from NPM

The best thing about Browserify, though, is that it can pull NPM modules into your project. For example, say you want to use jQuery. First you have to fetch it from NPM:

$ yarn add jquery

Then you would import it in a module in your own code:

import $ from 'jquery';

// ...

And finally you tell the Watchify plugin where it can find it, passing an option pointing to your local node_modules as a valid place to pull modules from:

var outputJs = watchify(transpiledJs, {
  browserify: {
    entries: ['index.js'],
    paths: [__dirname + '/node_modules'],
  },
});

In this example, jQuery will be pulled into the final file, where your code can use it freely.

NOTE: even though by default it expects index.js as the entry file, I have noticed that sometimes watchify (or Browserify, or the plugin, or something) doesn't work correctly if we pass options but don't specify the entries value. Therefore, I recommend always including it.

A complete example

I have a GitHub repo that I'm using to experiment with build tools. At the time of writing, there are two working Broccoli examples that you can check out. I may add others in the future, as well as examples with other tools.

Check it out at pablobm/build-tools-research. I hope you find it useful.




I choose you, Ember.js

There are too many JavaScript frameworks out there, and I am not a JavaScript expert. I have neither the time nor the inclination to put them all to the test and come up with rational, experience-based arguments as to which one is superior, be it in general or for a specific task. Similarly, if I look for opinions of experts, I am always going to find articles praising one over the others, in all sorts of situations and contexts. Therefore, when the time came for me to choose one of them to use, I had to rely on different metrics.

My choice was Ember.js, and these are the reasons for my decision.

A community effort

Ember is a fully community-directed effort. Several companies, large and small, with competing interests are involved in its development. This is in contrast with React.js, developed mainly by Facebook, or Angular.js, developed mainly by Google.

I do have a small, yet significant concern that these frameworks will make progress in accordance with the desires of their primary backers, rather than those of the community. In other words: this opens the door for behaviour, features, or simply general direction that benefit the use cases of a single company, against those of their users outside of it. This is a concern I do not have with Ember.

(There’s also the matter of whether React’s license is open source or not, good or bad, but I’m not going to go in there).

The development process

In a related point, Ember’s development process is extraordinarily open and deliberate. Every substantial change requires going through an RFC process where the initiator must not only present their case, but also list possible drawbacks and alternatives. This greatly increases the chances of all possible use cases being considered for the benefit of all users.

Convention over Configuration

Ember follows the principle of Convention over Configuration. Some time ago, I tried out Angular.js, and one of the sensations I had was that I didn’t know where stuff went; how to properly organise a project. As a long-time user of Ruby on Rails, I appreciate a framework being opinionated. It removes from my mind these small worries, while letting me focus on how to work on the actual problems that the project was created to solve. It enables new members of the project to quickly understand it, as the structure will be similar to that of other projects with the same framework. It provides tested, stable solutions for common problems, supported by established best practices in the community.

I have met people who felt otherwise, and wanted more control over small and purely technical details of the software they were building. I think that is not control, but an illusion of it. At the end of the day, you will end up having to build the same foundations that an opinionated framework would have provided in the first place: you are bound to reinvent the wheel… poorly.

Stability without Stagnation

Ember commits to provide Stability without Stagnation. The JS community is too used to having to reinvent itself every year, throwing away yesterday’s tools and rebuilding everything at the whim of today’s new fad. Ember promises to offer clearly-defined periods of backwards compatibility combined with clear upgrade paths. In the words of co-creator Tom Dale:

The Ember community works hard to introduce new ideas with an eye towards migration. We call this “stability without stagnation”, and it’s one of the cornerstones of the Ember philosophy.

An example of this is the introduction of LTS (Long Term Support) releases. Development teams have better things to do than upgrading their framework every six weeks (the length of Ember’s release cycle). To avoid this, LTS releases allow teams to stick to versions for longer, while still getting support (ie: bugs being addressed) and increased attention to upgrade paths.

Not everything is rosy

Before order, there was chaos

Before Stability without Stagnation, there was a fair amount of confusion. Not sure when this changed, but at some point before version 1.13 order was restored, or rather formally instituted for the first time. Until then, changes were happening too rapidly and without that much fanfare. Each new version could bring a breaking change that would affect your app. It affected me a bit, but I was lucky to arrive towards the end of this previous era, and with small apps that were relatively simple to upgrade, so I made it through unscathed.

Lessons were learned and good overcame evil, but some of the fallout remains. One manifestation of this is that I must filter Ember-related searches to content from 2015 onwards. Anything older than that is very likely to refer to abandoned practices and interfaces. Remember to do this when you search for resources.

Documentation can improve

Ember’s official documentation wasn’t always up to scratch. Fortunately, this has also been addressed with the creation of the Learning sub-team, tasked not only with improving the documentation, but also with generally making Ember easier for users old and new.

There’s still some way to go here though. I regularly have conversations with Ember users where we agree that certain idioms are not well explained, and sometimes there’s confusion as to the best way to implement certain common patterns, or make known parts of Ember work together. Still, I have seen great improvement in the last year, and I have high hopes on this front.

Ember is large

As built on my laptop right now (using Ember 2.9.1), the framework portion of Ember.js weighs 655.62 KB (175.49 KB gzipped). That’s a big download for the JS of your website, and about 3x-4x the size of React’s according to comparisons online.

In its defence, Ember does more than React, and when using the latter you’d need to incorporate a large number of third-party packages in order to get the functionality that you’d normally get by default with Ember. And sure, you may not need some pieces, but it does help your process if you don’t have to think about which plugin to choose when you are faced with a decision on how to fill a functionality gap that Ember would have already provided.

Not only that: Ember is en route to provide a solution for this problem too. It won’t be tomorrow, but it will arrive. There is an active effort, formalised in the form of several RFCs, to reduce the weight of your Ember builds. One such feature will be present in Ember 2.10 (slated for November 2016): lazy engines. This is the ability to split your application into smaller applications that will only be loaded if the user actually visits them, all with minimal configuration on the developer’s part. More is expected to come, such as dead code elimination or removal of old APIs (within the constraints of Stability without Stagnation).

Drinking the Kool-Aid

This is the Kool-Aid I have decided to drink. From the information available to me, it does appear to be the best, most sensible choice. I am hopeful it will be. Of course I can be wrong, but I have to work with what I have.


Palm-mute on GarageBand

Playing around with GarageBand recently, I had a metal guitar track (a software instrument, not recorded), and I wondered how to make it sound like it was palm muted.

There are many tutorials on YouTube on many GarageBand-related topics, but I couldn’t find anything on this. Finally, I stumbled across it mostly by accident, so here it is for anyone who may be having the same problem.

In short: use the “modulation” function, bringing it to a high value.

For a quick example, create a guitar track and bring up the “musical typing” tool (Cmd+K). Press 8 to set the highest level of “Modulation” and then press any note. You should hear the muffled guitar sound characteristic of palm muting. If you now press 3 to bring the “Modulation” setting back to normal and try again with a note, the string will ring unmuted.

Musical typing tool, modulation set at highest value and pressing a low E note


To use this in a track:

  1. Open the editor/piano roll (press E)
  2. Expand the MIDI draw (icon with three lines on the top left corner of the editor)
  3. Under “Controller”, select “Modulation” on the dropdown (I think it’s the default option anyway).
  4. And now the part that requires some patience: adjust the modulation value for the track so that it’s near the top when you want to apply palm muting, and lower when you don’t.
Modulation adjusted at different values as the track progresses


What does this have to do with modulation as a sound concept? Beats me. I guess GarageBand repurposes the function for miscellaneous uses where the original sense doesn’t apply. I don’t really know anything about sound engineering.


The “p” sound in English

For some time, I had noticed some people taking my name and spelling it as “Bablo”. This was more common with, say, couriers than with cafe baristas, probably because the former tend to be from countries where English is a first language, whereas the latter tend to be from Southern European countries. Or at least that’s my perception, in my specific bubble in London. I could be wrong. I would love to see some figures on that.

For a time, I made an effort to emphasise that misheard “p”, but people kept getting it wrong. I would increase my degree of emphasis, and ultimately I would end up sounding really silly… and still getting my name misspelled by couriers.

It was only the other day, playing with the voice control of a 4th generation Apple TV, that it dawned on me what I had been doing wrong all this time. I was spelling out some login credentials, letter after letter, and Siri refused to acknowledge a “p”, insisting that I was giving her (it?) a “b”, no matter how much force I put on my labial stop. And the person next to me, an Irish national, agreed with Siri. Finally I had an opportunity to unravel this mystery.

Long story short: the Spanish “p” is different from the English “p”. It took me 12 years in the UK to realise.

Wikipedia starts providing a clue, then confuses the reader, then a proper read shows the difference. The page for Spanish phonology describes the “p” as “labial stop” as expected (by myself anyway), while the English phonology page has it as “labial plosive/affricate fortis”. Promising, whatever that meant… Or it appears to be until both links (click on the “p” phoneme on the table) lead to the same page: Voiceless bilabial stop. W00t?

It is in the examples section of the page that the Spanish phoneme is described as a “plain p” or /p/, whereas the English one appears as “aspirated p” or /pʰ/. And that was exactly what I learned spelling out letters to Siri with the assistance of an English-speaking human. I had to pronounce it in a way that, to me, sounded more like a “pf”, with a very light “f”, for Siri (and the human) to accept. Not only that, further down the password there was also a “t” that Siri would get as “d”, and the problem was pretty much the same: I had to pronounce it a bit more like “tch”.

You see, I speak English well. I learn it from a book.


Up-to-date Firefox on Debian Live

The computers where I install Debian Live are used to access a relatively limited set of websites. The default browser on the builds is Iceweasel, Debian’s rebranding of Firefox which came to be circa 2004-2006 due to esoteric legal reasons. The version number is 38, which is an ESR: a long-term support version scheduled by Mozilla to receive security updates for an extended period of time.

Now the problem: one of the websites we use recently started complaining that our browser was too old. Well, it actually is old. Firefox/Iceweasel 38 was first released in May 2015, and a new ESR cycle (version 45) has started since then. Website maintainers tend to recognise this, and this website appears to require the latest ESR. Everything appears to work correctly, but it does show a warning and I am concerned it could actually start breaking at any moment.

So what’s a Debian Live admin to do? I found that Debian do publish up-to-date browser packages (or as up-to-date as the latest ESR anyway) in their repositories. However, they’re not in the obvious, default repositories but in the ones for security updates, which need to be added to my sources.list. These are normally active by default in new installs of Debian, but for some reason they are not on Debian Live builds.

Also to my surprise, while looking for a solution I found that the whole Iceweasel/Firefox drama has finally come to an end. Things have changed over the years: Mozilla has updated its licensing policies, getting Debian to agree to bundle Firefox, actually branded as Firefox, in its distribution. These new packages are already available, also on the repositories for security updates.

This means that I can have Firefox instead of Iceweasel, and it can be version 45, which fits my requirements.

To have my Debian Live ISOs bundle this updated Firefox, I have to add the source line to a file in config/archives/. This is a valid example:

# config/archives/security.list.chroot
deb jessie/updates main

If you were including Iceweasel in your builds, that should get you version 45. Now, if you want Firefox you can reference it (as firefox-esr) instead in your package lists. For instance:

# config/package-lists/firefox.list.chroot
firefox-esr

The rest is just running a build and installing the result in your computers.


Upgrading to Ember 2.x

As part of my effort to learn Ember.js, I built a relatively simple app that used the TfL API to fetch bus arrival times at nearby stops. It can also do tube, DLR and overground, and eventually I may include other means of transport. It’s called London Waits.

All this was built in Ember 1.13, and only now did I decide to upgrade it to Ember 2. It wasn’t trivial, but in the process I gathered some notes that may help others.

Ember 2.0: initial upgrades

I started by upgrading Ember proper to 2.0.x. This can be done with Bower:

$ bower install ember#2.0.x --save

There was immediately a problem. When I ran the app, it complained that it required a more recent version of jQuery:

Error: Assertion Failed: Ember Views require jQuery between 1.7 and 2.1

I did as told:

$ bower install jquery#2.1.x --save

This got my tests passing again, so I proceeded to upgrade Ember Data. Now, before version 2.3, Ember Data had both an npm module and a Bower component, and both were required. I’m not entirely sure, but I think the npm module provided CLI tools, whereas the Bower package provided the library proper. In fact, try upgrading the npm package only first:

$ npm install ember-data@2.0.x --save-dev

My tests pass after that, but the debug messages in the console reveal that I’m still using an old version of Ember Data:

DEBUG: -------------------------------
DEBUG: Ember      : 2.0.3
DEBUG: Ember Data : 1.13.15
DEBUG: jQuery     : 2.1.4
DEBUG: -------------------------------

This is fixed by installing the Bower package. Pay special attention to the fact that the version comes after a ‘#’, not an ‘@’, and that the dependency is saved with runtime packages --save, instead of development packages --save-dev. Easy to miss!

$ bower install ember-data#2.0.x --save

My tests were passing and I was on Ember 2. Yay!

Ember 2.1: problems with error routes

Carrying on with the upgrade, I went for Ember 2.1. It should be just a matter of upgrading the package, right?

$ bower install ember#2.1.x --save

However, one of my tests stopped working: one that ensured that the app rendered an error route correctly. After some online research, I discovered that this indeed stopped working in the run up to 2.1, and there seems to be no official support. I was rather bummed by this, but fortunately I found a workaround in a thread discussing this issue on GitHub.

I generalised the workaround into a helper and published it as a GitHub Gist, although the example provided has some additional changes that we’ll get to later in this writeup. Right now, for the code just as it was upgraded from 1.13, the example should look like this:

/// tests/acceptance/errors-test.js
import Ember from 'ember';
import { module, test } from 'qunit';
import startApp from 'london-waits/tests/helpers/start-app';
import errorStateWorkaround from 'london-waits/tests/helpers/error-state-workaround';

module("Acceptance | errors", {
  beforeEach: function() {
    this.application = startApp();
    errorStateWorkaround.setup(err => {
      // Return `true` if `err` is the error
      // we expect, and `false` otherwise
    });
  },

  afterEach: function() {
    errorStateWorkaround.teardown();
    Ember.run(this.application, 'destroy');
  },
});

test("Something that lands an error", function(assert) {
  // Do something that would get the user to
  // an error route or substate

  andThen(function() {
    // Assert that the error has occurred as expected
  });
});

After that, I got my tests passing. Now it’s time to upgrade Ember Data:

$ npm install ember-data@2.1.x --save-dev
$ bower install ember-data#2.1.x --save

And that’s all for 2.1.

Ember 2.2: liquid-fire needs upgrading

A simpler one now. My app was using the liquid-fire addon at version 0.21.2. This was not compatible with Ember 2.2. The error I was getting was:

TypeError: renderNode.state is undefined

Anyway, I upgraded liquid-fire along with Ember itself. These are the lines to run, and the ones for Ember Data 2.2:

$ bower install ember#2.2.x --save
$ npm install liquid-fire --save-dev
$ npm install ember-data@2.2.x --save-dev
$ bower install ember-data#2.2.x --save

Ember 2.3: Ember Data is not in Bower any more

For Ember 2.3, just install Ember 2.3 (d’oh):

$ bower install ember#2.3.x --save

The release notes for Ember 2.3 advise that we upgrade ember-qunit, which is a Bower package. I wasn’t getting any errors on my tests, but did it nonetheless:

$ bower install ember-qunit --save

As for the release notes for Ember Data 2.3, these inform us that the Bower package is not required any more, as Ember Data is now a full-fledged Ember addon. Also, ember-cli-shims (a Bower package) needs to be upgraded to 0.1.0. Manually remove any references to ember-data from bower.json:

diff --git a/bower.json b/bower.json
index 214ef1e..8f74eb7 100644
--- a/bower.json
+++ b/bower.json
@@ -4,7 +4,6 @@
     "ember": "2.3.x",
     "ember-cli-shims": "ember-cli/ember-cli-shims#0.0.3",
     "ember-cli-test-loader": "ember-cli-test-loader#0.1.3",
-    "ember-data": "2.2.x",
     "ember-load-initializers": "ember-cli/ember-load-initializers#0.1.5",
     "ember-qunit": "^0.4.20",
     "ember-qunit-notifications": "0.0.7",
@@ -19,7 +18,6 @@
     "Faker": "~2.1.3"
   },
   "resolutions": {
-    "ember-data": "2.2.x",
     "ember": "2.3.x",
     "jquery": "2.1.x",
     "ember-qunit": "^0.4.20"

And upgrade the required packages at the command line:

$ npm install ember-data@2.3.x --save-dev
$ bower install ember-cli-shims#0.1.0 --save

2.4: an easy one

$ bower install ember#2.4.x --save
$ npm install ember-data@2.4.x --save-dev

Piece of cake.

2.5: changes to selectors in tests

So I upgraded Ember as usual:

$ bower install ember#2.5.x --save

And tests started failing :-(

I’m not sure where exactly this comes from, but the semantics of jQuery selectors seem to have changed subtly between Ember 2.4 and 2.5, at least in acceptance tests. This affected all my acceptance tests because the click() helper started to fail.

My HTML contains something like this:

<a href="...">
  <p>Foo</p>
  <p>Bar</p>
</a>

And my acceptance tests would do this:


Before 2.5, that would have sufficed to follow the link, but now the helper must be triggering the click event differently, so it’s necessary to be more specific. This works:


And since Ember Data is a proper Ember addon, I thought I would upgrade it as such from now on:

$ ember install ember-data@2.5.x

2.6: finally!

The last step was easy again:

$ bower install ember --save
$ ember install ember-data

Epilogue: updating ancillary files

After all the above, I got the app working on Ember(+Data) 2.6. Although this should be enough, there are still some differences with a proper 2.6 app as generated with ember new. These differences are in the files generated with the application. I decided to update these too to avoid any potential problems in the future.

What I did was generate a new app, then compare the generated files with those in mine. There are a few differences, for example in app/app.js:

@@ -1,6 +1,6 @@
 import Ember from 'ember';
-import Resolver from './resolver';
-import loadInitializers from 'ember-load-initializers';
+import Resolver from 'ember/resolver';
+import loadInitializers from 'ember/load-initializers';
 import config from './config/environment';

 var App;
@@ -25,7 +25,7 @@ Ember.onerror = function(error) {
 App = Ember.Application.extend({
   modulePrefix: config.modulePrefix,
   podModulePrefix: config.podModulePrefix,
-  Resolver
+  Resolver: Resolver
 });

 loadInitializers(App, config.modulePrefix);

Also there are new files, such as tests/helpers/module-for-acceptance.js, which is used in acceptance tests.

There were differences too in package.json and bower.json. I updated the packages that were behind, leaving alone those for which my version was ahead. All in all, I tried to reduce differences with a 2.6 app as much as possible. This included:

This was a slow and annoying process, but it may spare me new problems in the future. Also, I avoided including the addon ember-welcome-page because it’s only useful in new installs, and is expected to be removed by the developer after work starts.

After all these changes, my tests failed again, once more because of a subtlety, this time in the moduleForAcceptance test helper. In two of my tests, I had a utility function defined as part of the test module:

/// tests/acceptance/my-test-test.js
module("Acceptance - my test", {
  // ...
  myUtilityFunction() {
    // do useful stuff...
  },
  // ...
});

test("Test case", function(assert) {
  // ...
  this.myUtilityFunction();
  // ...
});

It turns out this doesn’t work when using moduleForAcceptance instead of module. I had to define the utility function outside and bind it to the module in beforeEach:

/// tests/acceptance/my-test-test.js
function myUtilityFunction() {
  // do useful stuff...
}

moduleForAcceptance("Acceptance - my test", {
  // ...
  beforeEach() {
    this.myUtilityFunction = myUtilityFunction;
  },
  // ...
});

test("Test case", function(assert) {
  // ...
  this.myUtilityFunction();
  // ...
});

This got me back on green tests.

After all this, I still have a deprecation notice coming from liquid-fire. There doesn’t seem to be anything I can do about that, so I’ll just leave it there and wait until a new release of the addon fixes it.


Implicit index routes in Ember.js

TLDR: Ember.js implicitly creates an index child route for you when you nest routes under a parent route

This appears to be not so well known in the Ember community, and I don’t think it’s documented anywhere. I should have a go at clarifying this in the official docs (through a pull request on the project), but for the moment I’ll dump it here.

This is how it works. Say you declare a route in router.js:

this.route('lines');
This declaration will get you a route, called lines. Nothing new here. Now let’s declare the same route, but with nesting:

this.route('lines', function() {});

This is subtly different. We added nesting but left it empty, so there’s no reason to think that the result would be any different. However, there is indeed a difference. This will get you not one but two routes: lines and lines.index. Specifically, it’s the same as declaring the following:

this.route('lines', function() {
  this.route('index', { path: '' });
});

This actually makes a lot of sense. If we think of lines as a “trunk” route and lines.index as a “leaf”, it turns out that trunk routes cannot be “landed on”. These routes have an outlet that needs to be filled. If we try to land on a trunk route, for example using transitionTo, Ember will redirect us to the index leaf route under it, and the outlet will get its content. In other words, these two transitions are equivalent, assuming that lines is a route with nesting:

this.transitionTo('lines');
this.transitionTo('lines.index');
These implicit index routes are implemented with an empty route module and an empty template. We don’t notice any of this, but we can verify it by using the Ember Inspector, where we can see these routes, along with loading and error subroutes:

Implicit routes showing up on the Ember Inspector


This can go several levels deep. If we explicitly declare an index route with nesting, Ember will declare yet another index under it:

this.route('lines', function() {
  this.route('index', { path: '' }, function() {
  });
});

If we try to transition to lines, Ember will take us to lines.index and then in turn to lines.index.index. This can go on for as long as necessary, until a leaf route is found where we can land safely.


Large files on a custom Debian installer

I have since written about a better, simpler way of doing this. Check it out at Third party files on a custom Debian Live installer.

Continuing my work on a custom Debian distribution, there was a third-party package I wanted to include in it. Unfortunately, it was proprietary software, it wasn’t included in Debian non-free, and it was a pretty large download (about 1GB).

Debian Live doesn’t seem to have a way to handle a case like this, or at least one that doesn’t involve significant drawbacks.

Initial approach

A first approach would be to download the package and store it with the build scripts, handy to be installed during the build process. The problem with this is that I keep my build scripts under source control in GitHub. Adding a 1GB file to a git repository is generally not a good idea, less so when that repo weighs under 1MB otherwise (this is including all commits in its history). There’s also the matter of whether the software license would allow uploading it to GitHub or a similar service, which legally could constitute unauthorised distribution.

What would be better is to only keep a reference (a URL) to this package, downloading it during the build process. Here the problem is that the package will have to be downloaded every time we run a build; a large download that significantly slows down the process (already slow) and makes it more cumbersome to iterate quickly.

Debian Live keeps a cache directory where it stores the Debian packages that it downloads during a build. It should be possible to download our large package there, and avoid re-downloading if it exists already… except that the cache directory is not accessible from the chroot jail where custom scripts are run. It’s from one of these scripts that I would run the downloaded file (if it is an installer) or otherwise put it in the correct location, so access from these scripts is necessary. Well, bummer.

I also tried keeping a separate cache directory next to where the chroot hook scripts are kept (that would be config/hooks/cache), but those scripts don’t appear to be run off that location, and again that file hierarchy doesn’t appear to be accessible.

My solution

In the end I went for something a bit more involved. I altered the build scripts to add the following steps:

  1. Download the large files to a local cache directory, skipping any that are already there
  2. Start a small HTTP server on the host, serving that directory, and take note of its pid
  3. From a chroot hook, fetch the files from localhost over HTTP and install them
  4. Shut down the server once the build is done

In other words: although the chroot jail doesn’t allow us to copy files from the external filesystem, it doesn’t stop us from accessing files that are served over HTTP from that external filesystem.

The scripts

This is what my build script looked like after adding this feature:

#!/usr/bin/env sh

set -e

BASE_DIR=$(dirname "$0")/..

# ... (definitions of $DOWNLOADS_DIR, $DOWNLOADER_PATH, $SERVER_PATH,
#      $PID_FILE_PATH and friends elided) ...

mkdir -p "$DOWNLOADS_DIR"

# ... (here the downloader script is run, followed by the server script,
#      which writes its pid to $PID_FILE_PATH; both are shown below) ...

lb build noauto "${@}" 2>&1 | tee build.log

PID=$(cat "$PID_FILE_PATH")
kill "$PID"

For the final implementation, I divided the files I wanted to copy into two categories: large ones I needed to download from the Internet, and smaller ones that I was OK with adding to the repository. In this second group there can be file checksums, customised config files, and probably other things.

There is a “downloader” script, referenced as $DOWNLOADER_PATH, that reads a list of files ($DOWNLOADS_LIST_PATH). For each entry, if it looks like a URL, the file is downloaded from the Internet. Other entries are expected to be files residing at $LOCAL_FILES_DIR. All files are copied to $DOWNLOADS_DIR, renamed as indicated in each entry of the list.

A server script ($SERVER_PATH) is then run, and we take note of its pid. This way, we can cleanly shut it down after the build is done.

This is the downloader script:

#!/usr/bin/env sh

while getopts ':d:l:' opt; do
  case $opt in
    d)
      DOWNLOADS_DIR="$OPTARG"
      ;;
    l)
      LOCAL_FILES_DIR="$OPTARG"
      ;;
  esac
done

shift $((OPTIND-1))

DOWNLOADS_LIST_PATH="$1"

cat "$DOWNLOADS_LIST_PATH" | while read -r NAME URL; do
  DEST_PATH="$DOWNLOADS_DIR/$NAME"
  if echo $URL | grep -qE '^https?://' ; then
    wget -c --output-document "$DEST_PATH" "$URL"
  else
    cp "$LOCAL_FILES_DIR/$URL" "$DEST_PATH"
  fi
done

As mentioned before, it’s fed a list of files ($DOWNLOADS_LIST_PATH) whose entries may refer to URLs or to local files (under $LOCAL_FILES_DIR). Either way, the result is copied to $DOWNLOADS_DIR. wget’s -c option ensures that interrupted downloads are resumed, and already-complete files are not downloaded again.

This is an example of a file list:

big-file http://example.com/path/to/big-file
small-file small_file_kept_locally

For each entry, the first word is the name that the downloaded/copied file will receive in the cache directory. The second word is either the URL to retrieve it from, or the name of a file in the local filesystem.
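
For illustration, this naming rule can be sketched in Ruby (the real implementation is the sh script above; classify_entries and the :download/:local_copy labels are my own, not part of the actual scripts):

```ruby
# Each line of the list is "name source". Sources that look like URLs are
# marked for download; anything else is marked for copying from the local
# files directory.
def classify_entries(list_text)
  list_text.each_line.map do |line|
    name, source = line.strip.split(" ", 2)
    kind = source =~ %r{\Ahttps?://} ? :download : :local_copy
    [name, source, kind]
  end
end
```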

This script runs the HTTP server:

#!/usr/bin/env sh

while getopts ':P:' opt; do
  case $opt in
    P)
      PID_FILE_PATH="$OPTARG"
      ;;
  esac
done

shift $((OPTIND-1))

cd "$1"  # the directory to serve, passed as the first argument

python -mSimpleHTTPServer 12345 &

echo $! > "$PID_FILE_PATH"

Nothing much to see here, apart from the option used to specify a pid file. This will be used to shut the server down cleanly at the end.

Finally, there’s the chroot hook that retrieves the files “over the fence” of the chroot jail and actually installs them. It will be something like this:

#!/usr/bin/env sh

set -e

WORKING_DIR=$(mktemp -d /tmp/tmp.XXXXXXXXXXXXX)
cd "$WORKING_DIR"

wget http://localhost:12345/big-file
wget http://localhost:12345/small-file

# Do something with the downloaded big-file and small-file
# ...

# Finally we clean after ourselves
cd /
rm -rf "$WORKING_DIR"


Hash syntax in Ruby 2.2

Twice in the last three weeks, I have stumbled upon a relatively recent addition to Ruby’s syntax: a new way to define symbol keys in a Hash literal.

Since Ruby 2.2, the following syntax is legal:

{ "foo": "bar" }

The weird thing here is that the above literal doesn’t mean what I thought it meant. This is what I expected it to be equivalent to:

{ "foo" => "bar" }

And this is what it actually is equivalent to:

{ :foo => "bar" }
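
Assuming Ruby 2.2 or later, this can be verified directly:

```ruby
a = { "foo": "bar" }    # the new Ruby 2.2 syntax
b = { "foo" => "bar" }  # String key
c = { :foo => "bar" }   # Symbol key

a == c  # => true: the quoted "foo" becomes the Symbol :foo
a == b  # => false: it is not a String key
```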

I have to admit that both meanings are perfectly reasonable in their own way. To me, it was annoying because I copied some JSON into my code and forgot to change the colons (:) to hash rockets (=>). My tests started failing and it took me a while to realise what was going on. An older Ruby would have given me a syntax error and I would have realised straight away.

Another reason I find it surprising is that, since the introduction of the colon-based syntax for hashes in Ruby 1.9 (e.g. {foo: "bar"}), I have had some opportunities to see mixes of colons and hash rockets such as this one:

{
  foo: "bar",
  1 => "baz",
  "uno" => "dos",
}

I was hoping that, eventually, Ruby would get a syntax compatible with JSON’s, so that I could just copy the latter into the former without the problem described above, just as you can in JavaScript. The two syntaxes were not compatible, but they didn’t conflict either: Ruby just needed to extend the colon syntax to string keys. However, the change in 2.2 eliminates this possibility, as it introduces an element that means different things in each format.
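
In other words, pasting JSON into Ruby source now parses without error, but produces a different hash than an actual JSON parser would. A small illustration (the "name"/"Alice" data is arbitrary):

```ruby
require "json"

copied = { "name": "Alice" }                # JSON pasted as a Ruby literal
parsed = JSON.parse('{ "name": "Alice" }')  # the same text parsed as JSON

copied.keys       # => [:name]  (Symbols)
parsed.keys       # => ["name"] (Strings)
copied == parsed  # => false
```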

I have no idea what the exact line of thought was in Ruby’s core team when they decided to introduce this (I haven’t been able to find a relevant link). However, I did find a defence of it on StackOverflow.

The argument goes that symbols need to be defined with quotes when they contain certain characters, e.g. :"strange-symbol". Therefore it makes sense that these two keys are both interpreted as symbols:

{
  normal_symbol: "foo",
  "weird-symbol": "bar",
}
Which, OK, I admit also makes sense, but it sure is weird to me.
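
A quick check confirms that both keys end up as Symbols:

```ruby
h = {
  normal_symbol: "foo",
  "weird-symbol": "bar",
}

h.keys  # => [:normal_symbol, :"weird-symbol"] -- both are Symbols
```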


Memory upgrade for an MSI Wind U135

I own an inexpensive netbook that I take with me when I travel. Specifically, an MSI Wind U135. It’s been serving me well for almost 5 years, and the other day I decided to upgrade it a bit, by adding some memory.

After some online research, it was clear I could only upgrade from the builtin 1GB to a total of 2GB, adding a 1GB memory module. What was not so clear though is what specific model of memory I needed. This netbook uses DDR3 SDRAM, but it’s not as easy as just buying that:

The BIOS setup menu lists the “System Bus Speed” as 667MHz. The tool dmidecode confirms this, but doesn’t add anything to it [1]:

$ sudo dmidecode --type memory
# dmidecode 2.12
SMBIOS 2.6 present.

Handle 0x0029, DMI type 16, 15 bytes
Physical Memory Array
        Location: System Board Or Motherboard
        Use: System Memory
        Error Correction Type: None
        Maximum Capacity: 4 GB
        Error Information Handle: Not Provided
        Number Of Devices: 2

Handle 0x002B, DMI type 17, 28 bytes
Memory Device
        Array Handle: 0x0029
        Error Information Handle: Not Provided
        Total Width: 64 bits
        Data Width: 64 bits
        Size: 1024 MB
        Form Factor: DIMM
        Set: None
        Locator: A1_DIMM0
        Bank Locator: A1_BANK0
        Type: <OUT OF SPEC>
        Type Detail: Synchronous
        Speed: 667 MHz
        Manufacturer: A1_Manufacturer0
        Serial Number: A1_SerNum0
        Asset Tag: A1_AssetTagNum0
        Part Number: Array1_PartNumber0
        Rank: Unknown

This would narrow down the search to one of the following models listed on Wikipedia (only the relevant extract of the table is shown):

JEDEC standard modules (extract)
Standard name  I/O bus clock  Module name  Timings (CL-tRCD-tRP)
DDR3-1333F     666.67 MHz     PC3-10600    7-7-7
DDR3-1333G     666.67 MHz     PC3-10600    8-8-8
DDR3-1333H     666.67 MHz     PC3-10600    9-9-9
DDR3-1333J     666.67 MHz     PC3-10600    10-10-10

At this point, I had no idea whether any of the four listed types was OK, or whether I had to find out which exact one I needed. I know next to nothing about hardware, and I wasn’t going to risk it. I needed more information.

I opened up the netbook and looked inside. The builtin memory module is easy to find: four black rectangles lined up in front of the empty upgrade slot:

The memory module, built into the motherboard


Each one of the black rectangles has an inscription with details about the manufacturer, as well as the part model and number. It’s tiny and difficult to read. A picture can help to make out the text. In my case, they look like this:

A single memory chip, inscribed with model parts and numbers


That’s a mouthful, but it makes for a starting point. I decided to look up “H5TQ1G83BFR” first. I got a few links as a result, but the most interesting one was a PDF with technical details of the products offered by the manufacturer. The first page lists three product lines, one of which is “H5TQ1G83BFR-xxC”, where the “xxC” part would match up with the inscription “H9C” on my chips. Promising!

As mentioned, I know nothing about this topic, so the document could be in Klingon for all I understand. Thankfully, I just needed part numbers. Since the document seems to cover the “xxC” lines, and I own an “H9C”, I looked up “H9”, and found exactly what I wanted (again, an extract of the relevant table):

Operating Frequency (extract)
Speed Grade (Marking)  Remark     (CL-tRCD-tRP)
-G7                    DDR3-1066  7-7-7
-H9                    DDR3-1333  9-9-9

According to this, my netbook uses DDR3-1333 9-9-9 which in turn, according to Wikipedia, means the exact spec is DDR3-1333H. With that information (and remembering I was limited to buying 1GB of memory), I could finally go online shopping with enough confidence that I was getting the right product.

[1] Info on using dmidecode at HowtoForge


Ruby: difference between __method__ and __callee__

Just yesterday I learned a little something in Ruby that I found interesting. It’s the difference between __method__ and __callee__.

Both are supposed to return the name of the current method, but there’s a subtle difference:

  1. __method__ returns the name the method was defined with
  2. __callee__ returns the name the method was called by, which can differ when the call goes through an alias
I put together an example that illustrates the difference:

class Bar
  def foo
    puts "__callee__ is #{__callee__}"
    puts "__method__ is #{__method__}"
  end
  alias :baz :foo
end

bar = Bar.new

bar.foo # => Prints "foo" and "foo"
bar.baz # => Prints "baz" and "foo"

This illustrates very well the difference between “message” and “method”. One is the name we use to invoke a method; the other is the name of the method that is ultimately called.
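
The same behaviour can be checked with return values instead of puts. A small sketch (Greeter, greet and hello are made-up names, not from the example above):

```ruby
# greet returns both names, so the difference is observable directly.
class Greeter
  def greet
    [__callee__, __method__]
  end
  alias :hello :greet
end

g = Greeter.new
g.greet  # => [:greet, :greet]
g.hello  # => [:hello, :greet] -- the message differs, the method does not
```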




Proprietary drivers on a custom Debian installer

Recently I had the need to create a custom GNU/Linux distribution. It would install the system in a computer, along with a few handpicked packages, all with minimum fuss. Luckily enough, there’s software for this: Debian Live.


Image by Debian

CC-SA License

The project’s website is not initially the most inviting, but it doesn’t take much digging to find the documentation of the project. Just following the section titled “For the impatient” gets you up and running in no time, creating a customised installer of Debian GNU/Linux. The rest of the documentation covers a number of advanced cases.

Proprietary drivers during installation

However, I did come across a stumbling block. Some of the laptops where I have to install this require a proprietary driver for their WiFi card. The problem here is not getting these drivers installed: instead, the problem is getting the card to work during the installation process itself, so that updated packages can be downloaded if desired.

Initial attempt

Reading the documentation, it’s straightforward to see how to get the drivers working after the installation. Simply add the name of your package to one of the “chroot” package lists:

# config/package-lists/firmware.list.chroot
firmware-b43-installer

And make sure you are fetching packages from the contrib/non-free repositories if required:

# auto/config
set -e

lb config noauto \
    --archive-areas 'main contrib non-free' \
    "${@}"

So that’s simple. Now, how to make this work for the installation process too? On further reading, I am led to think I only need a “binary” package list referencing the same package but, much as I try, that leads nowhere: the required firmware doesn’t end up in the expected place.

The solution

Finally I figured out a way. During the build process, and after the firmware is unpackaged on the “chroot” tree, I can have a hook (see section 9.2 of the manual) copy the files across to the “binary” tree. For this I just needed this file (as well as the two listings above):

# config/hooks/firmware.binary

cp -pr chroot/lib/firmware/b43 binary/firmware/

And that is this problem solved! I’ll note that the installation process still asks whether you want to supply firmware for the wayward device. In my case, it asks three times, each time mentioning different required files. Just answer “Yes” until it’s satisfied. Of course, if it keeps asking about the same files, it’s probably because something went wrong and it can’t actually find them.

A way to avoid this question being asked at all is to use a preseeded configuration:

# config/includes.binary/install/preseed.cfg

d-i hw-detect/load_firmware boolean true

That should make it clear that yes, we want that firmware loaded and there’s no need to ask so many questions.

A working example

A nice feature of Debian Live is that it’s all configured using just a few text files. Very convenient to track your progress with version control. I published the config files for my distro on GitHub if you want to check it out. At the time of writing, it is a working example of the technique described above:




Fishy-looking ATM

Last month I spotted the following ATM card slot, outside a branch of HSBC:

The ATM slot as I found it

I don’t know about you, but to me that looks fishy as hell:

  1. The card reader is a separate, protruding, differently coloured piece of plastic
  2. The surrounding surface shows clear lacerations from tool work

Additionally, it looked very much like the one described in a blog post that I had seen recently, by Paul Battley. Please read his account of the story; it’s not long:

I went into the office and reported my suspicions; however, these didn’t get the treatment I expected. The person I talked to simply smiled, nodded, and showed no sign of caring in the slightest. The most I got was an (I felt) unconvincing assurance that the matter would be looked into at their earliest convenience. The day after, nothing had changed. I then phoned HSBC, but got the very same treatment.

Two weeks after this incident, the card slot remained the same, so I could only assume there was no skimmer device. Then another week later, the slot finally changed, but didn’t improve the situation:

Same ATM slot, some weeks later

Interestingly, Paul Battley’s situation had been handled similarly. In his case it was with RBS as the bank, and without the final slot change:

Paul wonders whether we are screwed because ATM technicians can’t spot skimmer devices, or doomed because safe ATMs can’t be told from tampered ones. After my experience, I have to lean towards the second option.

This also makes me wonder to what extent banks are aware of this problem. Is this a case of the machine looking so fishy that people walk into the branch every day to report suspicion? If that’s it, I can understand why staff don’t seem to care.

I also have reservations about how this problem is treated at a higher level: when I called HSBC, the person on the other side showed no interest either. I was actually asked for the address where this had happened, but it didn’t sound at all like they were writing it down anywhere. I know because my strong accent forces people on the phone to ask again for names and directions, and I always end up spelling them out. None of this occurred in this call, despite my ineptitude at pronouncing the word “Clerkenwell”.

I can see the bank’s side of the argument though. The one obvious way to get this right would be for the ATM’s front to be made of a single piece, with minimal protuberances or holes. I understand this can get expensive if they have to replace all machines across the country. I can’t tell how expensive though, or how this affects the business’s bottom line. Of course, as a plain citizen who mistrusts banks and financial institutions by default, I’d tend to think they should bloody well get their asses in gear and solve this problem, as it should be just peanuts next to their executives’ bonuses, made up from money they swindle from us on a daily basis. But I don’t have the hard numbers, so I won’t go into that argument just yet.


Closing a FirefoxOS app

The documentation for FirefoxOS doesn’t make it obvious how to completely close an application. This makes some sense, since their view seems to be that apps are simply web applications, and should be always on. However, I don’t think this always applies.

Fortunately, there’s a way to do what I want:

window.close();
My use case is the following. I am building an app for FirefoxOS that makes extensive use of fine-grained geolocation. GPS uses up a lot of battery, and it’s not trivial for the app to tell when to use it and when not. Therefore, I need a way for users to close the app, safe in their knowledge that it won’t be draining resources in the background.

I have built a little proof of concept that I have published on GitHub. It works on my FxOS 1.1 phone (ZTE Open).




Dual flush: interface design gone down the drain

For some time now, I have been thinking about dual flush toilets. I find them annoying, or rather: I find their interface annoying.

Their intended purpose is good: not all visits to the toilet require the same amount of water to flush, so why empty a whole cistern every time? Dual flush allows us to save water, which is good for both the environment and our pockets.

However, every time I see one, I am at a loss as to how to operate it correctly. Is it the large portion of the button that I have to press, or the small one?

Regional disclaimer: In Europe, every dual-flush I have seen has a button-based interface. I hear there are lever-based ones in other parts of the world, but I cannot comment on those.

Initially, it would seem reasonable that the large portion is intended to release more water. That sounds simple enough, right? Well, no, because the small portion is often so small or narrow that it is difficult to press it without accidentally pressing the other one in the process.

Therefore, thinking that the opposite is true would also make sense. After all, if you are going to press one of the buttons accidentally, it had better be the one that uses less water. You’d then make this the larger button. The smaller button would then be pressed by users who are certain they need the whole cistern.

I have been so bothered by this lately that I have started to study them. Unfortunately, this forces me to spend more time in the toilet when I am in a new place, but it’s all for science! :-)

Whenever I see a dual flush, I bring out the stopwatch on my mobile phone. I press one of the flush buttons, time how long it takes to operate, wait for the cistern to fill back up, and try again with the other button. Normally several times, just to make sure.

Funny thing is: I have made hardly any progress, because most of the time both options seem to take just as long to operate! So not only are the buttons badly designed, but the feedback (the appearance of the resulting stream of water) is all wrong too, and hardly gives any information as to what is actually going on.

Not all designs I have seen are as bad, but in general the landscape in this field is pretty depressing. The best designs I have seen are not really good, but just less bad. For example, recently I saw this one at a restaurant:

In this instance, pressing the lower portion resulted in a short flush, whereas pressing the upper portion resulted in a long flush.

I am not sure if you can appreciate it well in the picture, but the lower portion bulges out, whereas the top one has a hollow shape, as if sinking away from the user. This makes the lower portion easier to press. Additionally, there is one clear mark on the lower one and two on the upper one, easy both to see and to feel. Also, the upper portion is slightly larger: the lower portion has the typical crescent shape, but less pronounced than in other examples. This follows the idea that the larger surface creates the longer flush.

However, I think this is not good enough. The size of each portion is still small, making it accident-prone. Less so than other common designs, but when you press a button and it sinks in, it’s not trivial to avoid pressing the other one.

Also, I wonder how evident the meaning of the markings is. In my mind it makes sense that two markings mean “more” than one, and therefore should release more water, but what do other people think? After seeing how bad these interfaces normally are, I wonder if their creators ever bothered doing any user testing at all.

But anyway, I’ll continue doing my research. To some it may seem silly toilet humour but, for something that plays such an important role in our lives, I think it’s important to get it right.


My first impressions of Netflix

Yesterday I signed up for Netflix. Before now, I had used it at other people’s homes, but I had not signed up myself because I barely watch series/films at home. But I finally caved in last night, and now I am officially impressed.

All from my Android tablet, I created an account, then downloaded their app, where I browsed their selection and watched my first show. The process went flawlessly, and I have to commend them for the experience they have managed to deliver, at least to a first-timer like me. After this start, there are only three minor things, and a relatively major one, that I want to comment on.

First, I found it funny that they ask users straight away what kind of content they want to watch: drama, comedy, horror, etc. I skipped that part because I just don’t know! I want to watch good shows regardless of their genre. I understand they want me to start feeding their suggestions engine, but I don’t want to restrict my choices just yet.

The second thing was that, at the beginning, I feared that a Facebook account was required to register, as it used to be the case with Spotify. Fortunately, after reading a bit more I saw this was not the case. I am definitely not going to link this up with my Facebook account. I often wonder why I have a Facebook account in the first place, as I rarely use it, but that’s another story for another day.

Third: I was also asked if I was going to use Netflix on (if I remember correctly) my PC, my Mac or my games console. Funnily enough this list doesn’t include tablets, even though I was signing up from one.

But anyway, I don’t think the above are too important. I’m going to move on to the “relatively major” thing, which I must say almost put me off Netflix altogether: the permissions required by the Android app. Why on Earth does the Netflix app require permission to:

  1. Read the phone’s status and identity
  2. Read sensitive log data
I am very concerned about my privacy, and I rarely let these things slip in. I let this one in because I really wanted to try the service.

The phone identity I assume has to do with limiting the number of devices that access the same account, so I don’t give my login details to my friends to use Netflix for free. I do think this is complete bollocks, but I guess this has to do with their contracts with the studios.

The “sensitive log data” permission… I have no clue why that is needed. I will come back to this and see what the repercussions are and what I can do about it, but I am honestly uneasy about it. I hear there are ways to get better control over apps’ permissions, so I’ll have to take a look at that at a later moment.

And one bonus comment now. While writing this, I logged onto the Netflix website with my laptop and clicked on a show. I was mortified to see that the player requires Microsoft Silverlight. I have been avoiding installing it since it first came out, and I’d like to stay clear of it for the foreseeable future.

I understand their contracts with the studios require them to use DRM, and this cannot be achieved with HTML5. I also realise that it is a bit funny that I would not have complained if it had required Flash instead. I like Flash (not) as much as I like Silverlight (not), but the first one crept into our computers a long time ago, so I am not so fussed about it now (and I block it on my browsers anyway).

But enough of negativity for now. The Silverlight problem I’ll have to accept, and the Android permissions problem… I’ll see what I can do. Apart from these two points (and the other minor ones above), I feel I must congratulate the Netflix team for delivering such an awesome experience. This is the future. I am glad that finally somebody was able to pull off something like this. This is the best way to combat piracy: to offer a better alternative.




I don't own a smartphone

I don’t own a smartphone. Not even a sort-of-smart phone. Instead I use a dumbphone, a Nokia 1100 to be precise, that I have been using for over a year now.

It’s 2012, I am a tech-savvy person who builds web applications for a living: why would I want to use a dumbphone? I have the following reasons not to want a smartphone:

Of course this is a tradeoff. Smartphones are a great technology, and mobile apps can be really useful. But honestly, I really don’t see a need for them in my life. Incidentally, I find this makes me more organised: I check maps, timetables, and any other information in advance. I arrive in places knowing what to do, because I have done my homework. I am not so dependent on technology as users of smartphones seem to be, so often checking their maps, social network statuses (this one really irks me), calendars and God knows what.

I don’t need any of that. I want a simple phone. Mine gives me this:

Of course, this doesn’t guarantee that I won’t end up owning a smartphone one day, although I am not in a rush. For now, I would like to wait until I see a reasonably-priced, small terminal with battery life of at least 3 days.


Browser privacy

For some time now, I have been using the private browsing mode (or “incognito mode” or “porn mode”) of my browsers for casual usage. I feel wary of the incredible amount of information that I can let out about myself if I do otherwise.

Real-world cases

I had been aware of this for some time, and we have had some real-world examples. For instance, in 2006 AOL released search data for research purposes. All published records were anonymised by showing user IDs instead of real names. But it was still easy to recognise many people by the things they were looking for, as well as to group their searches together and learn more about them.

Something similar happened to Netflix in 2008. As part of a data mining competition, they had published anonymised logs of their users’ film ratings. Then somebody thought of cross-referencing them with rating data posted publicly by users on IMDb, and managed to find matches for many of those users.


But anyway, the thing is, this really struck me one day when I was using Google Chrome and entered a URL in the address bar (the Omnibox, as they call it). It was then that I realised that Chrome was trying to autocomplete what I entered.

Autocomplete a URL. Think about it. It was not even an attempt to search, but a URL. Chrome was asking Google for candidates to autocomplete what I was entering. As part of the process, Chrome was telling Google exactly what I was typing in the address bar, in real time. This meant that I didn’t even have to explicitly perform a search on Google for them to know which website I was going to visit next.

Just in case it was autocompleting from the browser’s own history or something like that, I checked. The following screenshot shows the requests that my browser fired while I was entering a URL in the address bar. (By the way, I used Charles Web Debugging Proxy for this. Pretty nifty tool.)

Chrome telling Google exactly where I intend to be. Potentially cheeky!


Of course I could turn this specific feature off, but that’s not really the point and you know it. People don’t change their defaults, and I mean the average user of the web, not you and not me1. Oh, and of course this doesn’t save me from the Netflix scenario; there’s not much that helps against that.

Indirect consequences: The Filter Bubble

Some will still say “Well, I don’t care about my personal data, blah, blah, it doesn’t affect me, blah, blah”. Well, it affects you in more ways than the obvious ones. In fact, it is already affecting you in a very subtle, yet dramatic way.

Some time in 2011, I saw this short TED talk where Eli Pariser introduced (at least to me) the term “The Filter Bubble”. Go have a look; it’ll take only 9 minutes of your time and is very enlightening:

If you don’t have 9 minutes, don’t worry, here’s the skinny of it:

Initially, that’s all well and good, but: what about things that are important, but that you just don’t check that often? Or what about challenging opinions? Will I just be told what I want to hear, instead of what is actually true, or at least unbiased, or simply from a different point of view?

In one of the examples in the talk, conservatives were being given only conservative links, and similarly with progressives. All this based on data such as search history, links clicked on Facebook, or what have you. At the end of the day, people would believe that the information bubble they live in is a reflection of the world as it is. However, it is only a reflection of the world as they would like it to be. Wishful thinking.


I don’t think there’s much of a solution, no. The logging of habits that takes place while you are not logged into a site can be avoided to an extent by browsing anonymously, in incognito mode. However, for the browsing that happens while you are logged into Facebook, GMail… there’s no easy way out.

The only thing you can do is keep all this in mind when you use the Internet. And by the way: is Facebook really the best way to spend your time? Just sayin’.

  1. Well, not even me, to be honest. For instance: I use MacOS X Lion at work and Snow Leopard at home, and on neither of them have I changed the default mouse scrolling. I have an inclination for using standard configurations for everything I use (with some exceptions). 




ActiveRecord: getting a backtrace of your SQL queries

The other day I found some strange database queries in my Rails log. I didn't know what was generating them, so I set out to track down their origin.

Thus I found out about ActiveSupport::Notifications. Rails uses it to broadcast events occurring inside the framework, and it is what generates the standard logs that we see in our applications. We can turn it to our advantage to solve the problem at hand.

We can tap into these notifications to find out what is generating those DB queries: we print the program's backtrace whenever we get notified of an SQL query. Like this:

# lib/query_tracer.rb
module QueryTracer
  # Borrowed some ANSI colour sequences from elsewhere
  CLEAR = ActiveSupport::LogSubscriber::CLEAR
  BOLD = ActiveSupport::LogSubscriber::BOLD

  def self.start!
    # Tap into the notifications framework: subscribe to sql.active_record events
    ActiveSupport::Notifications.subscribe('sql.active_record') do |*args|
      QueryTracer.publish(*args)
    end
  end

  # Notice the 5 arguments that a subscriber receives
  def self.publish(name, started, ended, id, payload)
    sql = payload[:sql]

    # Print the query and its backtrace to the logs
    ActiveRecord::Base.logger.debug "#{BOLD} TRACE: #{sql}#{CLEAR}"
    clean_trace.each do |line|
      ActiveRecord::Base.logger.debug "  #{line}"
    end
  end

  # Strip framework frames from the backtrace, leaving only app code
  def self.clean_trace
    Rails.backtrace_cleaner.clean(caller[2..-1])
  end
end

To make this work, we need to require the module and call QueryTracer.start! from somewhere appropriate. An initializer would be a good place, for example:

# config/initializers/query_tracer.rb
require 'query_tracer'

QueryTracer.start!

Now, for each SQL query generated by ActiveRecord, we will get output like the following in the logs:

TRACE: INSERT INTO `user_words` (`created_at`, `updated_at`, `user_id`, `word_id`) VALUES ('2011-11-27 18:33:53', '2011-11-27 18:33:53', 2, 2)
  app/models/user.rb:18:in `block in add_word'
  app/models/user.rb:12:in `tap'
  app/models/user.rb:12:in `add_word'
  app/controllers/words_controller.rb:6:in `create'

And that's it!
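Under the hood, ActiveSupport::Notifications is essentially a publish/subscribe mechanism with timing attached. As a rough sketch of the idea, here is a hypothetical minimal version in plain Ruby (names and API are illustrative only, not the real implementation):

```ruby
# A toy instrumentation bus mimicking the shape of
# ActiveSupport::Notifications (illustrative only)
class TinyNotifications
  def initialize
    # Map event name -> list of subscriber blocks
    @subscribers = Hash.new { |hash, key| hash[key] = [] }
  end

  # Register a block to run whenever `name` is instrumented
  def subscribe(name, &block)
    @subscribers[name] << block
  end

  # Run the instrumented work, then notify subscribers
  # with the event name, timing info, and a payload hash
  def instrument(name, payload = {})
    started = Time.now
    result = yield if block_given?
    ended = Time.now
    @subscribers[name].each { |s| s.call(name, started, ended, payload) }
    result
  end
end

bus = TinyNotifications.new
events = []
bus.subscribe('sql.active_record') do |name, started, ended, payload|
  events << payload[:sql]
end

bus.instrument('sql.active_record', sql: 'SELECT 1') { :done }
puts events.inspect  # => ["SELECT 1"]
```

The real thing adds a unique event ID (the fifth argument our subscriber receives above) and various subscription strategies, but the principle is the same: the framework instruments its own operations, and anybody can listen in.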


Amazon's Kindle customer service

Image by NotFromUtrecht

CC-BY-SA License

Two weeks ago, I lost my Kindle. I had it at my brother's place, but then I didn't have it on the tube. No clue what could have happened in between.

I was already thinking of buying a new one, when I got a call on my phone. It was an Amazon customer services rep. They had my Kindle. Somebody had returned it to them.

Oh, and they were shipping it back to me for free. It's back with me now.

Another related story. This is actually my second Kindle. The first one had a problem, half the screen was dead. I called Amazon at 6pm, and I got a replacement by 10am the next morning. For free again. They also arranged for a courier to collect the broken one any time that suited me.

Honestly, it makes me feel a bit bad that I have only bought one Kindle book from them since I first purchased the device. The rest have been books I downloaded from Project Gutenberg and such.

It's just that I hate DRM. Please publishers, sell more DRM-free books :-(

Everyone else: buy the Amazon Kindle. It's an awesome device.


Eclipse doesn't let me findViewById

Yesterday I did my first bit of Android development ever. I created a very simple program that updates some text on screen.

I had a problem though. On my Activity, I had the following piece of code:

public void updateText(View view) {
  TextView t = (TextView) findViewById(R.id.text);  // TextView id assumed; lost from the original
  t.setText("Updated!");
}

And the following bit of XML in my layout (inside a LinearLayout):

<Button
  android:id="@+id/button"
  android:layout_width="wrap_content"
  android:layout_height="wrap_content"
  android:text="@string/button"
  android:onClick="updateText" />

Eclipse insisted that this was illegal. Specifically, the error was id cannot be resolved or is not a field, even though the button did exist in the layout.

The problem was that I had not yet defined the string "button" in my strings.xml file:

<string name="button">Click me!</string>

Not sure of the exact internals, but I guess that Eclipse was trying to compile the resources, and it failed because the string was missing. When resource compilation fails, the R class is not regenerated, so every R.id reference in the activity breaks, even ones that have nothing to do with the missing string.