Xcode has a little-known feature that can save you time, frustration, and ultimately can even improve code quality. Templates are what Xcode uses to create new projects and files — when you create an “Objective-C Class,” Xcode is just populating the Objective-C class.xctemplate with your options. While Xcode is stingy about allowing us to create full-blown plug-ins, creating new templates is a snap.
An xctemplate file is actually (as so many things are in OS X) a folder containing the important bits. The most important part of a template is the TemplateInfo.plist file; it contains the settings and information that Xcode needs to make sense of the other files you provide. Typically you’ll also have files named something like ___FILEBASENAME___.m, which would produce a file named, for instance, MyClass.m once the template is run. There’s no end to the other substitutions you can use, but they are far less useful and I’ll leave it as an exercise for the reader to learn more about them.
Most of the keys in this file are self-explanatory: things like Description and Title. The most useful key, and the focus of today, is the Options key; it holds an array of dictionaries that allows you to request information from the user in order to tailor how Xcode uses the template when creating the new files.
<key>Options</key>
<array>
    <dict>
        <key>Description</key>
        <string>The name of the class to create</string>
        <key>Identifier</key>
        <string>productName</string>
        <key>Name</key>
        <string>Class</string>
        <key>NotPersisted</key>
        <true/>
        <key>Required</key>
        <true/>
        <key>Type</key>
        <string>text</string>
    </dict>
</array>
This is a very basic but very real Options array; it’s the one straight out of the included template for simultaneously creating a view & controller. It tells Xcode to present a screen to the user asking for the name of the new class.
Not exactly rocket science if we’re honest, but what that will do is give us access to a text variable in our template named productName (which automatically corresponds to ___FILEBASENAME___). You can go a lot deeper with both TemplateInfo and the options array, and I will in a future blog post, but this is where we stop for today.
You’re almost certainly going to have one of these files in your template, or at least something very similar. There isn’t much to it, Xcode will create a file with the name that was provided substituted in and whatever contents are in the template file will be in the new file (with appropriate substitutions). Here’s a sample file from the included template for view & controller:
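A sketch along those lines, to give you the shape of such a file (this is illustrative, not the verbatim shipped template):

```objc
// ___FILEBASENAME___ViewController.m
// Sketch of a view-controller template body; the nib/storyboard check
// mirrors the one described below.
#import "___FILEBASENAME___ViewController.h"
#import "___FILEBASENAME___.h"

@implementation ___FILEBASENAME___ViewController

- (void)loadView
{
    if ([self nibName] || [self storyboard]) {
        // Don't clobber a view that will be unpacked from a nib/storyboard.
        [super loadView];
    } else {
        self.view = [[___FILEBASENAME___ alloc] initWithFrame:[[UIScreen mainScreen] bounds]];
    }
}

@end
```

When the template runs, every occurrence of ___FILEBASENAME___ (including in the file name itself) is replaced with the productName the user typed in.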
That’s a lot of filebasename. What this does is provide a base implementation for a ViewController (for the view class ___FILEBASENAME___) with an overridden loadView to load the correct view subclass into the view controller. I’ve even included the check to ensure that we don’t clobber the view-to-be-loaded from a nib/storyboard unpacking. The rest of this template is provided for free use below.
Go, Become a Template Master
So I’ve only given you a preview here, and there is more on the way soon — much more! But if you want to do some learning right now, you can check out all of the iOS templates provided with Xcode for Projects and Files right on your own machine. You can find them at
Finally, a link to my file templates repository, which you are free to do with as you please. Contributions are of course welcome. Coming soon is a template that aims to simplify and standardize code reuse.
I’m going to take a break from my blog posts on metrics, and was thinking I’d focus more on process. An Iteration Planning Meeting (IPM) is core to an agile process and provides the opportunity for the product owner(s) to communicate the vision for the upcoming Iteration.
I’ve sat in on enough IPMs to realize that they’re all unique snowflakes, but here’s what the agenda for an ideal IPM looks like to me.
PM as Facilitator
The Product Owner/Manager should run an IPM. It’s his/her job to ensure that everyone involved with the product development process knows where we just came from and where we’re going. To that end, I like to start by “walking the wall” to look through any stories still in progress from the last sprint, along with any pertinent information about features just finished. This should be a quick process to jog everyone’s memories and ground them in the work that’s coming up.
Invest in Stories
Starting at the top of the backlog, step through each story; the INVEST mnemonic is useful in confirming that they’re shovel-ready. Talk through the user-facing value of a feature, and ensure that any comps, wires, assets, flows, data, etc. are attached to the story. Confirm that the requester marked on the story is the person who will accept it, and clarify any acceptance criteria. The goal here is to be crystal clear on when a feature is “done done.”
If a story doesn’t already have an estimate, each developer who could work on it should weigh in on the points to deliver. If the implementation is not clear, they should have time to talk through approaches. That said, their role is to nail down a level of complexity, not pin themselves to a specific technical implementation. Based on the estimate size or developer feedback, stories can be nominated for merging or splitting up. If that’s the case, capture the pieces of work as placeholders and update with details post-IPM. The worst thing you can do in an IPM is fail to respect everyone’s time during this kind of housekeeping.
Paying Down Debt
A healthy development process will incorporate refactors and tackle technical debt in concert with new user value. In addition to explicit tech debt chores, PMs and developers should look for opportunities to wrap this work into feature development. For example, if a story calls for adding a field to an existing form you should consider also cleaning up the logic that delivers form validation errors. [Giving canned examples of identifying technical debt is hard -- please forgive this one!]
Two Iterations Max
Tracker will chunk stories into iterations based on velocity. You should only step through stories until you’ve got two sprints’ worth of estimated work. I like to keep the visibility to two weeks to cover for quicker-than-intended delivery of features, and to limit the IPM to a reasonable amount of upcoming work. It’s taxing to keep a mental inventory of features in your head; sticking with the short-term future focuses everyone on the team around tangible new features.
Block and Promote
If a story is blocked, mark it as such and add a comment with what will unblock it. If a story won’t reasonably become unblocked during the sprint, I’ll move it to the Icebox to be sure the Backlog only reflects actionable work. Similarly, this is a good time to call for nominations for bugs/chores/features that should be prioritized out of the Icebox.
Short and Sweet
Once you hit an hour of IPM, developers zone out and business owners get antsy about the emails they’re missing (or worse, they whip out their phones). You can and should limit sidebar discussions to stay focused on one story at a time. When you have a large team, it’s especially important to play time cop. If you don’t have a healthy Backlog and find yourself with a lack of new work, end the meeting. It’s far better to hold an emergency IPM two days later than to suffer the pain of making up stories in real time!
These are not hard and fast rules, but chances are if you sit in one of my IPMs I’ll focus on these pieces. What’d I miss? What parts of an IPM are essential to a successful sprint?
Kill the Icebox and bump up the font size.
Use the Tracker Header bookmarklet to get some more real estate
Check team strength and set accordingly in Pivotal Tracker
Use screen sharing (i.e., join.me) for any remote participants
Turn off all screens unless they’re in support of the IPM (note taking, clarifying comments in Tracker)
You have a fresh machine. But when you log in, you see a link on your desktop to VMware Shared Folders. You drag it to your Trash. Next time you log in, you see it again: VMware Shared Folders. Again, you move it to the Trash. The third time it happens, you wonder, “What the heck is going on? How can I permanently delete VMware Shared Folders?”
The most likely cause is that your machine was cloned from an image that was created under a VMware Fusion instance that had VMware Tools installed.
The Easy Fix
The easy fix is to Uninstall VMware Tools. It can be found in /Library/Application Support/VMware Tools.
The Hard Fix
Log in to your workstation and run these commands; they should fix the problem:
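A sketch of the kind of cleanup involved, assuming a standard VMware Tools install (verify that each path exists on your machine, and what it contains, before deleting anything):

```shell
# Remove the VMware Tools support folder mentioned above.
sudo rm -rf "/Library/Application Support/VMware Tools"

# VMware Tools may also register launchd jobs; list any com.vmware.*
# entries and remove the ones actually present on your machine.
ls /Library/LaunchAgents /Library/LaunchDaemons | grep -i vmware
```

After a reboot, the VMware Shared Folders link should stay in the Trash for good.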
Just wanted to highlight a couple features that @brysgo and I contributed to these libraries using our beach time. They’ve since been accepted and merged.
First, backbone.js. We made a very simple change, one that hopefully makes a big difference to some: routes used to take only a hash, but they can now take a function as well! For an example of how to use this, please check out the test. Why would you want to do this? The primary motivation was that it allows for an inheritance-like scheme for defining routes (which is clearly illustrated in the test). For a full discussion, please see the thread. There is a very similar mechanism for defining a view’s events using either a hash or a function, so we simply made routes consistent with that.
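To sketch why that’s useful, here’s a self-contained illustration of the hash-or-function pattern; it mimics how a router could resolve its routes property rather than depending on Backbone itself (all names here are made up):

```javascript
// A "routes" property may be a plain hash or a function returning one,
// which is what the patch makes Backbone.Router accept.
function resolveRoutes(router) {
  // Mirrors underscore's _.result: call routes if it's a function.
  return typeof router.routes === "function" ? router.routes() : router.routes;
}

var BaseRouter = {
  routes: function () { return { "": "home" }; }
};

var ChildRouter = Object.assign({}, BaseRouter, {
  routes: function () {
    // Merge the parent's routes in: the inheritance-like scheme above.
    return Object.assign({ "about": "about" }, BaseRouter.routes.call(this));
  }
});

// resolveRoutes(ChildRouter) now yields both the child's route and the
// parent's inherited route.
```

A static hash can’t express this kind of inheritance, which is exactly why a function is sometimes the better fit.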
Next, underscore.js. We added a subtle new feature to the throttle() function: it now takes an optional boolean flag indicating whether the throttled function gets called immediately at the start of the interval or at the trailing end of the interval. If you’re familiar with how the debounce() function works, this flag has the same effect as its third parameter.
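A minimal sketch of the idea (illustrative, not underscore’s actual source):

```javascript
// When immediate is true, the wrapped function fires at the start of each
// interval; otherwise it fires once at the trailing end of the interval.
function throttle(func, wait, immediate) {
  var timeout = null;
  return function () {
    var context = this, args = arguments;
    if (immediate && !timeout) {
      // Leading edge: call now, then ignore calls until the interval ends.
      func.apply(context, args);
      timeout = setTimeout(function () { timeout = null; }, wait);
    } else if (!immediate && !timeout) {
      // Trailing edge: schedule a single call for the end of the interval.
      timeout = setTimeout(function () {
        timeout = null;
        func.apply(context, args);
      }, wait);
    }
  };
}

// With immediate = true, only the first call in a burst fires right away.
var hits = 0;
var track = throttle(function () { hits++; }, 100, true);
track(); track(); track(); // hits is now 1
```

The leading-edge behavior is handy for things like scroll handlers where you want instant feedback; the trailing-edge behavior suits cases like saving drafts, where the last state matters.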
Any thoughts on these features? Would love to hear from you especially if you end up using either of these!
Many Ruby gems are packaged with a Rails generator that generates a configuration file. Keeping these gem configurations up-to-date can be much easier if you take advantage of these generators during the upgrade process.
Installing a gem that has configuration
One example is simple_form, a gem that makes it easy to create and maintain forms. Another is devise, a popular gem for handling user session management. (Both of these gems are maintained by the excellent developers at Plataformatec.)
The simple_form README says to run this command during installation:

$ rails generate simple_form:install
The file config/initializers/simple_form.rb looks like this:
# Use this setup block to configure all options available in SimpleForm.
SimpleForm.setup do |config|
# Components used by the form builder to generate a complete input. You can remove
# any of them, change the order, or even add your own components to the stack.
# config.components = [ :placeholder, :label_input, :hint, :error ]
# Default tag used on hints.
# config.hint_tag = :span
# CSS class to add to all hint tags.
# config.hint_class = :hint
# CSS class used on errors.
# config.error_class = :error
and so on...
Also notice that you get a nice ERB template that extends the built-in Rails scaffold generator to automatically hook newly generated forms up to simple_form. And you get a default set of internationalization strings in config/locales.
Upgrading a gem that has configuration
The problem is that when you upgrade the simple_form gem from 1.5 to 2.0, the files that you have generated are stale. They might represent options that are no longer in the gem. Also, you might be missing out on new configurable features.
And the defaults might have changed, so any commented out lines showing the defaults will now be wrong. This recalls the programmer adage, “a comment is a lie waiting to happen.”
Well, there is an easy solution, of course. Just re-run the generator!
Assuming you’re using a source control system like Git, you can safely re-run the generator without breaking any of your code.
$ rails generate simple_form:install
SimpleForm 2 supports Twitter bootstrap. In case you want to generate bootstrap configuration, please re-run this generator passing --bootstrap as option.
Overwrite /Users/grant/code/blog_posts/rerun_generators/config/initializers/simple_form.rb? (enter "h" for help) [Ynaqdh]
Oops, looks like we have a conflict! Not to worry. I always just say yes to everything and move on. We will address the conflicts later.
Afterwards, all three of the generated files show up as having changes.
$ git status
# On branch master
# Changes not staged for commit:
# (use "git add <file>..." to update what will be committed)
# (use "git checkout -- <file>..." to discard changes in working directory)
# modified: config/initializers/simple_form.rb
# modified: config/locales/simple_form.en.yml
# modified: lib/templates/erb/scaffold/_form.html.erb
no changes added to commit (use "git add" and/or "git commit -a")
I can use command-line tools like git diff or GUIs like GitX to figure out what changed, and more important, make an educated decision about which changes I want to keep and which ones I want to change back to the old version.
Now let’s look at config/initializers/simple_form.rb. Check out this diff; it’s got quite a lot of changes!
Changes to the name of a setting
Let’s focus on one small change:
- # When false, do not use translations for labels, hints or placeholders.
- # config.translate = true
+ # When false, do not use translations for labels.
+ # config.translate_labels = true
This configuration option changed! I imagine that using the old version would give a deprecation warning when your application boots. Most people would simply change translate to translate_labels and move on, but in our version we pick up the change to the comment as well, helping future developers (and our future selves) figure out what’s going on more quickly.
Changes to settings that are not on by default
Here’s another interesting example. Note that the comment didn’t change. It turns out that the simple_form authors are doing something subtle here.
# You can define the class to use on all labels. Default is nil.
- # config.label_class = nil
+ config.label_class = 'control-label'
They really want new users that follow the installation instructions to set a config.label_class. This would make it easier for those users to style the form labels generated by simple_form while not affecting other labels that they might be manually generating. But the authors also don’t want to force that decision on other gem users who upgrade blindly and don’t even realize the configuration file exists.
So they have set a reasonable backwards-compatible default for the past, and have pushed out a good new opinionated default for new users. This is a balanced and thoughtful approach. And now that we are sitting in front of this diff output, we get to make a conscious decision about which direction to take.
For my final example, I’ll ask you to spend some more time looking over the diff, which appears below.
Notice that several settings were bundled together and moved into a config.wrappers block. This change is pretty drastic, and I imagine there is some sort of backward-compatibility for users who use the old settings.
But by embracing the new block style, we learn that we can create additional groups of settings. The :default argument to config.wrappers implies that we can create additional named settings groups and mix and match where we use them in the application.
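As a hedged sketch of that idea (the component lists here are illustrative; only the config.wrappers block style follows SimpleForm 2):

```ruby
# config/initializers/simple_form.rb (excerpt)
SimpleForm.setup do |config|
  # The generated :default group, roughly as the new template defines it.
  config.wrappers :default do |b|
    b.use :placeholder
    b.use :label_input
    b.use :hint,  tag: :span, class: :hint
    b.use :error, tag: :span, class: :error
  end

  # An additional named group we define ourselves and opt into per-form.
  config.wrappers :bare do |b|
    b.use :label_input
    b.use :error, tag: :span, class: :error
  end
end

# Then mix and match where the form is built:
# simple_form_for(@user, wrapper: :bare) { |f| ... }
```

The point is less the specific components than the discovery itself: the regenerated config taught us an API capability we might otherwise have missed.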
So by re-running the generator, we find ourselves making a lot of interesting choices about our application right at the time we upgrade a gem. This is great because we still have full context about why we chose to upgrade. Woe to the developer working on a bug several months from now who doesn’t understand why a setting is set in some strange old way.
Flexible plans executed via iterative development are at the core of Agile:
Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.
This is great for figuring out what to build, but all this flexibility can make planning and estimation hard. In practice, developers tend to prefer backlogs containing a few weeks worth of fine-grained stories following INVEST principles, followed by low-fidelity—and unestimated—chunks of epic-sized features. The thinking is that any stories farther out are unstable, and that it’s wasteful to spend time specifying them in detail. Agile planning tools like Pivotal Tracker are built with this perspective in mind, and are great for managing fine-grained details. But what happens when you need to get a more big-picture view of a project? Recently, a colleague said:
As development moves forward, features change. And those changes have implications on the stories later in the backlog or icebox. … Not sure if this the best way since it causes me to not want stories that extend beyond a few iterations.
Isn’t this the perfect distillation of the Agile Manifesto’s notion of “Responding to change over following a plan”? I find the problem isn’t changing stories—this is a natural part of Agile development. Rather, the difficulty is doing the work to 1) figure out which stories are stale, and 2) re-estimate stale stories, lest 3) clients make plans based on stale estimates and then get upset when we say “sorry, those estimates aren’t accurate any more.” Ideally, the estimates get revised downwards (there’s less uncertainty now that we know more about what’s going on, right?), although sometimes we’re discovering hidden complexity and the estimates go up. D’oh!
The Assumptions Label Technique
One technique I’ve used successfully on a few projects is what I like to call the Bullpucky Assumptions Label. I pull it out when the client demands—not unreasonably, I might add—that we estimate out the next 3-12 months of work so that they can get funding / approval from their boss / etc. I’ve seen project teams fight this for weeks (the PM getting more irate and frustrated the whole time), finally lose, and schedule a (miserable) half- or one- or two-day mini-inception during which they proceed to estimate every story for the next few quarters in fine-grained detail. Of course, they inevitably have to re-estimate half those stories in angry IPMs when it becomes clear the estimates are wrong, grumbling “we told you these estimates were bullpucky.”
Here’s the Assumptions Label technique:
Schedule a 2-3 hour Assumptions Meeting with the PM and 1 or 2 devs. (You don’t need the whole team; these aren’t real estimates.)
Estimate “stories” (they’re really closer to epics) at a multiples-of-8-point level of granularity. Pretend we’ve built the basic shopping-cart and inventory functionality of Hamazon (“The Internet’s Favorite Purveyors of Pork since 2009!”), and now the client wants to fully copy Amazon’s feature set. The backlog might contain rough estimates like “Reviews and ratings? Mmmm…24 points. Recommendation engine? 40 points.”
Rough out the desired feature set. You’re basically estimating at a pair-week level of granularity, so multiply pair-weeks by (velocity / team strength) and you’ve got your pointed estimate.
Write titles in all caps (they’re easier to see that way). Don’t bother writing a description for the story. It’s ok to use multiple 8-pointers to get to the number you need.
Throw an “assumptions” label on all these stories; they’re easier to wrangle (and it never hurts to drive the point home).
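Concretely, the arithmetic in the steps above works out like this (all numbers are invented for illustration):

```ruby
# Back-of-envelope math for turning a pair-week guess into points,
# following the pair-weeks * (velocity / team strength) formula above.
pair_weeks    = 6     # rough size of the epic
velocity      = 8     # points delivered per week at full strength
team_strength = 1.0   # fraction of the team actually available

points = pair_weeks * (velocity / team_strength)
puts points  # 48.0
```

Forty-eight points rounds naturally to six 8-point assumption stories, which keeps the multiples-of-8 granularity honest.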
The Assumptions Label technique in action. Use it to re-prioritize coarse-grained blocks of epics, and watch estimated completion dates adjust.
Now your PM can give a rough estimate to their boss or their boss’s boss, re-prioritize at a rough level of resolution, and cut scope or add pairs. But it remains clear to everyone that these should never be mistaken for actual, deliverable stories. In fact, these “assumption” stories become a decent way to see what’s next when story-writing. IPM or pre-IPM often becomes an exercise in picking the top assumption off the top of the file and fleshing it out into real stories. By reducing the difficulty in seeing what’s a real story and what’s a rough estimate for planning’s sake, everyone gets better visibility into the project. Pivots can set better expectations for their PMs, PMs can set proper expectations for their boss, and trust is preserved on the team.
Using the SASS @import method, you don’t need to re-@import at all. All files @import’ed lower in the order already have access to all the variables and mixins defined in the files loaded above them in application.sass.
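As a sketch of that structure (file and variable names are illustrative):

```scss
// application.scss: import order matters. Partials imported later can use
// everything defined earlier, with no re-@import needed.
@import "vendor/mixins";   // mixins/placeholders only: emits no CSS itself
@import "variables";       // $brand-color, $z-index-modal, etc.
@import "layout";          // may use $brand-color directly
@import "modules/buttons"; // ditto, and no duplicate vendor CSS is emitted
```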
Some may not call this a “problem,” but aside from being annoying to do every time, there is also a more serious issue: any file that contains renderable CSS (anything excluding variables, placeholders, and mixin definitions) will be rendered every time it is @import’ed. This can quickly grow your final CSS output (I’ve seen minified CSS > 1MB because of this error) and create a mass of duplicate selectors, putting undue load on the browser.
Using the @import global namespace creates a Whorfian effect where the developers on the team tend to define and reuse their variables where they should (in the variables files), and not in more specific files. For example, z-indexes can become a nightmare if not defined in a global variables file.
Compilation will speed up a bit in development, because it won’t have to re-compile all the vendor mixins every time each partial @import’s it.
SASS @import syntax is easier to read than the Sprockets CSS comments syntax, IMO.