We’re starting a Tracker Users Group in New York, and the first meetup is on Jun 30, at 6:30pm, at the Pivotal Labs office on Chambers St. Click the link below to become a member of the group and RSVP. We hope to see you there!
Russell Edens has a great take on why Erector is interesting, complete with code examples:
With erector [views] are first-class plain old Ruby objects. Why is this good? It gives you all the tools of inheritance and mixins for your views. That is cool, especially for an application with multiple views of the same underlying models. You can refactor your views into base classes that derive and render the same data in different ways. This is object-oriented design for views. Nice.
I’ve seen object oriented view code in other languages and it leads to some very powerful re-use that all OO programmers can understand. The most ambitious of these attempts was by an HR company …[that] created their own markup language that was object oriented. The nature of HR data is that it has very complicated rules regarding who can see what data and when. The OO design of the language allowed that to be abstracted to the base classes and a functional programmer simply focused on the problem at hand. They took it further, as all commercial enterprise applications do, and they allowed the customer to define new models and views. Those views were very easy to write with this advanced data access logic abstracted out. Their customers loved it. They wrote very advanced business applications on top of this abstraction.
Views as simple classes, methods, and objects in Ruby – perfect!
Erector Hello World:
class Hello < Erector::Widget
  def content
    html do
      head do
        title "Hello"
      end
      body do
        text "Hello, "
        b "world!"
      end
    end
  end
end
For more see the Erector user guide.
A Scrum team found they were doing too much context-switching, so they applied a dash of Kanban. It’s an interesting example of Kanban principles in action.
We used the term Feature Flow to describe the goal of the team: to let features flow through the team without interruptions. Any feature that is in a waiting state, or is simply taking more than a few days, is analysed and moved to done as quickly as possible by scrambling more team members onto it. When we encounter features getting stuck, we don’t pick up more work; we try to find the root cause of the stickiness and solve that. We increased the quality and capabilities of our build environment a few times for that very reason: to prevent future blockages in our flow of features.
When we introduced the work-in-progress limit, we also temporarily stopped doing planning meetings, as our first target was getting the WIP down to 8. The interesting side effect was that we worked for a few weeks without needing a planning session. So we dropped the fixed-date planning session and replaced it with an ad-hoc one whenever the sprint backlog was drying up. From our coarsely estimated product backlog, our product owner introduced a couple of days’ worth of features each planning session. The great thing was that priorities could change at the last moment, as long as the team hadn’t started working on a feature. Whereas the Sprint planning meetings were always quite strenuous, the just-in-time one-hour planning sessions kept the team’s energy at a constant.
A laundry list of stuff I’ve come across / been pointed to lately. What are your favorites (i.e. comments, please)?
Book: Scalable Internet Architectures by Theo Schlossnagle
Book: Release It! by Michael Nygard
Book: The Art of Capacity Planning by John Allspaw
Conference: Velocity 2009 in SJ – it’s only a month away.
Presentation: Operational Efficiency Hacks by John Allspaw
John Allspaw’s blog, “Kitchen Soap – Thoughts on Capacity Planning and Web Operations”…(excerpt from latest post: “I can’t tell you how ripped I get when people say things like this: ‘cloud computing means getting rid of ops’ “)
WebOps Visualizations Flickr Group
APIs / tools:
Chef. This is step one, or as Allspaw puts it, “if there’s only one thing you do, automated configuration and deployment management should be it.” We run chef-solo in our cap deploy. Don’t miss Cooking with Chef 101.
God for process management.
Xray for ruby process inspection.
Elif is Perl File::ReadBackwards ported to ruby.
Book: Think Unix
Ask for Help
None from the pivots…
“Any good tips on testing the creation of symlinks from ruby?”
Shell out, run ls -l, then parse the output to see if the link’s target is correct.
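An alternative to shelling out: Ruby’s standard library already exposes symlink inspection via File.symlink? and File.readlink. A minimal sketch of a test along those lines (the file names here are made up for illustration):

```ruby
require 'fileutils'
require 'tmpdir'

# Sanity-check symlink creation without parsing `ls -l` output:
# File.symlink? tells us it is a link, File.readlink tells us where it points.
Dir.mktmpdir do |dir|
  target = File.join(dir, 'target.txt')
  link   = File.join(dir, 'link.txt')

  FileUtils.touch(target)
  File.symlink(target, link)

  raise 'not a symlink' unless File.symlink?(link)
  raise 'wrong target'  unless File.readlink(link) == target
end
```

The same two calls drop straight into a Test::Unit or RSpec assertion, and they keep working even when the test box’s ls formats output differently.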
- RubyMine‘s changes -> repository pane only seems to show the current user’s changes (SVN) instead of all changes for the repository. Showing the history of a file still shows all users’ changes.
- The JSON gem overrides Active Support’s to_json method. So if you are using the built-in Rails JSON support and you install the gem, be aware that there are differences between them.
- Word is this is better on Edge, and the gem no longer conflicts with the built-in version.
- We have started moving some of our servers to Engine Yard’s cloud solution, and while it is still in the early stages, it is looking like a promising solution.
Ask for Help
“I have a dependent destroy that is taking 5sec+. What can I do to speed it up?”
- Try :dependent => :delete_all on the leaves of the chain if you don’t have any after_destroy hooks there that need running.
- Try a cascading delete in your database (again, as long as you don’t have any after_destroy hooks to worry about).
- Try marking the element at the top of the tree as deleted (using acts_as_paranoid or similar), then run an offline process to find those records and destroy them and their dependent objects.
- Don’t worry about orphaned records; manually clean them up every once in a while (not recommended, but the least development effort).
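A sketch of the first suggestion, using hypothetical Post/Comment models (the names are illustrative, not from the original question):

```ruby
class Post < ActiveRecord::Base
  # :dependent => :destroy instantiates every comment and runs its
  # callbacks -- one DELETE per row, which is where the 5sec+ goes.
  # :dependent => :delete_all issues a single DELETE for all comments,
  # but skips after_destroy hooks, so it is only safe on leaves of the
  # chain that have no such hooks.
  has_many :comments, :dependent => :delete_all
end

class Comment < ActiveRecord::Base
  belongs_to :post
end
```

If Comment itself had dependent children, :delete_all would orphan them, which is why the advice above limits it to the leaves.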
Ask for Help
“What are some good ways to merge large business location data sets?”
There was a bunch of input, including the following:
- Create a score of how close each match is.
- Good admin merge tools are worth the effort to create.
- Normalize the data prior to the merge (e.g. pass the addresses through the USPS API to turn [Av, Ave, Avenue] into Ave).
- Humans do this best: outsource it or use Mechanical Turk.
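A toy version of the normalization idea above (the USPS API does this properly; the abbreviation table here is a tiny illustrative stand-in, not a complete one):

```ruby
# Map common street-type variants onto one canonical abbreviation so that
# "123 Fifth Avenue" and "123 Fifth Ave" compare equal before matching.
ABBREVIATIONS = {
  /\b(av|ave|avenue)\b/i => 'Ave',
  /\b(st|street)\b/i     => 'St',
  /\b(rd|road)\b/i       => 'Rd'
}

def normalize(address)
  ABBREVIATIONS.inject(address.strip.squeeze(' ')) do |addr, (pattern, abbrev)|
    addr.gsub(pattern, abbrev)
  end
end

normalize('123 Fifth Avenue')  # => "123 Fifth Ave"
normalize('9 W 57th street')   # => "9 W 57th St"
```

Once both sides of the merge are normalized, a closeness score can be as simple as counting matching tokens between the two normalized strings.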
The after_commit plugin allows you to hook events to run after the transaction commits. This is really useful when kicking off threads that expect to have access to the data in the database. Note: using after_save can cause a race condition if the other thread attempts to access the data before the original thread has committed the transaction.
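A sketch of the difference, with a hypothetical Order model and NotificationWorker (both names invented for illustration):

```ruby
class Order < ActiveRecord::Base
  # after_save fires while the transaction is still open, so a worker that
  # wakes up immediately may not see the row yet.
  # after_commit (from the after_commit plugin) fires once the data is
  # actually visible to other connections.
  after_commit :enqueue_notification

  private

  def enqueue_notification
    NotificationWorker.enqueue(id)  # hypothetical background kickoff
  end
end
```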
If you are storing a marshaled object in the database, make that field a blob type: it is smaller to store, and if you leave it as text or varchar you can corrupt the binary data you are storing in there. If you don’t have a choice about field types, you should at least base64-encode the marshaled data before storing it.
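The base64 fallback is a one-liner in each direction with Ruby’s standard library:

```ruby
require 'base64'

record = { 'name' => 'Pivotal', 'visits' => 3 }

# Marshal.dump produces binary data; base64 turns it into plain ASCII that
# survives a text/varchar column (and any charset conversion in between).
encoded = Base64.encode64(Marshal.dump(record))
decoded = Marshal.load(Base64.decode64(encoded))

decoded == record  # => true
```

The cost is roughly a 33% size increase over the raw blob, which is the trade for safety in a text column.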
RE: “NewRelic Side Effects?” from 05/19/2009 Standup
- It seems that NewRelic was not the cause of the problem but exacerbated it by holding the transaction open long enough to create a race condition that still shows up when the system is put under enough load. To fix our problem we moved the trigger that launches the background process from an after_save to an after_commit (see the after_commit plugin). We also re-added NewRelic.
Ask for Help
“Who is getting svn: Server sent unexpected return value (403 Forbidden) in response to PROPFIND in SVN?”
Lots of people! This is a known bug when an SVN client has to traverse the repository as more than one user. Check out subversion: Issue 3242.
It seems to work with Mongrel, but not Passenger.
“Who is getting svn: Server sent unexpected return value (403 Forbidden) in response to PROPFIND in SVN?”
NewRelic Side Effects? We have some background processes that get kicked off from after_save hooks on some of our ActiveRecord models. These processes use Active Resource to post updates back to those records. Unfortunately, the transaction that initially creates the record doesn’t seem to have completed before the first post-backs attempt to update it (a race condition). After trying a bunch of things, we removed the NewRelic plugin on a hunch and all of a sudden we had no more race conditions. Is the NewRelic plugin holding DB transactions open too long?
MySQL index sample size: We discovered that MySQL’s ANALYZE TABLE uses a fixed sample size from the index regardless of the size of the table, so it can produce really incorrect cardinality estimates, which can lead to it choosing the wrong index. Also, ANALYZE TABLE gets run often (opening new connections, first read of a table) and, due to the problem mentioned above, can cause the system to switch from the right index to the wrong index each time you look at it.
Ask for Help
“Does time spent in the C code show up in the ruby-prof output?”
Yes, it does — there are two stats for method time: “self” (time in the method minus time spent in called children) and “total” (total time in the method, including children). The second should include time spent in the C code.
It was also mentioned that it is much more useful to look at the ruby-prof output as HTML, or even better with KCachegrind (though it can be a bit of a pain to install; see KCachegrind OS X 10.5.6).
First, thanks to everyone who came – especially those who laughed at all the right spots. If I didn’t get to your question, I’m here through Thursday afternoon.
There were a couple of questions during the talk and lots after the talk about how to deal with remote pairs. Since it’s RailsConf and most folks are on MacOS, ScreenSharing.app came up.
Chad Woolley, King Remote Pivot, wrote up a great detailed discussion of his setup back in December. It should have answers to your tool & equipment questions.
The key element is the Full Screen mode in ScreenSharing.app. In Full Screen mode the remote Mac just becomes a terminal on the host machine – which means keystrokes like CMD-TAB, CMD-Space and a few others go over the wire instead of to your local box.
But Apple killed this feature as of 10.5.5. You can get it back, though! Follow the instructions in this post at MacWorld – use the second, more complex method – to hack on your ScreenSharing.app bundle and restore the awesomeness.
Once you’ve got the new app, replace the current ScreenSharing.app so you are always awesome:
sudo mv "/System/Library/CoreServices/Screen Sharing.app" "/System/Library/CoreServices/Lame Screen Sharing.app"
sudo cp -R "Awesome Screen Sharing.app" "/System/Library/CoreServices/Screen Sharing.app"
Then, run these two commands from Terminal:
defaults write com.apple.ScreenSharing ShowBonjourBrowser_Debug 1
defaults write com.apple.ScreenSharing 'NSToolbar Configuration ControlToolbar' -dict-add 'TB Item Identifiers' '(Scale,Control,Share,Curtain,Capture,FullScreen,GetClipboard,SendClipboard,Quality)'
We keep a copy of this app around which we renamed to AwesomeScreenSharing.app, so we don’t lose the feature on subsequent Software Updates.
One last thing: Quicksilver doesn’t index into the /System directory by default, but you can change that as well:
- Go to QuickSilver preferences
- Go to Catalog (top right)
- Go to Custom (bottom left)
- Hit the plus (system bar) to add a new location
- Pick File & Folder scanner
- Navigate to /System/Library/CoreServices
Now you can launch ScreenSharing via QS. Enjoy!
- jQuery AJAX POST with no data: If you use jQuery to POST with no data element, it will work fine in development mode (with Mongrel), but Nginx will respond with an HTTP 411 “Length Required” response code, as jQuery is not adding a Content-Length header. The solution is to add an empty data hash to the call. Anybody have a better solution? Post-standup research: several people have encountered this issue and spoken of filing it as a jQuery bug, but I could find no follow-up that it got filed or fixed.
Git merge strategy “ours” is destructive: When doing a git merge that causes a conflict, picking “ours” as the merge strategy causes git to choose our changes for ALL files, not just those with a conflict, i.e. it ignores the changes in the other branch. From git-merge(1): “ours”: This resolves any number of heads, but the result of the merge is always the current branch head. It is meant to be used to supersede old development history of side branches.
New Relic RPM GEM problem: We encountered a case where using the GEM version of RPM in a project caused no telemetry to get logged. This may be an interaction with Desert or Geminstaller (or the GEM may be broken but we consider that unlikely). After switching from the GEM to the plugin, everything worked fine.
Collectl is a system-performance data-gathering tool. From its website: “Unlike most monitoring tools that either focus on a small set of statistics, format their output in only one way, run either interactively or as a daemon but not both, collectl tries to do it all.” Kinda like sar+top, and user-friendly too.
MySQL Analyze Table – no substitute for Explain: From the MySQL 5.0 docs: “ANALYZE TABLE analyzes and stores the key distribution for a table. MySQL uses the stored key distribution to decide the order in which tables should be joined when you perform a join on something other than a constant. In addition, key distributions can be used when deciding which indexes to use for a specific table within a query.” Sounds great, huh? However, we found a case here where using FORCE INDEX to get a SELECT statement to use the right index yielded a two-order-of-magnitude speed increase. Moral: there is no substitute for EXPLAIN and a brain.
On April 29, 2009 Pivotal Labs hosted the inaugural San Francisco Pivotal Tracker User’s Group. It was a great success! As an avid Pivotal Tracker user (and sometimes developer) for over 3 years I am very interested in making Tracker a better product and teaching others how to use Tracker to improve their organization.
Here are a few thoughts I took away from the meeting, and a few tips and tricks.
I Created a Project… Now What?
Once someone has created their first Project in Tracker, we don’t give you much guidance on what to do next. People need help getting past the “blank page problem:” faced with an empty project, it’s daunting to get started. Did you know you can create a demo project that’s filled with example Stories?
First, log in and click Create a Project
Next, click create a demo project
Check out the results!
What are These Different Things?
What is the Difference between a Story, Bug, Chore, and Release? When should I use one versus the other? Good questions! To get started, check out the Stories section of the FAQ.
Why Can’t I Move All My Stuff into the Current Panel?
This is one of our top questions. The answer is this: Tracker doesn’t think you can get it done. Or, more specifically, history has shown that your team completes the number of points per iteration indicated by your Velocity. If history shows that you get 7 points done per iteration, Tracker will move the top 7 points’ worth of stories into the Current panel. Again, more details are available in the Velocity and Iterations section of the FAQ.
This might help: to minimize this confusion, you can stack your Current panel on top of your Backlog (future) panel, giving you one big list. Choose View => Include Current in Backlog
What’s in a Name?
At Pivotal, which is a large consulting practice first and the developer of Pivotal Tracker second, we have always referred to the tool as “Tracker.” But at the user’s group we continually heard people refer to it as “Pivotal.” Interesting!
That’s it for now, but look for more Tracker (or Pivotal) tips and tricks in the future!
- Perform_caching is not respected by Rails.cache methods.
config.action_controller.perform_caching does not appear to affect the Rails.cache methods.
Rails 2.1 introduced a new custom caching mechanism:
result = Rails.cache.fetch('key') do
  # create and return the item for this key
end
It would be really nice if, for testing, you could set a configuration variable that would force a cache miss every time and always execute the block associated with the fetch. However it appears that, understandably, action_controller.perform_caching only affects the kinds of caching implemented by ActionController (page, action, fragment) and not this mechanism, which is implemented by ActiveSupport::Cache::Store. Looking at the code, it appears there is no way to disable this caching other than supplying :force => true to the fetch call to force a cache miss.
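To make the :force semantics concrete, here is a toy cache (not the Rails implementation) whose fetch behaves the same way: :force => true always treats the lookup as a miss and re-runs the block.

```ruby
# Minimal illustration of fetch-with-force semantics.
class ToyCache
  def initialize
    @store = {}
  end

  def fetch(key, options = {})
    if !options[:force] && @store.key?(key)
      @store[key]          # hit: block never runs
    else
      @store[key] = yield  # miss (or forced miss): run block, store result
    end
  end
end

cache = ToyCache.new
cache.fetch('key') { 'first' }                   # => "first"  (miss)
cache.fetch('key') { 'second' }                  # => "first"  (hit, block skipped)
cache.fetch('key', :force => true) { 'second' }  # => "second" (forced miss)
```

In a test suite, passing :force => true at the call site is then the only lever for guaranteeing the block executes, which matches the conclusion above.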
- W3C refusing to serve DTDs to ‘misbehaving’ clients.
One of our applications that has not been deployed for a while is starting to see issues from IE users. After digging, we found this: W3C’s Excessive DTD Traffic, and this: w3.org DTD/xhtml1-strict.dtd blocks Windows IE users?. To summarize, w3.org got tired of getting slammed with non-cached DTD requests, and cut off misbehaving user agents (ones that do not cache the DTD). IE is one of these.
We have tried several things so far, none of which work:
1. Forcing the XHTML DTD to be a SYSTEM DTD and loading the DTD files off our local server. This didn’t work – IE served the pages as raw, unrendered HTML (it must have thought they were XML?).
2. Using a PUBLIC doctype pointing to the DTD served on our server. Alas, this had the same problem as the SYSTEM doctype in #1 – raw, unrendered HTML.
3. Not using XHTML (e.g. have <html> as the root tag). This didn’t work either (we haven’t looked into why yet).
Ideas are welcome.
- Range#min, Range#max: don’t use them; use Range#first and Range#last. min and max are not overridden by Range, so they fall back to Enumerable, which converts the Range into an Array first!
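A quick demonstration of the endpoint methods. (Note: newer Ruby interpreters special-case min/max on integer ranges, so the results always match and the 2009-era performance trap only bites on 1.8-era Rubies; first/last remain the safe choice everywhere.)

```ruby
r = 1..5_000_000

# first and last read the endpoints directly -- no iteration:
r.first  # => 1
r.last   # => 5000000

# On 1.8-era Rubies, min and max fell back to Enumerable, materializing
# the whole range before answering -- same values, wildly different cost:
r.min    # => 1
r.max    # => 5000000
```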