To address some of the problems we experienced recently, we’re moving the Pivotal Tracker cache servers to dedicated hardware tomorrow (Wednesday) night, at 8pm Pacific Daylight Time. This move may require up to 15 minutes of downtime, but we expect improved stability as a result of the change. Please accept our apologies in advance for the inconvenience.
We are happy to be a sponsor of RockHealth’s Meetup of the Minds, and we invite you to join us for a social mixer that brings together the best minds in technology and health care for a meetup over chips, salsa, and margaritas.
Monday, May 9, 2011
5:00 – 8:00 pm
130 Townsend Street
San Francisco, CA
Please RSVP via http://meetupoftheminds.eventbrite.com/
We look forward to seeing you.
Most continue to use Paperclip, though a few projects have successfully used Carrierwave, which apparently has better MongoDB support.
I tried to set up Ruby and Rails, installed the latest mysql gem, and got warning messages from the mysql gem. Others have experienced this too and offered to help. The main answer seemed to be “the warnings are lies, it will work.”
- Check out Pry. It’s an irb replacement that includes tab completion, plus cd and ls commands for navigating scopes.
Ask for Help
“Jonathan Berger asked if anyone has much experience with ShowOff.”
Davis was going to speak to Jonathan offline.
- Joe pointed out the syntax for pushing a local branch to a remote branch with a different name:
git push <remote> <local-branch>:<remote-branch>
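For reference, a concrete example of that refspec form (the remote and branch names here are made up):

```shell
# Push the local branch "feature" to a remote branch named "feature-review"
# on the remote called "origin".
git push origin feature:feature-review
```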
This is also a great way to automatically track your guests’ replies. Guests can easily RSVP to the event by clicking “yes”, “no”, or “maybe”, and hosts have a clear head count of who’s going by checking the RSVP summary. It’s as simple as that.
We’ve had a number of brief outages and/or periods of degraded performance in the last few weeks. I’d like to shed some light on what caused these incidents and what we’re doing to prevent them in the future.
As you may know, one of Pivotal Tracker’s core features is that your view of your project is always up to date; there’s no need to refresh your browser page. If one member of the project presses the start button on a story, for example, everyone else sees the change immediately. This is an important aspect of keeping the entire team focused and on the same page.
Under the covers, this is accomplished via polling: the browser sends a request every few seconds, asking whether there is anything new. Given the large number of users out there, this translates to approximately 1,000 requests per second.
Most of these requests don’t end up hitting any of our application servers; they go straight to a very fast in-memory cache (in the form of multiple memcached processes). Only requests that involve a “stale” response (meaning there are changes to return to the client) make their way to an application server. These represent a very small fraction of all requests.
This architecture works well, but the in-memory cache is a critical component, and if it goes down or has any problems, the 1000 requests per second end up hitting the app servers, which are not designed to handle that kind of load. The requests end up backing up, and it takes a few minutes for the system to recover even if the caches are brought back up quickly.
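As a rough sketch of the idea (hypothetical names and an in-process hash standing in for memcached — this is not Tracker’s actual code), a version check in the cache lets most polls return cheaply:

```ruby
# A toy model of version-based polling. The cache stores a monotonically
# increasing version per project; a poll only needs an application server
# when the client's last-seen version is stale.

CACHE = {}  # stand-in for the memcached layer

def record_change(project_id)
  CACHE[project_id] = CACHE.fetch(project_id, 0) + 1
end

# Returns [:not_modified] for up-to-date clients, or [:changes, version]
# when the client is stale (only then would an app server compute deltas).
def poll(project_id, client_version)
  current = CACHE.fetch(project_id, 0)
  client_version == current ? [:not_modified] : [:changes, current]
end

record_change(42)
poll(42, 0)  # => [:changes, 1]
poll(42, 1)  # => [:not_modified]
```

The expensive work happens only on the `[:changes, ...]` path, which is why losing the cache tier suddenly exposes the app servers to the full request rate.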
Some of the recent brief outages in the last few weeks involved the cache processes hitting a few different configuration-specified limits (related to connections and the virtualization layer). We also saw a similar issue with our load balancers, which route all of the traffic to the right places in the cluster.
In all cases, the problem was identified and resolved quickly, and Tracker was brought back to normal.
To reduce the likelihood of similar issues in the future, we’ve added more monitoring, and we’re making some changes to the environment, including additional layers of redundancy for the cache, and moving the cache processes from virtual hosts to dedicated bare metal machines. We’re also considering similar changes to other parts of the cluster, but taking it one step at a time to avoid introducing too many changes all at once.
We’re also considering moving away from the polling architecture, which requires a continuous high traffic rate, to a push approach, via HTML5 WebSockets. This would reduce the number of requests dramatically, but the HTML5 WebSockets protocol is still being finalized, and only some browsers support it natively (Chrome 4 and Safari 5 currently). One option that we’re thinking about is a hybrid approach – WebSockets push for browsers that support it, falling back to polling when push is not supported.
We apologize if you were inconvenienced by any of these brief outages – we certainly understand what it means to lose access to Tracker, even momentarily.
- New release of Sass (3.1). It adds the ability to create custom functions, introduces an @each directive, and improves its colour functions.
This has been a common question recently, so I’d like to take a few minutes to explain how the 60 day free trial works in Pivotal Tracker, clarify what your options are at the end of the trial, and revisit what public projects are.
When you first sign up for Tracker, we automatically make an account for you, which allows you to create projects and invite collaborators to them. As a Tracker user, you can own and share multiple accounts, and the first account that’s created when you sign up is put on a 60 day free trial. This free trial allows you to create as many projects as you’d like, and invite an unlimited number of collaborators to them.
As you get close to the end of your account’s free trial (or July 19, for those accounts that were created prior to Feb 19), you will receive a reminder email, and see a one-time popup when you sign in to Tracker.
At the end of the free period, your account automatically transitions to the free plan, which allows up to 5 private projects without collaborators (project members with read/write access), and 200MB of file attachments. If your account is under the free plan limits, then nothing changes – you can simply continue using your projects as before.
If you’re over the free plan limits, projects in the account will become read-only. All members of the projects in your account will continue to have access to their projects, indefinitely, and the projects can be exported to CSV files at any time.
Upgrading your account to a paid plan, based on the number of projects, number of collaborators, and/or file storage in use, will restore full access to all projects in the account immediately.
Archiving some of your projects, or removing collaborators and/or file attachments can also restore project read/write access.
You can also make your projects public. Public projects can be seen by anyone on the internet, including stories, comments, file attachments, and the names of all project members. Public projects, and activity in them, appear in the public projects directory and feed:
Public projects are completely free – they do not count towards any project or collaborator limits, regardless of which plan your account is on.
All projects are private by default, and only the users who have been invited by a project owner have access to them. A project owner can make a project public, in project settings:
Projects are never made public automatically – the only way a project can ever become publicly accessible is by the project owner explicitly making it public, on the project settings page.
If you have any questions, please give us a shout by email at firstname.lastname@example.org.
There is a new kid on the block when it comes to file attachments for Rails, and it’s called Carrierwave.
Carrierwave gives you the ability to easily store attachments on S3 using another great gem called Fog.
Uploading files to S3 is great for many reasons but it can slow down your testing environment because it takes a while to send stuff up to S3. The Carrierwave documentation tells you how to switch the storage location over to file storage during testing but that wasn’t enough for me. I wanted to use the same storage mechanism for dev, test and production so I sought out a way to do so.
I had heard about Fog’s ability to mock itself to pretend that it was interacting with S3 so I decided to see if I could get it working with Carrierwave. This allowed me to use the same storage mechanism in test mode without slowing my tests down waiting for images to go to S3.
After a bunch of tinkering and a message on the Fog mailing list (thanks for the quick response, Wesley), this is what I came up with:
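A sketch of that setup, with placeholder credentials, a placeholder bucket name, and an assumed file path:

```ruby
# spec/support/fog.rb (file name assumed; credentials and bucket are fakes)
require 'fog'
require 'carrierwave'

Fog.mock!  # every Fog call is served from an in-memory mock from here on

CarrierWave.configure do |config|
  config.storage         = :fog
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => 'fake_key',
    :aws_secret_access_key => 'fake_secret'
  }
  config.fog_directory = 'test-bucket'
end

# The mocked Fog returns a 404 unless the bucket already exists,
# so create it before CarrierWave tries to store anything in it.
connection = Fog::Storage.new(CarrierWave::Uploader::Base.fog_credentials)
connection.directories.create(:key => 'test-bucket')
```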
The key is that you have to tell the mocked Fog that an S3 bucket exists before it will let Carrierwave put an image there. I wasn’t doing this at first and Carrierwave kept showing me a 404 error from Fog.
Drop this in a file in your spec/support and/or features/support directories, and your tests will think they’re sending things to S3 without actually sending them there.
Now I don’t have to mess around with a bunch of test images lying around my hard drive, and I can make sure I’m using the same storage mechanism across all environments without slowing my tests down.
Your mileage may vary but I’d love to hear how this works for people and if there are any limitations. I haven’t found any yet.
Check out my guest post on the Engine Yard Blog for updated details:
“What’s the best admin gem out there?”
“Amazon Web Services is down”
This is one of the longest outages we’ve seen from Amazon, and thus Heroku. So many Ruby applications rely on Heroku that the community is certainly affected. Updates on the damage are on Heroku and Amazon. You can see how far-reaching the problems on EC2 are at ec2disabled.com.
“Obama is in town!”
There have been rumors he is visiting a few notable software companies in the area, hopefully he’ll stop by. Watch out for blocked streets for the motorcade.
“How to diagnose a prolonged environment load?”
Boulder folks in town are experiencing a problem on a project where the Rails environment takes ~5 minutes to boot. They’ve timed the load in their code, and the delay seems to come after that. What’s a good way to approach this issue?
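One generic way to narrow this down (a sketch, not something from the discussion) is to wrap Kernel#require and log anything slow, which quickly surfaces an expensive dependency:

```ruby
# Log any file that takes longer than a threshold to require. Blunt, but
# effective for finding which gem or initializer dominates boot time.
# (Module#prepend needs Ruby >= 2.0; on older Rubies the same idea works
# with alias_method.)
THRESHOLD = 0.5  # seconds; tune to taste

module RequireTimer
  def require(name)
    start   = Time.now
    result  = super
    elapsed = Time.now - start
    warn format('require %-40s %.2fs', name, elapsed) if elapsed > THRESHOLD
    result
  end
end

Object.prepend(RequireTimer)

require 'json'  # any require made after this point is now timed
```

Loading this at the very top of boot.rb (or config/application.rb) times everything that follows it.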
JPB passes along http://bit.ly/solarized-colors, a color scheme with some nice visual and ergonomic properties.
I’ve finally found a bit of time to update Cedar to work with Xcode 4, and I hope to have it working smoothly some time in the next few days. However, I’ve already come across my first significant issue with the Xcode 4 changes: the location of build products.
Not unexpectedly, the problem has to do with command line builds using xcodebuild. By default, Xcode 4 now puts build products into a project-specific directory in the “Derived Data” folder; this looks something like /Users/pivotal/Library/Developer/Xcode/DerivedData/Cedar-somegianthashstring/Build/Products/Debug-iphonesimulator/libCedar-StaticLib.a. This isn’t a problem, generally, because the BUILD_DIR compiler variable contains the build directory, should you need to find this location during the build process.
Sadly, when you build from the command line using the xcodebuild command, the build products still go into the old Xcode 3 build location, but the BUILD_DIR compiler variable contains the new Xcode 4 build directory. This means any script that looks for the build results in the directory specified by BUILD_DIR won’t find anything.
The build target for Cedar’s static framework is simply a script that uses xcodebuild to build the static library for both the simulator and the device, and then uses lipo to make a fat binary from the results. Because it can’t find the build results at the location specified by BUILD_DIR, it now fails messily.
The easiest workaround I’ve found is to change where build products go using the Locations setting in the Xcode 4 preferences (details below). Unfortunately, this isn’t a project-specific setting, so you’ll have to change your preferences similarly to make it work. I haven’t found any problems with changing the location of the build products, but this does mean the Cedar static framework (as well as the related static frameworks for OCHamcrest and OCMock) won’t build with the default settings. Unsatisfying.
The longer term solution is for Apple to act on the bug I filed. We’ll see how that goes.
UPDATE: Thanks to Christian Niles for pointing out the SYMROOT environment variable in a pull request. Setting this for command line builds forces Xcode to use the specified location for all build products, and updates the BUILD_DIR compiler variable.
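With that in mind, a command line invocation along these lines (the target name, configuration, and output path are illustrative) keeps products in a predictable directory:

```shell
# Setting SYMROOT pins the build products directory and keeps the BUILD_DIR
# variable in sync with it, so scripts that read BUILD_DIR find the results.
xcodebuild -target Cedar-StaticLib -configuration Debug \
  -sdk iphonesimulator SYMROOT="$(pwd)/build"
```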
Steps for changing the build location in Xcode 4:
- Open Xcode preferences (Command-,)
- Select the “Locations” tab
- Change the “Build Location” drop down from “Place build products in derived data location” to “Place build products in locations specified by targets.”
Xtreme Labs is very excited to announce that we’ve partnered with RIM, Adobe and New Toronto Group to co-present a session by Brian Zubert, RIM Developer Relations, on the BlackBerry PlayBook.
This is a joint event with the Toronto BlackBerry Developer Community and Toronto Flex Group. It will be held on Thursday April 21st, 2011 at The YMCA of Greater Toronto, 20 Grosvenor St., Toronto, Ontario, Canada, in the Central Auditorium, Second Floor, starting at 6:30pm EST.