You’re getting an itch: You have a vision of your system’s architecture in your head. How do you share that with your team?

How to quickly scratch that itch:
Do Whiteboard Architecture. At Pivotal Labs, our teams do Agile Inceptions to plan out a few months’ worth of work. Just after an inception is when you have a rough sense of what you’re going to be working on in the upcoming iterations. After planning, take some time to draw out the current architecture of your application and then discuss where the new work fits. Your requirements changed mid-iteration? Gather everyone around and draw a new diagram.
Often, somewhere in an architecture is a Big Ball of Mud which contains code you’ll be needing soon. Discuss how to improve your ball of mud. Maybe new packages are needed. Maybe some packages need to be split. Maybe your terminology needs updating. Your goal here is to create a plan, not to lock in decisions. Be agile. Value responding to change over following a plan.
Whiteboard diagrams are disposable, and that’s a good thing. Your drawings only represent your current understanding of the architecture, and the architecture will change. Use your eraser. Draw a new diagram later.
While drawing, it might be a good time to discuss some architecture principles. This will help you explain why you’re making architectural decisions, and it’s a great chance to spread your thought process and knowledge throughout the team. Here are some resources to share, for example:
At Pivotal Labs, our open workspace is surrounded by whiteboards. Every wall is either a whiteboard, a window, or a door. Every team can walk ten steps and be at a whiteboard. We like to use whiteboards for diagramming because they are a low-cost, low-effort way of collaborating on Software Architecture.
To be clear, I’m not advocating for Big Upfront Design. I’m just advocating that you should Have A Plan and then implement, iterate, and look for feedback from your design and team. Because Whiteboard Architecture is cheap and easy, you should be able to gather your team, draw a new diagram, and have everyone on the same page quickly.
As usual, significant architectural decisions should be delayed until the last responsible moment (like the choice of SQL vs NoSQL), but this should not stop you from knowing that data needs to be stored and retrieved.
What tricks do you and your team use to collaborate on Architecture? Please share!
Here at Pivotal Labs, the engineers practice Extreme Programming, which means our teams write automated tests, pair program, and continuously refactor.
Recently, some like-minded engineers at Pivotal NYC started getting together to discuss software design. Our topics so far have included agile architecture, emergent design, and other related subjects. This month, we’re focusing on a topic that comes up often among Pivots: the SOLID principles of object-oriented programming and design. We started the series of discussions with the Single Responsibility Principle (SRP).
While working as Pivots, we often introduce SOLID to our clients and new co-workers. This month’s software design lunch was an opportunity for us to deeply discuss the Single Responsibility Principle (SRP) to better understand how it can be used in our designs and how we can better teach it to others.
What is it?
A class or module should only have one reason to change.
Like any tool in our software design toolbox, we want to make sure we’re using the right tool for the job. Our group brought up these design smells that can let you know you might need to use SRP:
There are many other smells that might lead you toward the Single Responsibility Principle, but these are the smells that we discussed.
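As a rough illustration (all class names here are hypothetical), consider a class with two reasons to change, and what it looks like after applying SRP:

# Before: Report changes when the calculation rules change AND when the output format changes.
class Report
  def initialize(line_items)
    @line_items = line_items
  end

  def total
    @line_items.inject(0, :+)
  end

  def to_html
    "<p>Total: #{total}</p>"
  end
end

# After: each class has a single reason to change.
class ReportCalculator
  def initialize(line_items)
    @line_items = line_items
  end

  def total
    @line_items.inject(0, :+)
  end
end

class ReportFormatter
  def to_html(total)
    "<p>Total: #{total}</p>"
  end
end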
Why should we, as engineers, care to avoid these smells? Why should we be concerned about reasons to change?
When talking about software design, we should be driven by a motivation to reduce the cost of developing software. Techniques like Refactoring and Design Patterns are great, especially in the context of reducing the cost of creating and maintaining software. The SOLID principles also fit in this context.
The Single Responsibility Principle can help to create a codebase that is:
With all of these benefits, we can apply SRP to reduce the cost of creating and maintaining software.
With any software design pattern or principle, there are things we should take into consideration when introducing it to our teams:
Introducing SRP usually comes with a need to introduce other design patterns and principles to reduce these concerns. Pick up a Design Patterns book for some inspiration.
What are some ways to introduce SRP to your team?
Our teams find outside-in test-driven development to be a useful technique for creating SRP classes. Introducing mocks and stubs into your testing practice can alert you to multiple reasons for change. Programming by wishful thinking is another useful technique: ‘wish’ for a new class that has the exact responsibility you need, instead of putting the new behavior in an existing class.
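Here is a minimal RSpec-style sketch of what that can look like (the OrderProcessor and ReceiptPrinter names are hypothetical): the spec ‘wishes’ for a printing collaborator and drives it out with a mock, instead of piling printing behavior onto the processor.

require 'rspec/autorun'

# Hypothetical production code: OrderProcessor delegates printing to a collaborator.
class OrderProcessor
  def initialize(printer:)
    @printer = printer
  end

  def process(total:)
    # ...the order-processing logic itself would live here...
    @printer.print_receipt(total: total)
  end
end

RSpec.describe OrderProcessor do
  it "delegates receipt printing to a collaborator with that single responsibility" do
    printer = double("ReceiptPrinter")
    expect(printer).to receive(:print_receipt).with(total: 42)

    OrderProcessor.new(printer: printer).process(total: 42)
  end
end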
Another useful technique for spreading SRP throughout your team is to have a team discussion about it. Demo an example and show off its benefits. This can make it easier for your team members to apply the pattern to other parts of the codebase. You can discuss tradeoffs and when it might be a good time to introduce SRP to a particularly gnarly part of your codebase. This also helps to gauge how prepared your team is to try new strategies for designing software.
describe "some instance" do context "when the the instance is in this state" do it "behaves like this" do end end end
For the last six months, I’ve been working with Python. My team has a pre-existing codebase with established testing tools. We’re using Python’s standard unittest library as well as Nosetests, which sets up test suites easily and has many useful plugins. While these are great tools, I’ve found myself missing ‘describe’, ‘context’, and ‘it’. The unittest library is a standard xUnit-style testing tool.
After plenty of experimentation, I’ve settled on this pattern:
from unittest import TestCase

from mypackage import MyClass


class TestWhenAnInstanceIsInThisState(TestCase):
    def setUp(self):
        self._my_instance = MyClass(
            setup='some-state',
        )

    def test_it_behaves_like_this(self):
        # some assertions
        pass
I’ve explored other styles and frameworks, but this is where I’ve landed. I’d love to be told that there is a better way to do this style of BDD in Python. So, if you have opinions, please share. Are there other patterns from other languages that use xUnit style testing that help with these BDD style tests?
Note: I’ve been using pyhamcrest for assertions, since I picked up Hamcrest recently while working on a Java project. It is a great tool and I highly recommend it.
 PyHamcrest – https://pyhamcrest.readthedocs.org/en/latest/tutorial/
from Dave Goddard
Monit recently introduced a new type of service check, “check program”, which will run a script each cycle (or every specified number of cycles) and will report success or failure depending on the exit code. After we started using this, we noticed that the script was often marked as a zombie on the machine; at first we blamed the script, but eventually discovered that this is expected behaviour by monit, and that monit is planning to fix it RSN (real soon now).
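For reference, a minimal sketch of what such a check can look like in the monit configuration (the script name and path here are hypothetical):

check program my_health_check with path "/usr/local/bin/my_health_check.sh"
  if status != 0 then alert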
from Adam Milligan
If you have a polymorphic association, Rails will use the base class of the association’s parent (as defined by ActiveRecord) as the class name it stores for the associated parent.
class Foo < AR::Base
  belongs_to :wibble, polymorphic: true
end

class Bar < AR::Base
  has_many :foo, as: :wibble
end

class Baz < SomeSubclassOfActiveRecordBase
  has_many :foo, as: :wibble
end
The class of the wibble association when instantiated for Bar will be Bar.
The class of the wibble association when instantiated for Baz will be SomeSubclassOfActiveRecordBase, not Baz, unless SSOARB.abstract_class returns true.
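A minimal sketch of that workaround, using the classes from the example above:

class SomeSubclassOfActiveRecordBase < ActiveRecord::Base
  # With the intermediate class marked abstract, Baz becomes its own base class,
  # so foo.wibble_type is stored as "Baz" rather than "SomeSubclassOfActiveRecordBase".
  self.abstract_class = true
end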
Our team ran into an issue installing Ruby 1.9.3 on Lion today. When running…
$ rvm install ruby-1.9.3
… the installer fails with an error message including “checking whether the C compiler works… no” even though we had Xcode and gcc installed.
After some reading on Stack Overflow and GitHub, I found this solution…
$ rvm install 1.9.3-p0 --with-gcc=clang
… which points to an explanation on RVM’s issue tracker.
See our full command line history and error messages below:
$ rvm install ruby-1.9.3-p0
Installing Ruby from source to: /Users/foobar/.rvm/rubies/ruby-1.9.3-p0, this may take a while depending on your cpu(s)...
ruby-1.9.3-p0 - #fetching
ruby-1.9.3-p0 - #extracted to /Users/foobar/.rvm/src/ruby-1.9.3-p0 (already extracted)
Fetching yaml-0.1.4.tar.gz to /Users/foobar/.rvm/archives
Extracting yaml-0.1.4.tar.gz to /Users/foobar/.rvm/src
Configuring yaml in /Users/foobar/.rvm/src/yaml-0.1.4.
Compiling yaml in /Users/foobar/.rvm/src/yaml-0.1.4.
Installing yaml to /Users/foobar/.rvm/usr
ruby-1.9.3-p0 - #configuring
ERROR: Error running './configure --prefix=/Users/foobar/.rvm/rubies/ruby-1.9.3-p0 --enable-shared --disable-install-doc --with-libyaml-dir=/Users/foobar/.rvm/usr', please read /Users/foobar/.rvm/log/ruby-1.9.3-p0/configure.log
ERROR: There has been an error while running configure. Halting the installation.

$ cat /Users/foobar/.rvm/log/ruby-1.9.3-p0/configure.log
[2012-01-13 10:31:49] ./configure --prefix=/Users/foobar/.rvm/rubies/ruby-1.9.3-p0 --enable-shared --disable-install-doc --with-libyaml-dir=/Users/foobar/.rvm/usr
configure: WARNING: unrecognized options: --with-libyaml-dir
checking build system type... x86_64-apple-darwin11.2.0
checking host system type... x86_64-apple-darwin11.2.0
checking target system type... x86_64-apple-darwin11.2.0
checking whether the C compiler works... no
configure: error: in `/Users/foobar/.rvm/src/ruby-1.9.3-p0':
configure: error: C compiler cannot create executables
See `config.log' for more details
Recently, our team was releasing to a large set of users and needed to ensure that our application could meet the performance needs of the new users. Launch day was a month away. Months of steady Agile feature development needed to meet a healthy amount of performance engineering.
We started with a few goals in mind. We wanted:
We brainstormed for ideas on what would reliably lead us in the right direction. Requests per second (and seconds per request) were useful data points that we were able to get from Apache Benchmark (ab). Using ab, we could make a change, run the benchmark, and be sure that the change made a positive performance impact. One obvious way to increase requests per second is to use caching. Our team was hesitant to use caching for several reasons:
To track down specific bottlenecks, we used Ruby’s Benchmark library. We could perform an outside-in performance analysis of specific actions, benchmarking deeper and deeper, until specific methods or database calls stood out as obvious problems.
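A minimal, self-contained sketch of that outside-in approach (all method names here are hypothetical stand-ins for real application code):

require 'benchmark'

# Stand-ins for application code: a "database call", a calculation, and a page render.
def load_records
  (1..50_000).map { |i| { id: i } }
end

def build_report(records)
  records.map { |record| record[:id] * 2 }
end

def render_page
  build_report(load_records).size.to_s
end

Benchmark.bm(15) do |x|
  # Start at the outermost call, then time its pieces until one stands out.
  x.report('render_page')  { render_page }
  x.report('build_report') { build_report(load_records) }
  x.report('load_records') { load_records }
end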
A few things stood out as Big Wins for us:
In the end, we learned a few things about optimization.
These are the obvious, well-covered topics of performance enhancement. There’s a reason for that: they can take your application a long way.
Thanks to Evan Farrar for much of the wisdom that went into our performance optimization thought process.