For your entertainment. A vignette from my pre-pivotal days.
I wake up, bleary-eyed, and roll out of bed. Squinting I look at the time: 11:27. Perfect. I slide into a pair of jeans and make a pot of coffee. I drink it black; it pairs well with a nutritious breakfast of two heavily-toasted frosted strawberry pop tarts.
I clamber uphill to my office on the 6th floor of Campbell Hall eager to start my day. There will be much code to write.
Yesterday was rough. My advisor and I had to face the fact, yet again, that the results of our analysis were still off. She was correct of course: we still weren’t encabulating the diagraphical errors with sufficient bisectional amplitude.
Sigh. Another one of those “will I ever graduate” moments? Time is running out, I remind myself, I have to finish in two years if I want Arnold’s signature on my PhD.
But I have an idea. And I race uphill eager to write the triphase meta-Gaussian process code that might… just might!… encabulate my analysis errors with enough amplitude.
My office-mate hasn’t made it in yet so I’m all alone in my office-cave. Perfect. Headphones on, I nestle into my mouldering chair and lean back into my near-horizontal ergonomic position of choice (less bending of the wrists, you see).
The iMac flickers to life as I bring up a terminal:
$> cd ~/code2
$> mate analysis.py
I skim through the familiar file deciding where to put the new encabulation method. I settle on line 3742, between the definitions of mq7_take2(data) and EE_Medium(data3, data).
But what to name this function? With barely a second thought:
def EE_3P_MG(data, metadata):
A quick copy/paste of EE_4Q_ML and it’s off to the races. I slip into a blessed state of flow, sliding globs of terse code around as I fly up and down analysis.py. I’m at home here. The variables are old friends (p2 and xj3 are particularly beloved – we’ve been through a lot together) and it’s always a fun challenge to remember how all the helper functions work.
I reach a commit point but keep going.
Now I just need to use the new EE_3P_MG code in the analysis. I skip past The Scary Bit, pretty sure that it won’t depend on the new code and run a quick find-and-replace across the rest of the file.
Almost done! Time to test the thing. Pointing it at the small dataset I step away for an hour and a half to get some coffee and futz about on my phone.
I return to discover that the code had bombed out 15 seconds into the analysis. “Oh right” I say to myself, “the small data set has insufficient permambulatory significance for a triphase encabulation”. Duh. Thankful for this short feedback loop, I decide to run against the full data set.
But first – shuddering as I remember The Incident from last year – I put the code into my SCM of choice with a helpful commit message:
From: email@example.com
To: firstname.lastname@example.org
Subject: EE_3P_MG moar better analysis.py

Onsi

<Attachment: analysis.py 689KB>
<Attachment: runner.py 3KB>
Just to be sure (again – The Incident) I check my quota on the mail server:
Used: 987 MB of 1 GB
Should be good for a while now that I’ve unsubscribed from all those cat video feeds.
With that out of the way:
$> mkdir run_1837
Wait. No, crap.
$> rmdir run_1837
$> mkdir run_1836
And now, victoriously:
$> ./runner.py -ds=full -tk=427 -out=./run_1836
Analyzing...
============
Using dataset: full
Will output to: /Users/onsi/code2/run_1836/out.pickle
Reticulating splines...
Retailoring dark matter halo trees:
21/182739 - 2s
46/182739 - 4s
…looks like it’ll take 5 hours or so. As this process usually burns my computer into the ground I decide it’s time for more coffee, some lunch, and a chance to fall asleep reading some papers.
I return, 5 hours later:
182720/182739 - 17912s
182742/182739 - 17914s
Trees retailored
Enhancing merger rates... done
Encabulating errors...
Traceback (most recent call last):
  File "/Users/onsi/code2/analysis.py", line 3920, in EE_3P_MG
    xj3[k] = p2*a[i:j-17]*a2[i+1:j-15]
IndexError: list assignment index out of range
Hmmph. It’s going to be another long night.
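For the curious: that failure mode is easy to reproduce. A minimal, hypothetical illustration (the variable name only echoes the story; this is not the actual analysis code):

```python
# Hypothetical illustration of the IndexError in the traceback above.
# Python lists raise "list assignment index out of range" when you
# assign to an index that doesn't exist yet.
xj3 = [0.0] * 5  # valid indices are 0..4

try:
    xj3[5] = 1.0  # one past the end
except IndexError as err:
    print(err)  # prints: list assignment index out of range

# Appending (or pre-sizing the list correctly) avoids the error:
xj3.append(1.0)
```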
One of the benefits of pairing that we emphasize at Pivotal is the learning environment that it provides. We find that one of the best ways to teach a new technology or codebase is to pair up novices with experts. It’s important to understand, however, that effective learning doesn’t happen automatically – even when pairing. It takes intentional, thoughtful effort from both the teacher and the student (or master/apprentice if you prefer) to transform a mediocre learning experience into an effective one.
Two things can help here: (1) understanding what effective learning actually does, and – with that in mind – (2) uncovering and engaging the student’s preferred learning style.
Let’s start with the first.
What is effective learning?
There are two distinct, but related, things that happen when we comprehend a new domain of knowledge: we build an intuition for the domain, and we construct a comprehensive mental model of the domain. Both are crucial.
Intuition allows us to move seemingly effortlessly as we interact with the domain of knowledge. We don’t have to pause and think for very long when solving problems but, instead, have an intuitive sense of where we are and where we are going. We may not be able to articulate exactly why, but things just feel right or wrong as we work in the domain; there is a flow of rhythm and harmony and we know, deep in our bones, what consonance and dissonance sound like.
Mental models give us the words and mental frameworks to reason about the domain of knowledge rationally. With an accurate mental model we have a grasp of the formal rules governing the domain and can articulate precisely why certain things work and why other things don’t. We can reason about new, heretofore unseen areas of the domain and can analyze and critique existing artifacts within the domain.
To make effective progress in learning a domain is to hone an ever deeper intuition and an ever finer and more nuanced mental model. Both are necessary: intuition allows us to move quickly and effortlessly while mental models allow us to communicate effectively and reason through new classes of problems.
Let’s take a concrete example from the domain of introductory (as in, middle school!) algebra. Solve for x:
x + 3 = 7
A student who has spent some time working through similar exercises will likely have built up an intuition around problems of this class. She might immediately think “I should move the 3 to the other side” and does so, remembering that she needs to give the 3 a minus sign:
x = 7 – 3
“And that’s just 4!”
x = 4
With enough practice, the student may not spend any time at all thinking through these steps but will instinctively know to move the 3 over and slap on a minus sign.
Now, consider what this student might do given a slight variation on this problem:
3 = 7 – x
If her knowledge of this domain is limited to a handful of memorized rules and a narrowly scoped intuition she will likely struggle with questions like: “Can I move x around?” and “How do I get rid of the minus in front of x?”
What’s lacking here is the correct mental model – a grasping of the deeper truth about the structure of the domain.
In this case, what the student needs to understand is the notion of equality. She needs to understand that in solving these problems she isn’t following arbitrary rules about “moving” things around and slapping on minus signs. Rather, she is modifying both sides of the equation in identical ways so as to maintain equality. In so doing she is able to isolate x, at which point the entity on the other side of the = sign represents the value of x. To take our first algebra problem – the student isn’t moving the 3 from one side to the other. She is subtracting 3 from both sides, thereby maintaining equality, in order to isolate x.
Having this accurate mental model opens the doors to tackling new aspects of algebra (like the second algebra problem outlined above). This, combined with practice (slow and tentative at first!), will allow the student to expand her intuition to cover more varied classes of algebra problems.
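The same equality-preserving moves crack the second problem. A worked sketch of 3 = 7 − x, where each step applies one operation to both sides:

```latex
\begin{aligned}
3 &= 7 - x \\
3 + x &= 7 - x + x &&\text{add } x \text{ to both sides} \\
3 + x &= 7 \\
3 + x - 3 &= 7 - 3 &&\text{subtract } 3 \text{ from both sides} \\
x &= 4
\end{aligned}
```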
Two learning styles
With the twin goals of intuition and mental models in mind, I’d like to present two different learning styles that I’ve seen come up time and again at Pivotal. Of course, this is a huge simplification but I believe that even this naive classification can prove fruitful – so, on with the broad brushstrokes!
Broadly speaking there are two kinds of learners: top-down learners and bottom-up learners.
Top-down learners (I am one of these) are the kinds of people most comfortable with the academic setting: lectures, readings, and homework. They are students-at-heart and their preferred approach to learning a domain involves first receiving a formal rendition of a mental model (from a lecturer or textbook), followed by practice (homework) to internalize that mental model and – ultimately – build an intuition.
Top-down learners might approach a new programming language or API by reading a book, or studying the spec/documentation, before sitting down to work out some examples or hack on a side project. Top-down learners can be quite uncomfortable groping around blindly with an existing codebase, preferring, instead, to sit down and read – taking it all in – to arrive at the beginnings of a coherent mental model first.
By contrast, bottom-up learners are the apprentices of the world. They learn their craft best by sifting through examples, pattern matching and tinkering and tweaking along the way to understand how the system responds to their changes. They might tag along and watch a master craftsman then practice, practice, practice. Slowly but surely they build an intuition for how the domain works by learning from failures and successes.
Only once they’ve built this intuition will bottom-up learners turn to the question of building mental models. Oftentimes this happens when someone asks them to articulate exactly how and why the system works the way it does.
Of course, both learning styles are valid. And both have complementary strengths and weaknesses. The top-down learner may furnish their minds with a precise mental model, but without much prolonged practice they’ll never manage to build that crucial intuition that enables them to work in the domain effortlessly. The bottom-up learner may have a strong working intuition, but unless they perform the reflective work necessary to articulate a mental model – they will struggle when faced with new situations or when asked to explain something.
Unsurprisingly, the two learning styles begin at opposite poles and meet halfway!
Pairing and Learning
Knowing the twin goals of effective learning, and understanding at least these two broad categories of learners, can be very helpful when pairing in a teacher/student (or master/apprentice) setting.
What kind of learner is your pair? Oftentimes, if I’m working with a top-down student, I’ll invite us to step away from the keyboard and head to the whiteboard where we can diagram out the mental model of the system we’re working on. If a top-down learner asks me a question, I try to express the underlying rules and quirks of the system at hand and then ask them to answer their own question.
If I’m working with a bottom-up apprentice, I find the best thing to do is to relinquish control and give them an opportunity to explore and fail/succeed. This takes patience (something I often lack!) but can be far more beneficial than a mini-lecture. Oftentimes, the apprentice will ask me to drive. As I do, I attempt to articulate why my intuition is taking me from one place to the next and leading me to the solution. When a bottom-up learner asks me a question, I try to point them to an example in the code that provides the answer so that they can see the answer in context.
In either case, for effective learning to occur, I try to encourage both the construction of a mental model and the building up of an intuition:
To help students and apprentices build their own mental models there’s little that works better than the Socratic method. Yes, I might try to draw out diagrams and present mini-lectures on how things work but, ultimately, asking a student a question that forces them to articulate why or how something works is one of the best ways to encourage them to build a mental model. This also enables the teacher to poke holes in their mental model and set their understanding straight.
More: the best teachers can infer the errors in a student’s mental model simply by observing their problem solving process. Here, again, asking a question is better than providing the answer. Instead of “you should type this instead” it might be better to ask: “What are the reasons that didn’t work? How might we find a better way to do this? How will this line of code here interact with that object over there?”
The best way to build intuition, of course, is to encourage practice. The master must let the apprentice struggle and fail from time to time. This is where ping-pong test-driven development can be so beneficial: the master may write the test and allow the apprentice to struggle through the code (and vice-versa). This gives the apprentice a well-defined problem to work through. Sometimes when I do this I struggle to let go of the keyboard and let my pair take their time to work through the “pong” of my “ping”. The best solution for me? Stepping away for a few minutes to get some coffee!
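To make the ping-pong rhythm concrete, here is a hypothetical round sketched in Python (the slugify function and its test are invented for illustration). The master writes the “ping” – a small failing test – then hands over the keyboard:

```python
# Hypothetical ping-pong round (function and test invented for
# illustration). The "ping": the master writes a small failing test
# that pins down the next piece of behavior.
def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

# The "pong": the apprentice writes just enough code to make it pass.
def slugify(title):
    return title.lower().replace(" ", "-")

# In a real project a runner like pytest would discover and run this:
test_slugify_lowercases_and_hyphenates()
```

The apprentice might then write the next failing test, handing the keyboard back – and the rally continues.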
All of this takes intentionality, patience, and practice. Even when pairing: learning well is hard and teaching well is hard. Both teacher and student, master and apprentice, must strive towards the twin goals of building deep intuition and articulating robust mental models.
It comes up on almost every consulting project I’ve worked on as a pivot, but this particular conversation was memorable. I was surreptitiously ushered into a room by the product manager (PM). Once inside, he shut the door:
So, I know you guys don’t do crunch time… but we really need to ship by Monday.
I empathized with him, knowing that the pressure to hit the looming (and somewhat arbitrary) deadline was coming from upper management; and that a terrible gnawing fear was settling in, threatening to drive all his decision-making.
But we don’t do crunch time.
At Labs our culture revolves around a handful of axioms. Perhaps chief among these is sustainability. It is because we value sustainability – seeing the developer and PM as real-life human beings with real needs and limitations – that we place a high emphasis on pairing, TDD and data-driven product management.
Let me explain.
Because we pair 100% of the time, 40 hours a week, we are producing at close to peak efficiency during working hours. There isn’t much time spent on distractions and there isn’t much time wasted going down yak-hair-strewn rabbit holes. We get stuck less often and we keep each other honest, striving to balance rapid forward progress with clean, maintainable code.
Outside of our lunch hour and a daily 15-minute game of ping pong, we’re writing code and (ideally) delivering business value.
Why do this? Because we believe that putting in a solid day’s work – day in and day out – is a deeply satisfying and humanizing experience. We’ve also found that 8 hour pair-days, 5 days a week – week in and week out – is close to the limit of what’s sustainable. Tacking on an extra hour or four at the end of a long day just leads to diminishing returns; doing so regularly risks dehumanizing burnout. Squeezing much more out of developers just isn’t sustainable.
Because we test drive all our code, we are our future-selves’ best friends. Tests help us write cleaner, more maintainable (and therefore sustainable), code. Tests make future refactors easy and stress-free. Tests keep bugs from creeping in and, when written well, can obviate the need for much traditional QA.
Writing and maintaining a test suite might seem like a wasteful time-consuming effort in the short run. But it pays massive dividends in the long run – both in terms of development speed and developer sanity.
Because we pair and test drive all the time, a small team of pivots can sustainably maintain a relatively consistent level of productivity. We measure this productivity not in lines of code or number of tests or hours worked. Rather, we measure it in terms of delivered complexity.
We do this using Pivotal Tracker. Tracker allows us to annotate stories (units of feature work) with points. These points are a proxy for complexity (you could also say difficulty) and are determined not by a rigorous formula but, rather, by the gut-sense of the team working on the project. As stories are completed and delivered by developers, and subsequently accepted by the PM, Tracker is able to compute a running average of the number of points completed per week. We call this running average the project’s velocity.
Velocity, then, is a measure of the rate at which the team, in its current configuration, can deliver code complexity. The point is not to compare one team’s velocity to a different team’s velocity — different teams may converge on different notions of complexity. Rather, the primary goal of all this data is to do one simple thing sustainably: to give the PM predictive power.
With this predictive power Tracker allows the PM to see, given a project’s velocity, which features will land within a given time frame. Understanding this is crucial to our process.
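As a back-of-the-envelope sketch (not Tracker’s actual algorithm – the window size and function names are my own), velocity as a running average, and the prediction it enables, might look like this:

```python
# A minimal sketch (not Pivotal Tracker's actual algorithm) of velocity
# as a running average of accepted story points per week, and of the
# predictive power it gives a PM.
def velocity(points_per_week, window=3):
    """Average points accepted per week over the last `window` weeks."""
    recent = points_per_week[-window:]
    return sum(recent) / len(recent)

def weeks_to_deliver(backlog_points, points_per_week, window=3):
    """Rough estimate of weeks until `backlog_points` of work lands."""
    return backlog_points / velocity(points_per_week, window)

accepted = [8, 11, 9, 10, 11]          # points accepted each past week
print(velocity(accepted))              # -> 10.0
print(weeks_to_deliver(40, accepted))  # -> 4.0
```

Given 40 points of prioritized backlog and a velocity of 10, the PM can expect those stories to land in roughly four weeks – and can reorder or simplify stories if that isn’t soon enough.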
Putting it all together
The PM who asked my team to enter “crunch time” was, effectively, asking my team to increase the amount of complexity we could deliver in a given period of time. Complexity delivered is proportional to velocity multiplied by time, so he saw a few options for doing this:
First, we could try to increase velocity by “working harder”. But, because we pair all the time we are already working at close-to-peak efficiency. We might be able to get 5% more by “rallying the troops”, so to speak, but generating a dramatic increase in velocity by fiat simply isn’t possible.
Alternatively, we could try to increase velocity by accumulating technical debt (“stop writing those silly tests!”, “just throw the feature together!”). But because sustainability is a core principle, we are loath to do this. Corners cut today will have to be pasted back in tomorrow. Doing so manifests as an artificially high velocity today at the cost of a depressed velocity tomorrow. This is terrible as it diminishes the predictive power that our velocity provides our PM.
Since complexity delivered is the product of velocity and time, we could try another approach: we could increase the amount of time we spend working. Again, we are loath to do this precisely because sustainability is a core principle. Pulling extra hours leads to an unsustainable culture whose zenith is burnout.
Consistently, Pivotal’s answer to this problem is to clarify the PM’s role. The PM has many knobs at their disposal but velocity is not one of them.
Instead, given a velocity, the PM is tasked with reordering or simplifying stories (features) to ensure that the stories they want arrive within a given timeframe.
The beauty of this approach is that it brings data to the table. Instead of flying blind and generating requirements that must be accomplished in fancifully short periods of time, the PM is empowered – by data – to make informed decisions about what is feasible within a given timeframe. Even better: the PM can take this data back to his or her superiors and help them reason rationally about the progress of the project.
In short: velocity allows PMs and product owners to make decisions grounded in reality. No surprises. No crunch time. Just a sustainable working environment for everyone involved.
Oh the hubris!
Confession is good for the soul.
The vision I’ve articulated here is, at best, a Platonic (i.e. idealized) description of how the Pivotal process is meant to work. Reality is, of course, messier. Pairs go down rabbit-holes less frequently than solo developers – but they still go down rabbit holes. Test suites can vastly improve code maintainability – but writing good test suites takes skill and a poorly written suite becomes a maintainability problem of its own. Velocity is a reasonable predictor of future productivity – but it is most decidedly not a precise science. And while all this sounds so reasonable on paper – communication is one of the hardest things we humans try to do and, invariably, the supposedly clarifying properties of data must first make it through layer upon layer of misunderstanding.
Nevertheless, we aim for this ideal world – striving to improve our process and ourselves as we do so. Because the best way to handle crunch time… is to avoid crunch time.
Julie Ann Horvath’s public departure from GitHub has, once again, brought the question of gender equality in the tech world – and the San Francisco tech world, in particular – to the forefront of the blogosphere’s fickle zeitgeist.
Sadly, important and difficult conversations rarely take hold amidst the ebb and flow of the trending and the interesting and – all-too-often – the blogosphere moves on, having merely scratched the surface of a topic with deep systemic roots.
This is my entry, on this topic, into that public record of dubious value. I strive to articulate this post with humility – fully aware (at least, as aware as I can be) of my place in the conversation. You see, I have great privilege: as a male software engineer I am part of the majority and have not experienced the abuse and trial that women working in this field have endured. What could I possibly add to the conversation? What gives me the right to chime in? And why should anyone care about what I might have to say?
But chime in, I will. And with two simple points that, I am hopeful, could help deepen this important conversation by giving other male engineers – members of the majority – a framework with which to approach this issue.
Point #1: Don’t assume. Ask. Then, listen.
A few years ago the topic of gender equality came up on a Pivotal Labs internal mailing list. This does not happen very often and I was curious to see how things would play out.
One well-meaning (male) engineer sent out a post lamenting what he perceived to be a male-centered engineering culture. One of his complaints? “We have a beer fridge.” Amused, I brought my wife (a physicist, not an engineer) over and pointed the offending sentence out. Her reaction? “Wait. What? I love beer!” (Brown ales, to be precise.)
What happened here? Simple. That engineer thought, as we fallible humans so often do, that he understood. In particular, he thought he understood the contours of the systemic problems facing women in our field. Ironically, his outline of the problem only serves to illustrate a deeper problem: too often, we (male engineers) reason about this issue with naive assumptions that betray facile stereotypes (e.g. women don’t like beer, men do).
How to fix this?
Don’t assume you understand the problems and concerns that women face in this field.
Instead of making assumptions, welcome women into the conversation and ask them to describe the problems and concerns themselves!
And when you ask, drop everything and listen.
Point #2: Do not minimize or invalidate feelings and experiences.
But what does good and effective listening look like? There’s a lot to say here but I want to home in on one specific issue that I’ve seen come up time and time again. Naturally, this issue is quite general and extends beyond the question of gender discrimination in the tech industry.
A common way that we fallible humans respond to feedback – particularly negative feedback – is to minimize or invalidate the feelings and experiences of the person providing the feedback. This is grossly dishonest and harmful, as it communicates to the person providing feedback that their experience of reality is incorrect and, therefore, invalid. This breaks trust, shuts down conversation, and turns a learning and growing opportunity into yet another example of abuse.
To make this less abstract, I’d like to take a specific example coming out of Horvath’s experience. In her communication with TechCrunch Horvath mentions an incident that broke the proverbial camel’s back. In her own words:
Two women, one of whom I work with and adore, and a friend of hers were hula hooping to some music. I didn’t have a problem with this. What I did have a problem with is the line of men sitting on one bench facing the hoopers and gawking at them. It looked like something out of a strip club. When I brought this up to male coworkers, they didn’t see a problem with it. But for me it felt unsafe and to be honest, really embarrassing. That was the moment I decided to finally leave GitHub.
By way of example, let’s imagine some possible responses of Horvath’s male colleagues to her feedback. My intent is not to single out or put words into the mouths of Horvath’s coworkers, but rather to use this concrete example as a springboard for unpacking the abstract point I’m trying to make.
Imagine a coworker responding with:
“We weren’t gawking at you. You shouldn’t feel the way you’re feeling.”
Such a response invalidates the feelings of the one providing feedback. It has things precisely backwards: when you are told that you have hurt someone you must understand that the person has been hurt. Whether or not you think they should feel hurt is irrelevant; you must understand that they were hurt and that they were hurt because of your actions. Yes, even though your actions may not have been directed at them.
Here’s another possible response:
“I did not intend to hurt you. Get over it.”
Whether it was your intent to hurt or not is irrelevant. Yes, of course, intentionally hurting someone is worse than unintentionally hurting someone. But you must understand that the experience of hurt is valid and real regardless of intent. To attempt to minimize or deflect the hurt because it was unintended is to evade responsibility and is unfair to the one who has been hurt.
And one more possible response:
“It’s not a big deal. Why don’t you just forget about it?”
Such a response minimizes the experience of the one providing feedback. It effectively says: your reading of reality is invalid – you’re making too much of a small thing; chill out. This is deeply hurtful! Giving negative feedback is hard – overcoming that challenge and speaking up takes courage. Clearly this is a big deal that should not be swept under the rug.
Here’s a much better response. One that takes the feedback seriously and opens up the conversation for future growth and healing:
“I understand that you were hurt by my actions and I now see that my actions were inappropriate. I apologize and will change my behavior. Thank you for having the courage to share this feedback; is there any other feedback you can offer me?”
Gender discrimination in the tech industry is a complex issue that must be approached with humility and honesty; with a heart bent on understanding, empathizing, and – ultimately – transforming behavior and culture. These two points are just small stepping stones towards a better framing of the conversation: Don’t assume. Ask. Then, listen. And when you listen: don’t minimize or invalidate feelings and experiences. Instead, listen well.