The Cloud Foundry engineering team is looking for a few great engineers to join us in building core Hadoop and Big Data services on top of the open-source platform that is transforming how the world deploys and scales software.
You love working with big data and the Hadoop ecosystem. You love the idea of delivering Hadoop as a service that lets developers, operations teams and data scientists get access to complex distributed software in the time it takes them to get coffee.
You love shipping features, but you know that well-factored code is what lets you keep shipping features in the long run. Maybe you get antsy writing code without tests, or perhaps you’re a Hadoop systems administrator, operations staff member, or QA engineer interested in leveling up your coding skills.
Either way, you judge your success by the success of your team, and you are interested in learning about the intersection of Platform as a Service, Cloud Computing, and distributed data systems.
Cloud Foundry is co-developed with Pivotal Labs, a leader in the field of agile software development. We are opinionated about how software should be built. We pair program, all day, every day, because we know it delivers remarkable results. We work in small teams and rotate between them frequently. We believe in working at a sustainable pace – you’ll typically code hard for 8 hours each day, but then you’re off work to relax, recharge, and refocus.
Cloud Foundry engineering is located in San Francisco, in our SOMA office near 4th and Howard St. To make sure you start your day energized, we provide a catered breakfast every weekday morning, and unlimited snacks and drinks are, of course, available all day. Our collaborative, open-plan office space is filled with talented, like-minded engineers who enjoy taking advantage of our weekly Tech Talks, playing ping pong, and hanging out with their co-workers.
Desired Skills/Experience:
First-hand experience developing software that interacts with the Hadoop ecosystem. You understand how to develop software that reads from and writes to HDFS, runs MapReduce jobs, or queries Hive and HBase (see the short sketch after this list).
First-hand experience administering deployments of Hadoop ecosystem software. You understand how to install, configure, start, monitor, and troubleshoot a cluster that others within your company depend on every day.
Experience with and/or interest in Test-Driven Development (TDD) and agile methodologies
Ability to dive into a polyglot codebase and contribute while learning
BA/BS in Computer Science or a related field, or equivalent experience
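For a sense of the kind of code this work involves, here is a minimal sketch of reading and writing a file on HDFS using the standard Hadoop FileSystem API. The namenode address and file path below are placeholders for illustration, not details of any particular deployment.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import java.nio.charset.StandardCharsets;

public class HdfsRoundTrip {
    public static void main(String[] args) throws Exception {
        // Placeholder cluster address; a real job would normally pick this up
        // from the cluster's configuration files instead of hard-coding it.
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:8020");

        try (FileSystem fs = FileSystem.get(conf)) {
            Path path = new Path("/tmp/hello.txt");

            // Write a small file to HDFS (overwriting if it already exists).
            try (FSDataOutputStream out = fs.create(path, true)) {
                out.write("hello from HDFS".getBytes(StandardCharsets.UTF_8));
            }

            // Read the file back and print its contents.
            try (FSDataInputStream in = fs.open(path)) {
                byte[] buf = new byte[(int) fs.getFileStatus(path).getLen()];
                in.readFully(buf);
                System.out.println(new String(buf, StandardCharsets.UTF_8));
            }
        }
    }
}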
We’re seeking more amazing developers to add to our team. Apply by sending us your resume and a cover letter.
We are an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by law.