Canonical is building a comprehensive automation suite to provide multi-cloud and on-premise data solutions for the enterprise. The data platform team is a collaborative team that develops managed solutions for a full range of data stores and data technologies, spanning big data, NoSQL, cache-layer capabilities, and analytics, all the way to structured SQL engines (similar to the Amazon RDS approach).
We face the interesting challenges of fault-tolerant, mission-critical distributed systems, and we intend to deliver the world's best automation for managed data platforms.
We are looking for candidates from junior to senior level with interests, experience, and willingness to learn about Big Data technologies, such as distributed event stores (Kafka) and parallel computing frameworks (Spark). Engineers who thrive at Canonical are mindful of open-source community dynamics and equally aware of the needs of large, innovative organizations.
Location: This is a globally remote role.
Responsibilities
The data platform team is responsible for the automation of data platform operations, with the mission of managing and integrating Big Data platforms at scale. This includes ensuring fault-tolerant replication, TLS, installation, backups, and much more; the team also provides domain-specific expertise on these data systems to other teams within Canonical.
This role focuses on creating and automating the infrastructure features of data platforms, not on analyzing or processing the data in them.
Collaborate proactively with a distributed team
Write high-quality, idiomatic Python code to create new features
Debug issues and interact with upstream communities publicly
Work with helpful and talented engineers, including experts in many fields
Discuss ideas and collaborate on finding good solutions
Work from home with global travel for 2 to 4 weeks per year for internal and external events
Requirements
Have a Bachelor’s Degree or equivalent in Computer Science, STEM, or a similar degree
Proven hands-on experience in software development using Python
Proven hands-on experience in distributed systems, such as Kafka and Spark
Willingness to travel up to 4 times a year for internal events
Additional skills
You might also bring a subset of experience from the following, which can help the Data Platform team achieve its challenging goals and will help determine the level at which we consider you:
Experience operating and managing other data platform technologies, such as SQL (MySQL, PostgreSQL, Oracle, etc.) and/or NoSQL (MongoDB, Redis, Elasticsearch, etc.), with DBA-level expertise
Experience with Linux systems administration, package management, and infrastructure operations
Experience with the public cloud or a private cloud solution like OpenStack
Experience with operating Kubernetes clusters and a belief that it can be used for serious persistent data services
What we offer you
Your base pay will depend on various factors, including your geographical location, level of experience, knowledge, and skills.
In addition to the benefits below, certain roles are also eligible for additional benefits and rewards, including annual bonuses and sales incentives based on revenue or utilization. Our compensation philosophy is to ensure equity across our global workforce.
In addition to a competitive base pay, we provide all team members with additional benefits that reflect our values and ideals. Please note that additional benefits may apply depending on the work location; for more information, please ask your talent partner.
Fully remote working environment (we've been working remotely since 2004!)
Personal learning and development budget of USD 2,000 per annum
Annual compensation review
Recognition rewards
Annual holiday leave
Parental leave
Employee Assistance Program
Opportunity to travel to new locations to meet colleagues twice a year
Priority Pass for travel and travel upgrades for long-haul company events