No recruiters or agencies please.
Location: EU – working remotely.
Due to continued business growth, this is a fantastic opportunity for a talented DevOps Engineer with hands-on Apache Kafka experience to join our team. You will be part of a fast-paced, innovative startup and a highly visionary team, looking after critical systems built on best-of-breed technology across a variety of infrastructures.
Our business involves managing distributed data platforms architected to store large volumes of data, remain highly available and run across distributed infrastructures. We provide expertise in technologies such as Apache Kafka, Apache Cassandra, Apache Spark and a variety of complementary technologies. Our engineering approach is to automate and instrument every aspect of deploying and managing these systems, delivering a 24×7, always-on service.
This role focuses on managing our clients' distributed data platforms: providing high-quality services and keeping the lights on. You will work with a variety of customers, from social media platforms to banking systems. The role requires on-call work on a rota basis to help manage customers' environments and support internal systems.
This role is remote, based in the EU, and involves working with a DevOps team distributed across Europe. You must be comfortable working remotely and communicating over instant messaging (e.g. Slack) and video calls.
Requirements:
- Apache Kafka – in-depth, expert operational knowledge. You need to have looked after Kafka in a production setting and know how to architect, deploy and manage it.
- Excited and passionate about learning and sharing new technologies.
- Knowledge and experience (2+ years) of running Linux infrastructure.
- Good understanding of networking technologies, including TCP/IP fundamentals, load balancing, DNS, DHCP and routing.
- Solid understanding of security and complementary technologies, e.g. VPN, LDAP, SSL.
- Hands-on experience implementing and configuring the following technologies (or similar):
  - Configuration management tools such as Puppet or Chef, preferably Ansible (we use Ansible)
  - Monitoring and alerting systems such as Nagios, Prometheus, Grafana, Datadog or similar
  - Strong scripting skills – Bash, Python, Ruby or Perl
- Exposure to cloud platforms such as AWS and GCP.
- HashiCorp stack – Consul, Terraform, Vault, etc.
- Containerisation and virtualisation technologies such as Kubernetes, Docker, KVM, VMware.