Data Engineer

  • Permanent
  • London, UK

Who are Metapack?

We are a tech company that connects many of the world's biggest ecommerce players with over 470 carriers around the world to make delivery easy. Our multi-tenant SaaS platform lets consumers choose their delivery preference and track each parcel's progress, while giving retailers intelligent decisions about how to send the parcel, all underpinned by lots of data. We work with well-known global retailers and major brands such as ASOS, Adidas, Burberry, John Lewis, Boohoo, eBay and Zalando. In fact, we work with so many retailers and carriers that it's highly likely you've interacted with us at some point when ordering goods online!

In August 2018, we were acquired by Stamps.com, the 2nd fastest-growing company on Fortune's 100 Fastest-Growing Companies list. We have super ambitious and exciting plans, all centred around our tech. Metapack will play a role in shipping around 600 million parcels in 2018, and across the wider Stamps.com family that number rises to 2.5 billion. Metapack has been growing at 40% year on year for the last five years and continues to grow at a rapid rate.

Our values: Innovation, Integrity, Collaboration and Passion

The way we work really is at the heart of Metapack, and our four core values together give a sense of our culture.

With Innovation and Integrity at our core, we have a flat and open culture where data and evidence, backed by honest and frank discussion, beat subjective opinion and hierarchy. We Collaborate with energy and Passion on meeting the needs of our fantastic customers and partners.

Why would I want to be a Data Engineer at Metapack?

Data is key to Metapack's strategy. We work at scale and pace, with the latest architecture patterns and tech. We process thousands of events per second, and our massive dataset keeps growing at a staggering pace. We keep improving our data platform and data engineering stack to accommodate that growth, enable novel solutions and provide the best service to our customers.

What would I be doing?

  • Contributing to the design, build and operational management of our data lake and analytics solution on top of proven AWS data technologies like S3, Athena, Lambda, Kinesis and Glue
  • Using state-of-the-art technologies like Airflow and Spark to process data and get our dataset just right (see the sketch after this list)
  • Developing frameworks and solutions that enable us to acquire, process, monitor and extract value from our massive dataset
  • Supporting the Data Analysts and Data Scientists with automation, tooling, data pipelines and data engineering expertise
  • Delivering highly reliable software and data pipelines using software engineering best practices like automation, version control, continuous integration/continuous delivery, testing and security
  • Defining, implementing and enforcing automated data security and data governance best practices within the solutions we design
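
To give a flavour of the day-to-day, below is a minimal, hypothetical Airflow DAG sketch (assuming Airflow 2.x). The DAG name, task names and S3 path are invented for illustration; this is not a description of Metapack's actual pipelines.

```python
# Minimal, hypothetical Airflow DAG sketch (Airflow 2.x assumed).
# The dag_id, task names and S3 path below are illustrative only.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_events(**context):
    # Placeholder: a real task might land raw parcel events in S3
    # and return a path or manifest for downstream tasks via XCom.
    return "s3://example-bucket/raw/parcel-events/"


def transform_events(ti, **context):
    # Pull the upstream result from XCom and hand it to a transform
    # step (in practice, perhaps a Spark or Glue job).
    raw_path = ti.xcom_pull(task_ids="extract_events")
    print(f"Transforming events from {raw_path}")


with DAG(
    dag_id="parcel_events_daily",
    start_date=datetime(2018, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_events", python_callable=extract_events)
    transform = PythonOperator(task_id="transform_events", python_callable=transform_events)

    extract >> transform  # extract must finish before transform runs
```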

Who are you?

  • A software engineering background
  • Experience developing and supporting robust, automated and reliable data pipelines in Python and SQL (a small illustrative sketch follows this list)
  • Experience with data processing frameworks like Pandas or Spark
  • Experience with streaming data processing
  • AWS, Azure or Google Cloud experience
  • Experience in a continuous integration/delivery environment and a passion for automation
  • Knowledge of data orchestration solutions like Airflow, Oozie, Luigi or Talend
  • Knowledge of both relational and non-relational database design and internals
  • Knowledge of how to design distributed systems and the trade-offs involved
  • Experience working with software engineering best practices for development, including source control systems, automated deployment pipelines like Jenkins and DevOps tools like Terraform
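
As a small, invented illustration of the "robust, automated and reliable data pipelines" point above, here is a Pandas transformation with an accompanying test; the parcel-event schema and the latest-status rule are assumptions made up for this sketch, not Metapack code.

```python
# Hypothetical, self-contained example: reduce a log of carrier tracking
# events to the most recent status per parcel, with a simple test.
import pandas as pd


def latest_status_per_parcel(events: pd.DataFrame) -> pd.DataFrame:
    """Return one row per parcel_id, keeping only the latest event."""
    return (
        events.sort_values("event_time")
        .groupby("parcel_id", as_index=False)
        .last()
    )


def test_latest_status_per_parcel():
    events = pd.DataFrame(
        {
            "parcel_id": ["A", "A", "B"],
            "status": ["dispatched", "delivered", "dispatched"],
            "event_time": pd.to_datetime(
                ["2018-08-01 09:00", "2018-08-02 14:30", "2018-08-01 10:00"]
            ),
        }
    )
    result = latest_status_per_parcel(events)
    assert list(result["status"]) == ["delivered", "dispatched"]


if __name__ == "__main__":
    test_latest_status_per_parcel()
    print("ok")
```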

It would be great if you could also bring

  • Practical understanding of GDPR and other considerations regarding data security
  • Knowledge and direct experience of using business intelligence and analytics tools (Tableau, Looker, Power BI, etc.)
  • Production experience working with very large datasets
  • Experience with big data cloud technologies like EMR, Athena, Glue, BigQuery, Dataproc and Dataflow
  • Data Science/Machine Learning know-how
  • A desire to constantly challenge the norm
  • A willingness to attend conferences, webinars and meetups, and to share the learning

What are the perks?

  • 25 days holiday, 10% bonus (paid quarterly), pension, enhanced maternity and paternity leave, group life insurance scheme, private medical healthcare
  • Discounted gym membership, cycle to work scheme, interest free season ticket loan
  • Breakfast, dinner, fresh fruit, snacks and drinks
  • Hardware budget to let you get what you need
  • Dynamic, open culture with lots of social activities