About ShipHero

Hello. We are ShipHero (https://shiphero.com). We have built a software platform entrusted by hundreds of ecommerce companies, large and small, to run their operations, and we continue to grow. About US$5 billion in ecommerce orders is shipped each year via ShipHero. Our customers sell on Shopify, Amazon, Etsy, eBay, WooCommerce, BigCommerce and many other platforms. We’re driven to help our customers grow their businesses by providing a platform that solves complex problems and is engineered to be reliable and fast. We are obsessed with building great technology that is beautiful, easy to use and loved by our customers. Our culture reflects our ethos and belief that by bringing passionate, talented and great people together, you can do great things.

Our team is fully remote, with most of our engineers currently spread across the Americas, though we have been building out teams in Europe as well. We communicate regularly using video chat and Slack, and put a strong emphasis on asynchronous work so people have large chunks of uninterrupted time to focus and do deep work.

Making sure you and the rest of the company are able to focus while at work is really important to us. You can read our internal guide on how we communicate on our website: https://shiphero.com/careers/c...


About the role

  • Improve and extend ShipHero’s existing data infrastructure (built around Amazon Aurora, DocumentDB, Redis and Redshift).
  • Improve the performance of reports, listings and searches.
  • Work with ETL, data pipelines and streaming solutions.
  • Review features and requirements, design and implement solutions together with our data engineers, data scientists, developers and designers.
  • Collaborate with our DevOps team to identify and remediate current and future data infrastructure performance or reliability issues.

About you

  • You understand that great things are accomplished when teams work together.
  • You’ve made a lot of mistakes, and most importantly, have learned from them.
  • You are very comfortable with Python.
  • You are experienced with SQL, data normalization and query optimization.
  • You have experience with Elasticsearch or other search engine technologies.
  • You have experience with stream processing (Kafka, Kinesis, etc.) and/or CDC workloads.
  • You have experience planning, provisioning, scaling and maintaining reliable data processing systems in AWS or GCP.
  • You can write and run ETL jobs using Airflow or another solution.
  • You are aware of the tradeoffs between different data storage and data processing solutions and can communicate them clearly to technical and non-technical colleagues.
  • You are always eager to learn more and love to try out new solutions on your own.
  • You can express to other stakeholders what’s important and what’s urgent, so work can be prioritized against competing demands.
  • Experience with other parts of the AWS data ecosystem is also appreciated.
  • You are competent in spoken and written English.

Perks

  • $2,500 so you can buy any equipment you need to be happy at your job
  • 20 days of paid vacation, plus New Year’s and Christmas
  • Conference days don’t count against your vacation days; we want you to stay up to date
  • We will pay for courses and conferences: if you learn, we all learn
  • Salary range is $72,000 - $120,000 / year, depending on experience and location