
Data Ops Engineer

Monday, September 11, 2023

Contract

Remote Role, United States

Job

The Data Ops Engineer will play a critical role in leading the design and execution phases of our Database Infrastructure project.
This person will work closely with cross-functional teams and stakeholders to ensure the successful delivery of a new data infrastructure.
Join us and make a substantial difference in a dynamic, forward-thinking environment.
 

We are looking for a Data Ops Engineer...

  • What we mean by “Data Ops”: a person with DBA skills who is very strong on cloud-native (ideally Azure) tooling for data storage, access, scaling, and optimization.

  • Demonstrable experience horizontally scaling database architectures in a cloud environment.

  • A track record of solving hard optimization problems to improve the efficiency of distributed data processing environments (higher performance at lower cost).

  • Several years' experience as an Application Software Engineer, Full Stack Developer, or similar role, developing and maintaining complex software applications.

  • Resourcefulness, self-organization and self-management, and a strong problem-solving mindset.

  • Strong leadership skills with the ability to contribute to and mentor other engineers.

  • Flexibility to work with a team operating in different time zones.

  • Excellent communication skills, with the ability to present projects and features to
    a diverse stakeholder audience.

  • Experience working in an Agile environment, with knowledge of scrum
    ceremonies, procedures, and tools (such as Atlassian Confluence and JIRA).

  • Demonstrated success as both a backend and frontend engineer.
     

Requirements

  • At least 10 years of experience managing mission-critical production workloads on relational databases (SQL Server, MySQL, and PostgreSQL) and non-relational databases (MongoDB, Cassandra), plus other cloud-native databases, in a large, complex environment.

  • Database administration, including configuration, implementation, data modeling, maintenance, redundancy/HA/DR, security, governance, troubleshooting/performance tuning, upgrades, and database, data, and server migrations (Flyway and Liquibase).

  • Stored procedures

  • Search and analytics engine: Elasticsearch

  • Data integration: Kafka and Debezium.

  • Data storage: Hadoop, AWS S3, or Azure Blob Storage

  • Data warehouse: Redshift or Snowflake

  • Orchestration: Apache Airflow (see the DAG sketch after this list)

  • Azure Data Factory

  • Azure Stream Analytics & Event Hubs

  • At least 5 years' experience using Python in complex data engineering tasks.

  • Python frameworks: PySpark (a minimal sketch follows this list)

  • Python libraries such as pandas and SQLAlchemy (see the sketch after this list)

  • SQL migrations within a cloud (SQL databases to SQL managed instances) and
    between cloud providers (Azure to AWS)

  • Infrastructure as code (IaC) principles and experience with tools such as
    Terraform (preferably) or Ansible
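
For illustration, a minimal sketch of the kind of PySpark task we have in mind; the paths and column names here are hypothetical, not a reference to our systems:

    # Minimal PySpark sketch: aggregate raw order events into a daily
    # per-customer rollup. Paths and columns are illustrative only.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("daily-order-rollup").getOrCreate()

    # Read raw events (in practice from Azure Blob Storage or S3;
    # a local path keeps the example self-contained).
    orders = spark.read.parquet("/data/raw/orders")

    daily = (
        orders
        .withColumn("order_date", F.to_date("created_at"))
        .groupBy("customer_id", "order_date")
        .agg(
            F.sum("amount").alias("total_amount"),
            F.count("*").alias("order_count"),
        )
    )

    daily.write.mode("overwrite").parquet("/data/curated/daily_orders")
    spark.stop()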
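
Similarly, a minimal pandas + SQLAlchemy sketch; the connection string, table, and columns are placeholders:

    # Minimal pandas + SQLAlchemy sketch: pull a slice of a table into a
    # DataFrame, summarize it, and write the result back.
    import pandas as pd
    from sqlalchemy import create_engine, text

    # Any SQLAlchemy-supported backend works; PostgreSQL shown here.
    engine = create_engine("postgresql+psycopg2://user:password@dbhost:5432/sales")

    df = pd.read_sql(
        text("SELECT customer_id, amount FROM orders WHERE created_at >= :since"),
        engine,
        params={"since": "2023-01-01"},
    )

    # Total order amount per customer, written to a reporting table.
    summary = df.groupby("customer_id", as_index=False)["amount"].sum()
    summary.to_sql("customer_totals", engine, if_exists="replace", index=False)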
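
And a minimal Airflow DAG sketch (Airflow 2.x style; the dag_id, schedule, and task bodies are hypothetical):

    # Minimal Airflow DAG sketch: a nightly extract -> load pipeline.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pull new rows from the source database")

    def load():
        print("write transformed rows to the warehouse")

    with DAG(
        dag_id="nightly_orders_pipeline",
        start_date=datetime(2023, 9, 1),
        schedule="@daily",  # Airflow 2.4+; use schedule_interval on older 2.x
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        load_task = PythonOperator(task_id="load", python_callable=load)

        extract_task >> load_task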

 

Nice to have 

  • Containerization and orchestration (Docker, Kubernetes)

  • Data flow automation system: Apache NiFi

  • ELK stack

  • Azure Functions

  • Git and GitHub

  • CI/CD implementations for data systems, including Blue-Green deployments

  • Atlassian Tools: Jira, Confluence


Compensation

  • $70 – $90 per hour

 
