Job Description
Design, build, and maintain robust data services and infrastructure to support critical business applications. Work closely with developers, analysts, and business stakeholders to ensure scalable, secure, and high-performance data solutions that drive actionable insights.
Responsibilities
• Design, develop, and maintain scalable data pipelines and services across multiple platforms and applications.
• Collaborate with cross-functional teams to gather and analyze data requirements.
• Implement data integration, transformation, and migration processes to support business initiatives.
• Ensure data accuracy, consistency, security, and availability across platforms.
• Optimize data workflows for performance, reliability, and scalability.
• Develop technical documentation and provide support for production data environments.
• Contribute to the continuous improvement of data engineering standards, tools, and best practices.
Qualifications
• Bachelor’s degree in Computer Science or a related field; or 4 years of relevant experience in lieu of a degree.
• Minimum 3 years of hands-on experience in data engineering or a related field, required.
• Strong proficiency in Python, PySpark, SQL, Spark, Hadoop, and Oracle, required.
• Experience with data architecture, ETL/ELT processes, and big data ecosystems.
• Solid understanding of data modeling, database structures, and data warehousing concepts.
• Familiarity with modern data practices such as machine learning integration, data science workflows, and business intelligence.
• Strong problem-solving skills and a collaborative mindset.
• Excellent communication and documentation skills.
• Experience with cloud platforms (e.g., AWS, Azure, GCP), preferred.
• Familiarity with CI/CD practices and DevOps in a data environment, preferred.
• Knowledge of data governance and compliance frameworks (e.g., GDPR, HIPAA), preferred.
• Exposure to Agile/Scrum methodologies, preferred.
Additional Details
Experience: 2–5 years