Big Data Engineer
Start date will be ASAP upon selection.
Remote work is acceptable. Duration is through the end of the year.
Los Angeles, CA
As a Data Engineer, you will lead the design, implementation, and delivery of large-scale, critical, and complex data architectures, storage systems, and pipelines.
What you’ll be doing:
• Build large-scale distributed data processing systems, data lakes, and optimize for both computational and storage efficiency on cloud platforms like AWS.
• Design, implement and automate data pipelines sourcing data from internal and external systems, transforming the data for the optimal needs of various systems.
• Write Python scripts and SQL queries, and perform data analysis.
• Design data schema and operate cloud-based data warehouses and SQL/NoSQL/temporal database systems.
• Write Extract-Transform-Load (ETL) jobs and Spark/Hadoop jobs.
• Own the design, development and maintenance of ongoing metrics, reports, analyses, dashboards, etc. to drive key business decisions.
• Monitor and troubleshoot operational or data issues in the data pipelines.
• Drive architectural plans and implementation for future data storage, ETL, reporting, and analytic solutions.
• Influence your team’s technical and business strategy by making insightful contributions to team priorities and approach.
• Provide insightful code reviews, receive code reviews constructively, and take ownership of outcomes (“you ship it, you own it”), working efficiently and routinely delivering the right things.
• Build relationships with your customers, partner teams, and the engineers on your team.
What we’re looking for:
• BS in Computer Science or a related field.
• Experience implementing big data processing technology: Hadoop, Apache Spark, etc.
• Coding proficiency in at least one modern programming language (Python, Ruby, Java, etc.).
• Experience writing and optimizing advanced SQL queries in a business environment with large-scale, complex datasets.
• Experience in cloud-first design, preferably Azure / AWS (VPC, Serverless databases and functions, dynamic autoscaling, container orchestration, etc.).
• Experience in data architecture, databases (e.g., MySQL, Oracle, PostgreSQL), SQL and DDD/ER/ORM design.
• Knowledge of software engineering practices & best practices for the software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations.
It was nice talking to you!
As discussed, please confirm the rate of $60/hr on C2C, all-inclusive, for your consultant Aakash for the position below.