Chevron
Data Engineer
Entry Level · On-site · Full-time
Location
Buenos Aires, Buenos Aires F.D., Argentina
Salary
Not listed
Experience
4+ years
Posted
1 day ago
Job Description
Data Engineer
Location: Buenos Aires, Buenos Aires, Argentina
Total Number of Openings
1
Chevron Global Business Services (GBS), located in Buenos Aires (Puerto Madero), Argentina, is accepting online applications for the position of Data Engineer. Successful candidates will join the IT organization, part of a successful multifunction service center with a workforce of more than 1,800 employees delivering business services and solutions across the globe.
A Data Engineer utilizes software engineering principles to deploy and maintain fully automated data transformation pipelines that combine a large variety of storage and computation technologies to handle a distribution of data types and volumes in support of data architecture design. A Data Engineer designs data products and data pipelines that are resilient to change, modular, flexible, scalable, reusable, and cost effective.
The Data Engineer is a key contributor within Chevron IT’s Leading Performance Team, responsible for designing, building, and maintaining robust data pipelines and analytical data products that enable cost transparency, performance benchmarking, supplier governance, and enterprise decision‑making. This role partners closely with Business Performance, Finance, Portfolio Management, and Vendor Management teams to deliver trusted, scalable, and well‑governed data solutions.
Key Responsibilities:
Design, develop, and maintain data pipelines and ETL processes using Microsoft Azure services (e.g., Azure Data Factory, Azure Synapse, Azure Databricks, Microsoft Fabric).
Utilize Azure storage accounts (e.g., Azure Data Lake Storage Gen2, Azure Blob Storage) to organize and maintain data pipeline outputs.
Collaborate with data scientists, data analysts, data architects, and other stakeholders to understand data requirements and deliver high-quality data solutions.
Optimize data pipelines in the Azure environment for performance, scalability, and reliability.
Ensure data quality and integrity through data validation techniques and frameworks.
Develop and maintain documentation for data processes, configurations, and best practices.
Monitor and troubleshoot data pipeline issues to ensure timely resolution.
Stay current with industry trends and emerging technologies to ensure our data solutions remain cutting-edge.
Manage the CI/CD process for deploying and maintaining data solutions.
Design, develop, and optimize scalable data pipelines to ingest, transform, and curate data from multiple enterprise sources (financial, portfolio, vendor, and operational systems).
Build and maintain analytical datasets and data models that support dashboards, executive reporting, and performance insights.
Ensure data reliability, performance, and scalability across cloud‑based data platforms and analytics environments.
Apply software engineering best practices (version control, automated testing, CI/CD) to data engineering solutions.
Enforce data quality, validation, and reconciliation controls to ensure accuracy and consistency across reports and data products.
Align data solutions with Chevron data architecture patterns, security standards, and governance requirements.
Partner with data architects and analytics teams to ensure reusable, standardized, and well‑documented data assets.
Collaborate with the Leading Performance team to enable cost optimization, supplier performance management, and benchmark‑driven insights through data.
Translate business and performance questions into technical data solutions and analytical structures.
Support ad‑hoc analysis and deep dives for senior leadership, ensuring clarity, traceability, and confidence in the underlying data.
Proactively identify opportunities to improve data pipelines, reporting efficiency, and automation.
Contribute to a feedback‑rich, collaborative, and improvement‑focused team culture.
Share knowledge and mentor less‑experienced team members as appropriate.
Required Qualifications
Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent experience), with demonstrated high proficiency in programming fundamentals.
At least 4 years of proven experience as a Data Engineer or in a similar role working with data and ETL processes.
Strong knowledge of Microsoft Azure services, including Azure Data Factory, Azure Synapse, Azure Databricks, Azure Blob Storage, and Azure Data Lake Storage Gen2.
Experience using SQL DML to query modern relational databases efficiently (e.g., SQL Server, PostgreSQL).
Strong understanding of Software Engineering principles and how they apply to Data Engineering (e.g., CI/CD, version control, testing).
Experience with big data technologies (e.g., Spark).
Proficiency with GitHub.
Strong problem-solving skills and attention to detail.
Excellent communication and collaboration skills.
Proven ability to work with large, complex datasets and deliver production‑grade data solutions.
Skilled at enabling analytics and AI-driven use cases through high-quality, well-governed data pipelines.
Fundamental understanding of AI and advanced-analytics concepts, including data preparation for model training, inference, and generative AI grounding.
Skilled at applying data quality, validation, and governance standards to ensure trusted analytical and AI-enabled insights.
Preferred Qualifications
Learning agility
Technical Leadership
Consulting and managing business needs
Strong experience in Python is preferred, but experience in other languages such as Scala, Java, or C# is also accepted.
Experience building Spark applications using PySpark.
Experience with file formats such as Parquet, Delta, and Avro.
Experience efficiently querying API endpoints as data sources.
Understanding of the Azure environment and related services such as subscriptions, resource groups, etc.
Understanding of Git workflows in software development.
Experience using Azure DevOps pipelines and repositories to deploy and maintain solutions.
Understanding of Ansible and how to use it in Azure DevOps pipelines.
Familiarity with Power BI or similar enterprise visualization tools.
Prior experience working in a global, matrixed organization.
Behavioral & Leadership Expectations
Demonstrates strong analytical thinking and problem‑solving skills.
Communicates complex technical concepts clearly to non‑technical stakeholders.
Operates with a high degree of ownership, accountability, and attention to detail.
Actively seeks feedback, adapts to change, and continuously improves ways of working.
Relocation Options:
Relocation could be considered.
International Considerations:
Expatriate assignments will not be considered.
Chevron regrets that it is unable to sponsor employment visas or consider individuals on time-limited visa status for this position.
Chevron participates in E-Verify in certain locations as required by law.