ETL stands for Extract, Transform, and Load, the foundation of the data warehousing process. An ETL Expert is someone who understands the data requirements of an organization and helps create an efficient method of managing that data. This professional should be able to collect data from multiple sources, organize that data for use, and make sure that it is clean and up to date with minimal downtime. An ETL Expert can also monitor and optimize data delivery and guarantee its performance, facilitating best practices in a business environment.
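By way of illustration, the three stages can be sketched in a few lines of Python. This is a minimal, hypothetical example (the orders.csv file, its columns, and the SQLite target are placeholders), not a production pipeline:

```python
import csv
import sqlite3

def extract(path):
    # Extract: stream raw rows from the source file.
    with open(path, newline="") as f:
        yield from csv.DictReader(f)

def transform(rows):
    # Transform: enforce types and drop incomplete records.
    for row in rows:
        if row.get("order_id") and row.get("amount"):
            yield (int(row["order_id"]), float(row["amount"]))

def load(records, db_path="warehouse.db"):
    # Load: write the cleaned records into the target table.
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS orders (order_id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)", records)
    conn.commit()
    conn.close()

load(transform(extract("orders.csv")))
```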
Here are some projects that our expert ETL Experts made real:
An experienced ETL Expert makes these demanding projects look effortless while also making sure they are delivered with accuracy and care. With the right knowledge on hand, they can make significant contributions, helping businesses expand their customer base by using their existing resources effectively. We invite you to post your project on Freelancer.com and find an expert ETL Expert who will help your business reach new heights.
Of 5,452 reviews, clients rate our ETL Experts 4.9 out of 5 stars.
I'm about to start a migration with Informatica PowerCenter and need direct support building the ETL processes. The dataset we will work with is in Oracle and we will extract it from a SQL database; the goal is a clean, controlled, and auditable migration to the new schema.
What I expect from you:
• Design and develop mappings, sessions, and workflows in PowerCenter covering the entire extract, transform, and load flow.
• Include error handling, data-quality validations, and detailed event logging.
• Document each mapping in the repository and deliver a short deployment/rollback manual.
We will accept the work when:
- All the processes exec...
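The mappings themselves live in the PowerCenter GUI, but the auditability this post asks for usually includes scripted checks around the tool. A hedged sketch of one such validation step, reconciling row counts between source and target with the python-oracledb driver (connection strings, credentials, table names, and the log path are all placeholders):

```python
import logging
import oracledb

logging.basicConfig(filename="migration_audit.log", level=logging.INFO)

def count_rows(dsn, user, password, table):
    # One reconciliation probe: count rows on a given side of the migration.
    with oracledb.connect(user=user, password=password, dsn=dsn) as conn:
        cur = conn.cursor()
        cur.execute(f"SELECT COUNT(*) FROM {table}")
        return cur.fetchone()[0]

source = count_rows("src-host/ORCL", "etl_user", "***", "CUSTOMERS")
target = count_rows("tgt-host/ORCL", "etl_user", "***", "STG_CUSTOMERS")
if source != target:
    logging.error("Row-count mismatch: source=%d target=%d", source, target)
else:
    logging.info("Reconciled %d rows", source)
```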
I need an experienced Python engineer who works confidently with AWS Glue to build and manage a small suite of data-integration jobs for a Hyderabad-based project. The core of the work is to design and automate Glue ETL pipelines that pull data from our production databases, catalog it accurately, and transform it into analytics-ready tables. Here is what I expect from the engagement:
• Develop, test, and deploy Glue ETL jobs in Python.
• Populate and maintain the Glue Data Catalog so new tables are discoverable and properly version-tracked.
• Implement efficient transformation logic that cleans, enriches, and partitions data for downstream reporting.
• Optimise job performance and cost by selecting the right worker types, job parameters, and database connectio...
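For orientation, this is the rough shape of a Glue ETL job in Python. The catalog database ("prod_db"), table name, S3 path, and partition key are hypothetical; only the Glue boilerplate is standard:

```python
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.transforms import DropNullFields
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job bootstrap: resolve arguments and initialise the job.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a catalogued source table into a DynamicFrame.
source = glue_context.create_dynamic_frame.from_catalog(
    database="prod_db", table_name="orders"
)

# Placeholder cleaning step; real transformation logic would go here.
cleaned = DropNullFields.apply(frame=source)

# Write partitioned Parquet for downstream reporting.
glue_context.write_dynamic_frame.from_options(
    frame=cleaned,
    connection_type="s3",
    connection_options={"path": "s3://analytics-bucket/orders/",
                        "partitionKeys": ["ingest_date"]},
    format="parquet",
)
job.commit()
```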
Job Title: Technical Specialist working US hours
Reports To: Technical Services Manager
Job Overview
We are seeking a resourceful and proactive Technical Specialist to join our Technical Services Team. As part of a team that provides custom solutions for clients, including custom reports, scripts, integrations, and data migrations, you will play a key role in bridging the gap between clients' business needs and our development team's technical expertise. This role requires strong analytical skills, initiative, and the ability to problem-solve independently. You will also work closely with developers, Customer Success Managers, and clients to deliver tailored solutions.
Responsibilities
• Data Migrations: Oversee data migration processes from legacy systems to our newest pr...
Overview
We are looking for a Senior Data Engineer to join the team that owns our core data product, the platforms and infrastructure that support it, and the data services consumed by our customers. You will work closely with other engineers, product stakeholders, and customers to design, build, and operate production-grade data systems, focusing on reliable ingestion, transformation, and exposure of data through APIs and product integrations. As a senior member of the team, you will be expected to help set technical standards, improve data quality and operational reliability, and contribute to shared ownership of systems from design through to reliable operation in production environments.
What you'll do
• Design, build, and maintain data ingestion pipelines
• Develop and...
I'm standing up a series of production data pipelines and need an IT professional who can move comfortably between Python scripting, SQL optimisation, PySpark transformations, and Airflow orchestration. The immediate focus is end-to-end pipeline build-out: designing clean ingestion logic, transforming data in Spark, writing efficient queries, and scheduling everything through well-structured Airflow DAGs. If you can demonstrate hands-on experience across all four technologies (Python, SQL, PySpark, and Airflow) and enjoy owning a pipeline from raw source to curated output, I'd like to work together.
Deliverables I'm expecting:
• A working set of PySpark jobs that handle ingestion, transformation, and output staging
• Airflow DAGs that schedule, monit...
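A hedged sketch of how the orchestration half of this might look: an Airflow DAG (2.4+ syntax) that submits two hypothetical PySpark jobs in sequence. The application paths, DAG id, and connection id are placeholders:

```python
from datetime import datetime
from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="daily_ingestion",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = SparkSubmitOperator(
        task_id="ingest_raw",
        application="/opt/jobs/ingest.py",      # hypothetical PySpark ingestion job
        conn_id="spark_default",
    )
    transform = SparkSubmitOperator(
        task_id="transform_curated",
        application="/opt/jobs/transform.py",   # hypothetical transformation job
        conn_id="spark_default",
    )
    # Ingestion must finish before transformation starts.
    ingest >> transform
```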
- **Core Architecture:** Spring Cloud + Kafka + Hadoop + Python Automation. This project requires a certain level of technical expertise.
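Where the Python automation meets that stack is typically the Kafka boundary. A minimal sketch using the kafka-python client (broker address, topic, and payload shape are placeholders; the Spring Cloud and Hadoop sides would consume downstream):

```python
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
# Publish a hypothetical pipeline event for downstream services to consume.
producer.send("ingestion-events", {"source": "crm", "status": "extracted"})
producer.flush()
```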
I run a data platform built on Spark and Python, and I need an experienced engineer who can sit with me in Pune three to four times a week (roughly two hours a day) to keep it running smoothly. Most of the immediate work revolves around tracking down bugs in existing PySpark jobs, but the role naturally extends to writing fresh code when gaps appear, tightening our data-pipeline orchestration, and mapping end-to-end data lineage so every downstream consumer stays confident in their numbers.
Typical day-to-day work you will tackle:
• Debug production PySpark jobs and accompanying Python utilities
• Refactor or rewrite modules where quick fixes will not suffice
• Optimise and monitor pipeline schedules (Palantir Foundry)
• Document lineage and hand off clear, repro...
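Two routine first moves when debugging a misbehaving PySpark job, sketched here with a hypothetical table and key column: inspect the physical plan for unexpected shuffles, and check key distribution, since a handful of huge keys often explains a stuck stage:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("debug-session").getOrCreate()
df = spark.table("raw.events")  # placeholder table name

# Surface the physical plan to spot unexpected shuffles or full scans.
df.explain(mode="formatted")

# Check key skew: the top counts reveal hot keys behind slow stages.
(df.groupBy("customer_id")
   .count()
   .orderBy(F.desc("count"))
   .show(10))
```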
I have an existing analytics initiative that now needs a dedicated Redshift-based warehouse. The core objective is to design and implement a robust schema in Amazon Redshift, then ingest data coming from three different sources: our operational SQL databases, a set of RESTful APIs, and periodic flat-file drops in CSV or JSON. Here is what I'm aiming for:
• A well-structured Redshift warehouse (star or snowflake schema, whichever is most appropriate) built to scale and documented clearly.
• Reliable, automated ingestion pipelines for each source type. For SQL we currently use PostgreSQL and MySQL; for APIs the payloads are mostly JSON; the flat files live in S3.
• Transformations that standardise data types, handle slowly changing dimensions, and enforce dat...
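For the flat-file path specifically, the usual pattern is a Redshift COPY from S3 issued through a standard Postgres driver. A hedged sketch with psycopg2; the cluster endpoint, credentials, staging table, bucket path, and IAM role are all placeholders:

```python
import psycopg2

conn = psycopg2.connect(
    host="analytics.example.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="warehouse", user="loader", password="***",
)
with conn, conn.cursor() as cur:
    # COPY is server-side: Redshift reads directly from S3 in parallel.
    cur.execute("""
        COPY staging.orders
        FROM 's3://ingest-bucket/orders/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy'
        FORMAT AS JSON 'auto';
    """)
```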
I'm standing up a new analytics environment in Azure and I need an expert who can take full ownership of three intertwined streams of work:
• Data migration – Build end-to-end pipelines in Azure Data Factory that handle extraction, transformation, and loading from our on-prem sources into an Azure Data Lake. Azure Functions or Logic Apps can be used where they add value for event-driven steps or lightweight transformations.
• Security & RBAC – Design and apply role-based access controls across the whole solution so that data in the lake, ADF, and any supporting services stays locked to the right teams. Policy-driven best practice is a must.
• CI/CD – Set up a clean, automated release flow (Azure DevOps or GitHub Actions, your choice) tha...
I have several disparate sources holding our inventory information and I want it all pulled together into one clean, well-structured Microsoft SQL Server database file. The job is straightforward in principle: extract every piece of inventory data you can reach, reconcile duplicate SKUs or mismatched item descriptions, then load the final result into a single .sql or .bak file that I can restore through SQL Server Management Studio. Please preserve every existing field that carries operational value—quantities on hand, reorder thresholds, location codes, cost figures, and any audit timestamps—and normalise the schema where it clearly improves reporting speed or data integrity. If a field appears under different names in different systems, map it consistently; if you uncover ga...
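The reconciliation step described here can be sketched concisely in Python with pandas and SQLAlchemy. Everything below is hypothetical (the two export files, the column names, the "keep the most complete record" rule, and the connection string); it illustrates the shape of the dedupe, not the final mapping:

```python
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine(
    "mssql+pyodbc://user:pass@server/inventory?driver=ODBC+Driver+17+for+SQL+Server"
)

# Hypothetical extracts from two of the disparate sources.
legacy = pd.read_csv("legacy_export.csv")
webshop = pd.read_csv("webshop_export.csv")

merged = pd.concat([legacy, webshop], ignore_index=True)
merged["sku"] = merged["sku"].str.strip().str.upper()  # normalise SKU spelling

# For duplicate SKUs, keep the row with the most populated fields.
merged["completeness"] = merged.notna().sum(axis=1)
deduped = (merged.sort_values("completeness", ascending=False)
                 .drop_duplicates(subset="sku")
                 .drop(columns="completeness"))

deduped.to_sql("inventory", engine, schema="dbo", if_exists="replace", index=False)
```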
Project Title: Hiring Experts for IT and Digital Transformation Roles
Project Description: We are seeking highly qualified professionals to join our team for a comprehensive digital transformation initiative. The roles require expertise in IT, digital infrastructure, social protection, and legal frameworks. Below are the positions and their respective qualifications:
1. Senior Project Manager
- PMP or PRINCE2 certification
- 10 years of experience managing complex projects in IT/e-government
2. Principal IT Architect
- Degree in Computer Engineering or equivalent, TOGAF certification
- 10 years of experience as an enterprise architect with expertise in API, ETL, and cloud technologies
3. Business Architects/Analysts
- Degree or Master's in Computer Security, Data Science, or Computer...
Help wanted: daily/multi-daily comparison of supplier prices and stock levels (B2B webshop)
We operate a B2B webshop where business customers can place orders or commission items on request. Most of the goods are sourced directly from manufacturers. For most suppliers we have access to their stock levels and current prices; for some, no login is required, while others require login credentials. We are looking for a solution or a skilled professional who can help us retrieve supplier prices and stock levels daily (ideally multiple times per day) and compare them with our internal purchase prices so we stay up to date. No automatic syncing with our system or automatic price changes are required. It is sufficient if discrepancies between supplier prices and our system pu...
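Since no write-back to the shop system is wanted, one run of this can be a simple fetch-join-report script. A hedged sketch in Python; the supplier URL, credentials, file names, and column names are all placeholders:

```python
import io
import requests
import pandas as pd

# Pull one supplier's current price list (some endpoints need a login).
resp = requests.get(
    "https://supplier.example.com/export/prices.csv",
    auth=("b2b_user", "***"),
    timeout=30,
)
resp.raise_for_status()
supplier = pd.read_csv(io.StringIO(resp.text))

# Join against internal purchase prices and keep only the mismatches.
internal = pd.read_csv("internal_purchase_prices.csv")
joined = internal.merge(supplier, on="article_no",
                        suffixes=("_internal", "_supplier"))
diff = joined[joined["price_internal"] != joined["price_supplier"]]

# Report only; no automatic syncing or price changes.
diff.to_csv("price_discrepancies.csv", index=False)
print(f"{len(diff)} articles differ from the supplier's current price")
```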
I need a reusable ETL framework built inside Databricks notebooks, version-controlled in Bitbucket and promoted automatically through a Bitbucket Pipeline. All source data arrives via GraphQL APIs, so the job includes handling authentication, pagination, and schema inference before landing raw payloads in Delta tables. A dedicated cleaning stage must then standardise and validate the data before it moves on to the curated layer. The structure should be modular (ideally a bronze/silver/gold notebook hierarchy) so I can slot in new sources or extra transformations without touching the core logic. I also want a lightweight Python package (wheel) that wraps the GraphQL connector and can be attached to any cluster.
Acceptance criteria
• Parameter-driven notebooks organised by...
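The heart of the wheel-packaged connector would be an authenticated, cursor-paginated extraction loop. A hedged sketch with requests; the endpoint, query shape, and field names follow a common Relay-style convention but are assumptions, not the client's actual schema:

```python
import requests

def fetch_all(endpoint, token, page_size=100):
    # Cursor-paginated GraphQL extraction: yields raw nodes for the bronze layer.
    query = """
    query($cursor: String, $first: Int!) {
      records(first: $first, after: $cursor) {
        pageInfo { hasNextPage endCursor }
        nodes { id payload }
      }
    }
    """
    headers = {"Authorization": f"Bearer {token}"}
    cursor = None
    while True:
        resp = requests.post(
            endpoint,
            headers=headers,
            json={"query": query,
                  "variables": {"cursor": cursor, "first": page_size}},
            timeout=30,
        )
        resp.raise_for_status()
        page = resp.json()["data"]["records"]
        yield from page["nodes"]
        if not page["pageInfo"]["hasNextPage"]:
            break
        cursor = page["pageInfo"]["endCursor"]
```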
Job Title: NestJS Backend Developer – High-Performance Car Bulk Import (ETL)
The Challenge
We are looking for a senior-level NestJS developer to build a robust, production-ready car data import engine for our vehicle marketplace. This isn't a simple "upload and save" task; we require a sophisticated streaming pipeline capable of processing massive datasets (CSV, XML, JSON) with minimal memory footprint and high reliability.
Core Task
Build a POST /imports/cars endpoint that:
• Automatic Format Detection: Handles multipart/form-data and identifies the file type (CSV, XML, or JSON) programmatically.
• Stream-Based Processing: Processes data using Node.js Streams / AsyncIterables. The application should never load the full file into RAM.
• Data Pipeline: Implements a clean ...
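The post targets NestJS, but the stream-first principle it describes is language-agnostic; for consistency with the other examples here, a minimal sketch in Python of the same idea (rows flow through in batches, the file is never read whole; the path and the persist helper are hypothetical):

```python
import csv

def import_cars(path, batch_size=500):
    batch = []
    with open(path, newline="") as f:       # the file is never loaded whole
        for row in csv.DictReader(f):       # rows stream one at a time
            batch.append(row)
            if len(batch) >= batch_size:
                persist(batch)              # hypothetical bulk-insert helper
                batch.clear()
    if batch:
        persist(batch)                      # flush the final partial batch

def persist(rows):
    print(f"would bulk-insert {len(rows)} rows")
```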
- Designed automated ETL routines to standardize disparate source formats, reducing manual reconciliation time by 40%.
- Performed root-cause analysis and cohort studies to identify process bottlenecks and cost drivers.
- Built enterprise dashboards with row-level security, incremental refresh, and performance tuning; improved executive visibility into KPIs.
- Implemented and supported ERP reporting modules, mapped master data across modules, and led data migration and validation during upgrades.
- Established validation rules, lineage documentation, and data quality KPIs to maintain trust in analytics.
- Replaced manual spreadsheets with parameterized Power BI reports and scheduled dataflows, saving recurring effort and reducing errors.
- Performance Metrics: Delivered dashboards that improved...