SEEKING WORK | São Paulo, Brazil | Remote: Yes – available globally
I'm a Data Engineer with 3+ years of experience in large-scale data infrastructure, automation, and financial systems. At Santander and Bradesco I led projects on credit scoring systems and data pipelines processing billions of records with Spark and Python – mission-critical systems that move billions of dollars monthly – with a strong focus on performance, automation, and data reliability.
Projects:
Data Pipeline - Crawler + Shopify Integration
(Crawls e-commerce product data and automates publishing to Shopify; a sketch of the flow follows the project list.)
Low-Cost Elasticsearch Cluster Setup
(Docker-based Elasticsearch cluster with Ngrok tunneling, TLS, and node discovery – designed for development, testing, and MVP environments; a client-side verification sketch follows below.)
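
For a concrete picture of the first project, here is a minimal Python sketch of the crawl-and-publish flow, assuming requests and BeautifulSoup for the crawl step; the shop domain, access token, page URL, and CSS selectors are all placeholders, not details from the actual project. The publish step uses Shopify's REST Admin API product endpoint.

    # Minimal sketch of the crawler -> Shopify flow (illustrative only).
    # SHOP, TOKEN, the URL, and the CSS selectors below are placeholders.
    import requests
    from bs4 import BeautifulSoup

    SHOP = "example-store"   # placeholder shop subdomain
    TOKEN = "shpat_..."      # placeholder Admin API access token

    def crawl_product(url: str) -> dict:
        """Fetch a product page and extract title/price (selectors are hypothetical)."""
        html = requests.get(url, timeout=30).text
        soup = BeautifulSoup(html, "html.parser")
        return {
            "title": soup.select_one("h1.product-title").get_text(strip=True),
            "price": soup.select_one("span.price").get_text(strip=True).lstrip("$"),
        }

    def publish_to_shopify(item: dict) -> dict:
        """Create the product via Shopify's REST Admin API."""
        resp = requests.post(
            f"https://{SHOP}.myshopify.com/admin/api/2024-01/products.json",
            headers={"X-Shopify-Access-Token": TOKEN},
            json={"product": {"title": item["title"],
                              "variants": [{"price": item["price"]}]}},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["product"]

    if __name__ == "__main__":
        product = crawl_product("https://example.com/some-product")  # placeholder URL
        print(publish_to_shopify(product)["id"])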
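For the second project, a small client-side sketch of how the finished cluster might be verified through the Ngrok HTTPS tunnel using the official elasticsearch Python client; the tunnel URL, credentials, and CA path are placeholders, and the Docker/compose setup itself is not shown here.

    # Sketch: verify the tunneled, TLS-secured cluster from a client
    # (tunnel URL, credentials, and CA path are placeholders).
    from elasticsearch import Elasticsearch

    es = Elasticsearch(
        "https://example.ngrok.io:443",      # placeholder Ngrok HTTPS tunnel
        basic_auth=("elastic", "changeme"),  # placeholder credentials
        ca_certs="certs/http_ca.crt",        # CA bundle generated for the cluster
    )

    # Cluster health confirms TLS connectivity end to end.
    health = es.cluster.health()
    print(health["status"], health["number_of_nodes"])

    # Listing nodes confirms discovery: every container should appear.
    for node in es.cat.nodes(format="json"):
        print(node["name"], node["node.role"])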
Availability:
Open to freelance (part-time or full-time), contract, or permanent roles – short- or long-term, remote globally or with relocation. Particularly interested in product-driven teams solving meaningful problems with data.
Location: São Paulo, Brazil
Remote: Yes – open to Remote (Global), Remote (US), Remote (EU), or hybrid
Willing to relocate: Yes – US, Canada, Europe
Technologies: Python, SQL, PySpark, Apache Spark, Airflow, Hadoop, NiFi, Docker, OpenShift,
Azure DevOps, Argo CD, Git, Linux, Selenium, CI/CD, ETL, Data Lakes, REST APIs
Databases: PostgreSQL, MySQL, Redis, Elasticsearch, HBase
Data & Analytics: Pandas, NumPy, Scikit-learn, Matplotlib, Seaborn
Resume/CV: https://www.linkedin.com/in/gustavo-d-12627416a/
Email: gustavofortti@gmail.com