What I can help with

Data pipelines

ETL and ELT workflows — from raw sources to clean, queryable data. I've built and maintained pipelines in Databricks, Azure Data Factory, Azure Synapse, and Airflow. Whether you're starting from scratch or untangling something that grew organically, I've done both.

Database performance

Slow queries, missing indexes, tables that grew beyond their original design. I optimize SQL Server and T-SQL at scale — execution plans, partitioning, archival strategies. The kind of work that turns a 45-second query into a 2-second one.
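The seek-vs-scan difference behind that speedup can be sketched in plain Python (an analogy, not SQL Server itself — the data and function names here are invented for illustration): searching sorted data behaves like an index seek, scanning unsorted rows like a table scan.

```python
from bisect import bisect_left

def table_scan(rows, key):
    # Linear scan over every row: what the engine does without a usable index.
    return [r for r in rows if r[0] == key]

def index_seek(sorted_rows, key):
    # Binary search on sorted data: roughly what an index seek does.
    i = bisect_left(sorted_rows, (key,))
    out = []
    while i < len(sorted_rows) and sorted_rows[i][0] == key:
        out.append(sorted_rows[i])
        i += 1
    return out

# 100,000 rows, keys repeating 0..999 — both approaches find the same rows,
# but the seek touches a handful of entries instead of all of them.
rows = [(i % 1000, f"row-{i}") for i in range(100_000)]
sorted_rows = sorted(rows)
```

Same answer either way; the seek just does logarithmic work where the scan does linear work, which is the whole story behind most 45-second-to-2-second fixes.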

Cloud data platforms

Setting up or migrating to Databricks, Azure Synapse, or Snowflake. I handle the architecture, the Spark jobs, the notebook workflows, and the part where you explain to stakeholders why it was worth it.

Data quality & monitoring

Automated checks that catch problems before your dashboard does. I build validation layers, anomaly detection, and alerting (Slack, email, whatever your team uses) so bad data doesn't silently poison your reports.
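A validation layer can be as simple as a function that inspects each batch and returns a list of issues to alert on. This is a minimal sketch with invented column names and checks; real pipelines layer on volume anomalies, schema drift, and freshness checks.

```python
def validate_batch(rows, required=("id", "amount")):
    # Return human-readable issues for one batch of records.
    # Anything returned here would be routed to Slack/email before
    # the data reaches dashboards.
    issues = []
    if not rows:
        issues.append("batch is empty")
    for i, row in enumerate(rows):
        for col in required:
            if row.get(col) is None:
                issues.append(f"row {i}: missing {col}")
    ids = [r.get("id") for r in rows]
    if len(ids) != len(set(ids)):
        issues.append("duplicate ids in batch")
    return issues

batch = [{"id": 1, "amount": 9.5}, {"id": 1, "amount": None}]
print(validate_batch(batch))
```

The point is the shape: checks are cheap, explicit, and run before the report does, so nobody discovers bad data in a Monday-morning dashboard.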

Dashboards & reporting

Power BI dashboards that people actually use — clean data models, sensible measures, and designs that answer real questions instead of just looking busy.

Code refactoring & migrations

Legacy SQL dialects to T-SQL. Airflow to Azure Synapse. Monolithic scripts to modular, testable code. I've refactored 5,000+ lines of Python in a single project and migrated 40+ ETL processes across platforms. The unglamorous work that keeps systems alive.

Where I've been

Over five years in data engineering — from research to consulting to freelance. The consulting years were intense (long hours, weekends, tight deadlines) but genuinely fun: every company had its own data problems, its own tech stack, and its own way of thinking. I got to work across food production, naval engineering, telecommunications, and finance, learning fast and shipping faster. That kind of cross-industry exposure is hard to get any other way, and it's what lets me walk into any codebase, any team, and start delivering quickly.

Sometimes I still miss those days — awesome, hard-working people and a learning curve that never flattened. In 2024 I went freelance to take everything I'd learned and apply it independently, picking the problems I find most interesting and bringing the same intensity to my own clients.

2024 — now
Freelance Data Engineer — Remote. Infrastructure, databases, SQL Server optimization, data quality pipelines, and a web app that gives an entire company visibility into its data. Lately I've been leaning heavily into AI-assisted workflows — I built a custom MCP server that lets me securely query production databases, pull Confluence docs, and inspect Python codebases through a single interface, which has dramatically sped up my day-to-day work.
2023 — 2024
Data Engineer at Ernst & Young — Milan. The fastest-moving environment I'd been in: lots of projects, the latest cloud technologies, and real client-facing work. I learned how to sit in a room with a client, understand their pain points, and come back with solutions — sometimes the ones they asked for, sometimes better ones we spotted ourselves. This is also where I first saw AI applied seriously to data understanding, and where I got comfortable with how large companies actually adopt the cloud.
2022 — 2023
Data Engineer at Kyndryl — Milan. Internal projects: refactoring legacy systems while building new, better processes from scratch. This is where I really learned that writing code is the easy part — what matters is writing code other people can maintain. Clean structure, consistent conventions, and understanding what each stakeholder actually cares about (reliable pipelines, data they can trust, clear KPIs that drive decisions). Managed 60+ Databricks processes, migrated Airflow to Azure Synapse, and built web apps for data ingestion.
2021 — 2022
Data Engineer at Sopra Steria — Milan. Where my career really started — and where I discovered that I love automation. I maintained 40+ ETL processes on IBM DataStage and built 12 Azure Data Factory pipelines, but the thing I'm proudest of is a script that automatically translated and ran Teradata SQL as T-SQL. It saved over 200 working hours of manual migration work. That early win taught me something I've carried ever since: if you're doing something boring more than twice, automate it.
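The core of that translation script is a table of dialect rewrites applied in order. The sketch below is illustrative, not the original code — three of the mappings such a Teradata-to-T-SQL tool needs (the real migration handled far more constructs than this):

```python
import re

# A few Teradata -> T-SQL rewrites (far from exhaustive).
RULES = [
    # Teradata allows SEL as shorthand for SELECT.
    (re.compile(r"\bSEL\b", re.IGNORECASE), "SELECT"),
    # ADD_MONTHS(date, n) becomes DATEADD(MONTH, n, date) — note the swap.
    (re.compile(r"\bADD_MONTHS\(\s*([^,]+),\s*([^)]+)\)", re.IGNORECASE),
     r"DATEADD(MONTH, \2, \1)"),
    # ZEROIFNULL(x) becomes ISNULL(x, 0).
    (re.compile(r"\bZEROIFNULL\(\s*([^)]+)\)", re.IGNORECASE),
     r"ISNULL(\1, 0)"),
]

def teradata_to_tsql(sql: str) -> str:
    for pattern, repl in RULES:
        sql = pattern.sub(repl, sql)
    return sql

print(teradata_to_tsql(
    "SEL ZEROIFNULL(balance), ADD_MONTHS(open_date, 3) FROM acct"
))
# -> SELECT ISNULL(balance, 0), DATEADD(MONTH, 3, open_date) FROM acct
```

Regex rewrites like these only cover the mechanical 80%; the remaining constructs still need a human eye, which is exactly why automating the boring bulk pays for itself.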
2021
Data Scientist at Links Foundation — Turin. Research: neural networks for cross-modal retrieval (text ↔ images) and music sentiment classification. My first real job. I walked in knowing Python and walked out knowing PyTorch, data augmentation, and how to read papers without falling asleep.

Education & certifications

Bachelor's in Economics, specialization in Data Science — Università degli Studi di Torino (2019–2022). Thesis: "Estimation of the effect of income inequality on technological development". Not a CS degree, but the data science track taught me to think in data, and the economics taught me to think about why anyone cares.

  • Azure Data Engineer Associate (DP-203)
  • Azure Data Fundamentals (DP-900)

Tech stack

Daily drivers

  • Python
  • T-SQL / SQL Server
  • Databricks / Spark
  • Azure Synapse
  • Azure Data Factory

Comfortable with

  • Power BI
  • Snowflake
  • Teradata
  • C# / .NET
  • Git / GitLab / GitHub

AI & productivity

  • Claude / LLM-assisted development
  • Custom MCP servers
  • PyTorch

Side quests

  • Flutter / Dart
  • TypeScript
  • HTML / CSS
  • Competitive programming

Side projects

Sossoldi

An open-source personal finance app for iOS and Android, built with Flutter. I'm part of the team developing it.

github.com/RIP-Comm/sossoldi

This website

narcismiclaus.com — built from scratch with Astro. Three languages, interactive finance calculators, and the article you're reading right now. No templates, no Bootstrap, just code and opinions.

OSSU Computer Science

Working through the Open Source Society University CS curriculum — filling in the gaps a non-CS degree leaves behind. Algorithms, data structures, systems, the fundamentals that make everything else click.

github.com/ossu/computer-science

Languages

I speak Italian and Romanian natively, English fluently, and enough German to order food and misunderstand the answer. This entire site exists in the first three of those languages because my audience — and my life — spans all of them.

Let's work together

If you need a data engineer — for a project, a migration, a mess that needs untangling, or just a conversation about whether your data stack is heading in the right direction — send me an email. I read everything. I reply fast. No forms, no chatbots, just a human who actually knows the difference between a left join and an inner join.

consulting@narcismiclaus.com

Or find me on LinkedIn if email feels too formal.