Backend Engineer with 8+ years of experience building reliable, data-intensive enterprise systems using Python, SQL, PySpark, and AWS.
I specialize in:
- 🔍 Smart file comparison engines for large JSON / XML / CSV files
- 🗄️ Database-heavy, resilient backend services (Oracle, PostgreSQL)
- ☁️ Cloud-native solutions on AWS (S3, EC2, EMR)
- ⚙️ Performance optimization & data correctness
- 🔄 Migration validation & reporting (Oracle ↔ PostgreSQL)
- ⏱️ Long-running, fault-tolerant data pipelines
**Languages & Frameworks**
- Python, SQL, PySpark
- Django, REST APIs
**Databases**
- Oracle, PostgreSQL
**Cloud & DevOps**
- AWS EC2, S3, EMR, SQS
- Jenkins, Shell Scripting
**Data & Tools**
- JSON, XML, Parquet
- Git, Linux
A scalable engine for deep comparison of complex, nested JSON files.
- Auto-detects identifiers in lists
- Handles unordered collections
- Timestamp tolerance support
- Clean diff paths for reporting
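The unordered-collection and timestamp-tolerance ideas above can be sketched roughly as follows. This is a minimal illustration, not the engine's actual API: the tolerance value, the ISO-8601 heuristic, and the `<missing>` sentinel are all assumptions.

```python
import json
from datetime import datetime

TS_TOLERANCE_SECONDS = 5  # assumed tolerance; the real engine would make this configurable


def _is_timestamp(value):
    # Illustrative heuristic: treat ISO-8601 strings as timestamps.
    try:
        datetime.fromisoformat(str(value))
        return True
    except ValueError:
        return False


def diff_json(expected, actual, path="$"):
    """Recursively compare two JSON-like values, returning (path, expected, actual) diffs."""
    diffs = []
    if isinstance(expected, dict) and isinstance(actual, dict):
        for key in sorted(set(expected) | set(actual)):
            if key not in expected:
                diffs.append((f"{path}.{key}", "<missing>", actual[key]))
            elif key not in actual:
                diffs.append((f"{path}.{key}", expected[key], "<missing>"))
            else:
                diffs.extend(diff_json(expected[key], actual[key], f"{path}.{key}"))
    elif isinstance(expected, list) and isinstance(actual, list):
        # Unordered comparison: sort both sides by a canonical serialization before pairing.
        canon = lambda v: json.dumps(v, sort_keys=True)
        for i, (e, a) in enumerate(zip(sorted(expected, key=canon), sorted(actual, key=canon))):
            diffs.extend(diff_json(e, a, f"{path}[{i}]"))
        if len(expected) != len(actual):
            diffs.append((f"{path}.length", len(expected), len(actual)))
    elif expected != actual:
        if _is_timestamp(expected) and _is_timestamp(actual):
            delta = abs((datetime.fromisoformat(str(expected))
                         - datetime.fromisoformat(str(actual))).total_seconds())
            if delta <= TS_TOLERANCE_SECONDS:
                return diffs  # within tolerance: not a diff
        diffs.append((path, expected, actual))
    return diffs
```

Sorting by a canonical serialization is one simple way to pair up unordered list elements; the real engine's identifier auto-detection would pair elements by a detected key instead.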
Validates large-scale data migration between Oracle and PostgreSQL.
- Extracts data to Parquet
- Compares row-level & schema-level data
- Generates reconciliation reports
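In production this kind of validation runs over Parquet extracts (e.g. with PySpark), but the core row-level reconciliation idea can be sketched in plain Python. The key and column names here are illustrative, not the project's actual schema.

```python
def reconcile(source_rows, target_rows, key="id"):
    """Compare two row sets (lists of dicts) keyed by a primary key.

    Returns a reconciliation report: keys missing on either side, plus
    rows whose non-key columns differ. "id" is an assumed key column.
    """
    src = {row[key]: row for row in source_rows}
    tgt = {row[key]: row for row in target_rows}
    report = {
        "missing_in_target": sorted(src.keys() - tgt.keys()),
        "missing_in_source": sorted(tgt.keys() - src.keys()),
        "mismatched": {},
    }
    # For keys present on both sides, record every column whose values differ.
    for k in src.keys() & tgt.keys():
        diffs = {col: (src[k].get(col), tgt[k].get(col))
                 for col in src[k].keys() | tgt[k].keys()
                 if src[k].get(col) != tgt[k].get(col)}
        if diffs:
            report["mismatched"][k] = diffs
    return report
```

At scale the same comparison maps naturally onto set-difference operations over the two Parquet-backed DataFrames rather than in-memory dicts.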
A fault-tolerant pipeline for validating data stored on S3.
- Long-running job support
- Retry-safe database connections
- Clean reporting & logging
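Retry-safe database connections come down to bounded retries with backoff. A minimal sketch of that pattern: the exception types, attempt count, and delays are assumptions, not the pipeline's actual configuration (a real pipeline would retry on driver-specific errors such as `psycopg2.OperationalError` and log each attempt).

```python
import random
import time


def with_retries(fn, attempts=5, base_delay=1.0, retryable=(ConnectionError,)):
    """Run fn, retrying transient errors with exponential backoff plus jitter."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except retryable:
            if attempt == attempts:
                raise  # out of attempts: surface the last error
            # Exponential backoff with up to 10% jitter to avoid thundering herds.
            delay = base_delay * (2 ** (attempt - 1)) * (1 + random.random() * 0.1)
            time.sleep(delay)
```

Wrapping each connection attempt (or each idempotent unit of work) this way lets a long-running job survive transient network or database hiccups instead of failing outright.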
- Build clean, production-style backend projects
- Share reusable engineering utilities
- Improve system design & scalability knowledge
- Contribute to open-source when possible
- 💼 LinkedIn: linkedin.com/in/vikram-raghuwanshi-0766503a
- 📧 Email: vikramraghuwanshi12@gmail.com
⭐ If you like my work, feel free to star or fork the repositories!