CakeResume Talent Search

Advanced search
On
4 to 6 years
6 to 10 years
10 to 15 years
15+ years
India
Avatar of P.Koteswar.
Cloud Services Manager @AIA
2021 ~ 2022
DevOps Engineer, Site Reliability Engineer
Within one month
DevOps & SRE: Jenkins, ArgoCD, Docker, Packer, Git, GitLab, Bitbucket, GitOps, Maven, Ansible, JFrog, Chef, Vagrant, MSBuild, build and release, deployment strategies, management, Vantage, infra cost optimisation. AWS: EC2, S3, IAM, Route53, VPC, Auto Scaling, EBS, ELB, Elasticsearch, CloudWatch, RDS, Lambda, EKS, MSK, AWS Secrets, AWS DevOps, Kinesis, GuardDuty, Boto3, AWS CLI. Monitoring & logging: Prometheus, Alertmanager, Grafana dashboards, AWS CloudWatch, Azure Log Analytics, Azure Monitor, Kibana, Thanos, Graylog, Opsgenie, Slack. Network & security: Istio service mesh, SSL/TLS certs, network firewalls, ingress & egress controllers, routing, TGW and VPC peering. Infra as code: Terraform, Terragrunt, Go, ARM templates, AWS CloudFormation. Database
DevOps / CI / CD
Site Reliability Engineering
Terraform/Ansible/Jenkins
Employed
Actively seeking opportunities
Full-time / Interested in remote work
10 to 15 years
Sikkim Manipal University
Information Technology
Avatar of the user.
Full Stack Software Developer @Avidbots India Private Limited
2023 ~ Present
Software Engineer
Within one month
Docker
Docker Compose
JavaScript
Employed
Actively seeking opportunities
Full-time / Interested in remote work
4 to 6 years
Vasavi college of Engineering
Bachelor of Engineering
Avatar of the user.
Lead Software Developer @Persistent Systems
2022 ~ Present
Within two months
Java
SQL
Employed
Actively seeking opportunities
Full-time / Interested in remote work
4 to 6 years
VIT-Vellore
Computer Science
Avatar of Devraj Kumar.
Staff Engineer @NextGen Healthcare India
2019 ~ Present
Senior Software Developer
Within three months
Devraj Kumar, Staff Engineer, Bengaluru, Karnataka, India. Dynamic staff engineer with a 12-year track record at NextGen Healthcare, wielding a robust tech stack including C#, ASP.NET MVC, Web API, SQL Server, and AWS. B.Tech (Computer Science) graduate eager to leverage extensive experience in software development, team leadership, and innovative problem-solving to contribute to your team's success. Proficient with modern tools like Microsoft Visual Studio 2022, SSMS, JIRA, and Salesforce, and versed in Agile methodologies, I am passionate about driving projects to new heights and delivering exceptional results. [email protected]
C#.NET development
PL/SQL
LINQ
Employed
Actively seeking opportunities
Full-time / Interested in remote work
10 to 15 years
North Maharashtra University, Jalgaon, Maharashtra
Computer Science & Engineering
Avatar of Harish Kumar.
Sr DevOps Engineer @SoftElevation
2022 ~ Present
Senior DevOps Engineer
Within one month
Harish Kumar. "Mastering the Art of DevOps: Elevating Skills in Sync with Company Growth." Mohali, India. Work Experience: Sr DevOps Engineer • SoftElevation, November to Present. Working on the TEVVO (healthcare) product for build and release management, using technologies such as AWS, OpenSearch, Kafka, Bitbucket Pipelines, VPN, Terraform, etc. Managing another product, GMT (healthcare), using RDS, EC2, SNS, SES, Jenkins, Terraform, load balancers, autoscaling, etc. Managing one more project, Maxi Locker (similar to DigiLocker), using the same stack plus Lambda, IoT services, IAM, and CloudWatch agents for logs. Sr. Automation Engineer • Master Software Solutions, October to May. Managed in-house projects.
Employed
Open to hearing about new opportunities
Full-time / Interested in remote work
6 to 10 years
Sikkim Manipal University - Distance Education
Information Technology
Avatar of Harshit Jamwal.
Consultant II @EY
2023 ~ Present
Consultant
Within two months
Harshit Jamwal. 5+ years of ServiceNow developer experience in the ServiceNow framework and Agile software development across ITSM, HRSD, and ITOM processes, with strong knowledge of the ITIL framework and the SDLC. Work Experience: Consultant II • Ernst & Young LLP. Project 1: Anheuser-Busch InBev, July to Jan 2024. Description: Implemented end-to-end ServiceNow Discovery to streamline IT infrastructure management and enhance operational efficiency. Hands-on experience configuring and deploying Discovery schedules for both on-premises and cloud environments. Proficient in setting up and managing Discovery for diverse cloud platforms, including AWS and Azure, as well as on
ITSM
HRSD
ITOM
Employed
Open to hearing about new opportunities
Full-time / Interested in remote work
4 to 6 years
Birla Institute of Technology and Science, Pilani
Computer Science
Avatar of Atif Ahmad.
IT Analyst I Service Integration & Management @Tata Consultancy Services
2014 ~ Present
IT Analyst I Service Integration & Management
Within two months
Atif Ahmad, IT Analyst I Service Integration & Management. A dynamic, team-spirited, performance-driven professional with 16+ years of rich ITSM/finance experience. An ITIL Foundation certified professional. Worked on complete service delivery and end-to-end project management for the full ServiceNow ITSM suite across various global implementation and operational projects. Looking for a challenging opportunity in a transition, incident management, change management, problem management, and service delivery role. [email protected], India. linkedin.com/in/atif-ahmad-425a76127. Work Experience: IT Analyst I Service Integration & Project
AWS MDM
Communication
ServiceNow
Employed
Full-time / Interested in remote work
10 to 15 years
Sikkim Manipal University
M.B.A Finance
Avatar of Krishna Kelam.
HR/Administration Lead @Sysha
2018 ~ Present
US Payroll Specialist
Within one month
Krishna Kelam, [email protected]. Career objective: to build a career in an organization that offers a professionally challenging environment and provides growth opportunities. Profile summary: 10 years of experience in US IT HR/accounts and payroll. Skills: Google Suite, Zoho People, Zoho Books, QuickBooks Online, Intuit Payroll Service, ADP Payroll, AWS EC2. Bengaluru, Karnataka, India. Work Experience: HR/Administration Lead • Sysha Inc, March to Present. Onboarding process for H1B/OPT and other paid employees, C2C contractors, 1099 consultants, and W2 employees. Maintain contractor documents – MSA/
Word
Excel
Google Drive
Employed
Full-time / Interested in remote work
6 to 10 years
Acharya Nagarjuna University
Accounts and Finance
Avatar of the user.
Infrastructure Technology Specialist @Cognizant Technology Solutions
2021 ~ Present
Infrastructure Technology Specialist
Within two months
VMware vSphere
VMware ESXi
SCCM Administrator
Employed
Full-time / Remote work only
10 to 15 years
All India Institute of Science & Research
Computers
Avatar of RAJESH MOURYA.
Sr. PHP Developer / Lead Developer @Byasa Tech Solutions Pvt Ltd
2021 ~ Present
Sr. PHP Developer / Lead Developer
Within one month
Sr. PHP Developer • Publicis Groupe Private Limited, May to Present. 1. Working with the Spryker/Symfony/Laravel frameworks. 2. Responsible for managing and driving small teams. 3. Creating custom extensions/custom modifications. 4. Writing reusable, testable, and efficient code. 5. AWS, Docker, Redis, RabbitMQ. 6. Working on e-commerce B2B/B2C marketplace/SaaS applications. 7. Following Agile development/test-driven development methodology. Sr. PHP Developer (Team Lead) • Byasa Tech Solutions Pvt Ltd, May to Present. 1. Creating a Laravel-based application. 2
Laravel PHP Framework
CodeIgniter Framework
WordPress Developer
Employed
Full-time / Interested in remote work
4 to 6 years
Viva College
BSC IT

The lightest, fastest recruiting solution, chosen by hundreds of companies

Search resumes and proactively contact job seekers to boost your recruiting efficiency.

  • Browse all search results
  • Start unlimited cold conversations every day
  • Search resumes that only paying companies can view
  • View users' email addresses & phone numbers
Search tips
1
Start with the most precise combination of keywords
資深 後端 php laravel
If there are not enough results, remove the less important keywords one at a time
2
Put terms that must match exactly inside double quotes
"社群行銷"
3
Add a minus sign in front of terms you want to exclude; to filter out Chinese terms, combine it with double quotes (-"人資")
UI designer -UX
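The quoting and exclusion rules above can be sketched as a small matcher. This is illustrative only; `parse_query` and `matches` are hypothetical helper names, and CakeResume's actual server-side search is certainly more sophisticated.

```python
import re

def parse_query(query):
    """Split a search query into include/exclude term lists.

    Supports the syntax described in the tips above: bare keywords,
    exact phrases in double quotes, and a leading minus for
    exclusion (the minus may be combined with quotes, e.g. -"人資").
    """
    tokens = re.findall(r'(-?)"([^"]+)"|(-?)(\S+)', query)
    include, exclude = [], []
    for neg_quoted, phrase, neg_word, word in tokens:
        term = phrase or word
        if neg_quoted or neg_word:
            exclude.append(term)
        else:
            include.append(term)
    return include, exclude

def matches(text, query):
    """True if every include term appears in text and no exclude term does."""
    include, exclude = parse_query(query)
    low = text.lower()
    return (all(t.lower() in low for t in include)
            and not any(t.lower() in low for t in exclude))
```

For example, `matches("Senior UI designer", 'UI designer -UX')` is true, while a profile containing "UX" is filtered out.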
The free plan can only search public resumes.
Upgrade to a premium plan to browse all search results (including tens of thousands of resumes published only on CakeResume).

Definitions of workplace competency ratings

Professional skills
The professional abilities the person holds in their field (e.g., familiar with SEO and able to use the relevant tools).
Problem solving
Able to identify and analyze problems, and to devise plans that solve them effectively.
Adaptability
Responds calmly to the unexpected and can reprioritize projects, clients, and technologies at any time.
Communication
Conveys their ideas effectively, and is willing to listen to others and give feedback.
Time management
Understands task priorities, uses time effectively, and completes work on schedule.
Teamwork
Shows commitment and responsibility to the team, and is willing to listen, communicate, and coordinate proactively.
Leadership
Focuses on team development and effectively leads the team to act and achieve shared goals.

Subham Sahu

Seeking a challenging environment that encourages learning and creativity, provides exposure to new ideas, and stimulates personal and professional growth alongside organizational growth.

[email protected]

+91-9039347186

HSR Layout Sector 5, Bangalore, Karnataka, 560034

Technical Skills


Cloud Skills

MS Azure, Azure Data Factory, Data Lake, Azure DevOps, Azure Synapse Analytics, Azure Databricks, ETL/ELT, Blob Storage, Azure Functions, Logic Apps, Delta Lake, Kafka, Grafana, streaming data

Programming 

Python, Pandas, PySpark, Beautiful Soup, Spark SQL, Scala

Database

MS SQL Server, Azure SQL Database, T-SQL, MySQL, Snowflake, Cosmos DB, Hive, Delta tables, IBM DB2 i Series


Professional Summary


  • 6.3+ years of experience with the Microsoft Azure cloud platform (Azure Data Factory, Databricks, Data Lake, Azure Synapse Analytics, Functions, Logic Apps, SQL DB, Cosmos DB, PySpark, and Python), along with Aviation, Oil & Gas, Energy, and Pharmaceutical domain knowledge.
  • Created several relational and non-relational data models and prototype diagrams using draw.io.
  • Clean and transform complex data with pipelines and notebooks in Azure, using Data Factory and Databricks with PySpark.
  • Perform root cause analysis and resolve production and data issues.
  • Responsible for the design, development, modification, debugging, and maintenance of data pipelines.
  • Deliver technical accountability for team-specific work products within an application, and provide technical support during solution design for new requirements.
  • Maintain existing projects alongside new development, using JIRA in an Agile methodology to enhance productivity.
  • Apply sound engineering practices to deliver functional, stable, and scalable solutions to new or existing problems.
  • Involved in requirement analysis, business discussions with clients, the delivery process, etc.
  • Excellent interpersonal skills with strong analytical and problem-solving abilities.

Work Experience

Publicis Sapient, April 2023 - Present

Senior Associate Data Engineering L2 (MS Azure Cloud, PySpark)

  • Working on the client's Orx RFP healthcare & insurance data.
  • Designed the data model for the data migration.
  • Designed and implemented data ingestion and transformation pipelines using Azure Data Factory, Databricks, and Kafka (streaming/batch).
  • Maintain technical documentation on the Confluence platform.
  • Manage projects using Agile methodology.

Ness Digital Engineering, May 2021 - April 2023

Senior Data Engineer (MS Azure Cloud, PySpark)

  • Worked on clinical and drug-trial data for large organizations.
  • Designed the non-relational common data model on Cosmos DB.
  • Analyzed existing relational data for data model creation.
  • Designed and implemented data ingestion and transformation pipelines in Azure Data Factory, Databricks, and Azure Synapse Analytics.
  • Maintained technical documentation on the Docusaurus and Confluence platforms.
  • Managed teams and projects using Agile methodology, along with client interaction.

IHS Markit Ltd., Oct 2017 - Apr 2021

Data Engineer (MS Azure Cloud, PySpark, Python)

  • Extracted complex, bulk data for the Aviation, Gas, and Energy sectors.
  • Investigated issues by reviewing/debugging pipelines, provided fixes and workarounds, and reviewed changes for operability to maintain existing data solutions.
  • Experience building data pipelines using Databricks (Azure Data Factory and Apache Spark).
  • Extracted information using Python and the Kofax RPA tool for fast crawling.
  • Oriented the team toward reducing manual effort, growing the products, and saving time during the ETL/ELT process.
  • Integrated tools such as shutil and Pandas, and transferred files from local storage to Azure storage containers using Python.
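The local-to-Azure file transfer mentioned above can be sketched in its local staging half. This is a minimal, hypothetical `stage_for_upload` helper using the standard-library `shutil`; the actual push to an Azure storage container would use the azure-storage-blob SDK, which is not shown.

```python
import shutil
from pathlib import Path

def stage_for_upload(src_dir, staging_dir, pattern="*.csv"):
    """Copy extracted files into a staging folder prior to upload.

    Only the local half of the transfer is sketched here; uploading
    the staged files to Azure Blob Storage is a separate step.
    """
    staging = Path(staging_dir)
    staging.mkdir(parents=True, exist_ok=True)
    copied = []
    for f in sorted(Path(src_dir).glob(pattern)):
        dest = staging / f.name
        shutil.copy2(f, dest)  # copy2 preserves file timestamps
        copied.append(dest.name)
    return copied
```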

Project

Healthcare and Insurance Analytics

ORX RFP DMA Explorer Analytics:

We are building a data application and framework for healthcare & insurance analytics, broadcasting the visualizations through Power BI reports for business users. It stores large volumes of data at the lakehouse level and processes the data for ML applications.
  • Used technologies including MySQL, IBM DB2 i Series, Data Lake, Spark SQL, GCP Kafka, Scala, Delta Lake & tables, PySpark, and Python.
  • Pull data from source relational DBs such as MySQL and IBM DB2 and move it to the data lake as Parquet.
  • Created the common data model using draw.io.
  • Created Azure Data Factory pipelines for the ETL/ELT process.
  • Implemented custom transformation and automation logic in Azure Databricks notebooks.
  • Use Azure Monitor and Grafana to monitor the ADF data pipelines and GCP Kafka jobs.
  • Implemented CI/CD for moving pipelines/scripts between environments via a repo branching strategy using Azure DevOps.
  • Manage sprint planning, backlog refinement, and retrospectives using JIRA in an Agile methodology.
  • Maintained the documentation on the Confluence platform.
  • Generate claim, hospitalization, and expense reports using Power BI.
  • Published the reports by sharing the .pbix file on the product portal for business use.

Clinical and Drug Trial Analytics

Pharma, Healthcare and Drug Trials:

We are building an application that holds pharma, healthcare, and drug-trial data and broadcasts the visualizations through Power BI reports. It stores large volumes of data at the data warehouse level and processes the data for ML applications.
  • Used technologies including MySQL, MS SQL Server, Azure Synapse, Data Lake, PySpark, and Cosmos DB.
  • Pull data from relational DBs such as MySQL and SQL Server and move it to Cosmos DB after several transformations.
  • Created the common data model on the SQL API of Cosmos DB, diagrammed with draw.io.
  • Created Azure Synapse Analytics pipelines for the ETL/ELT process.
  • Implemented custom transformation and automation logic in Synapse notebooks.
  • Use Azure Monitor and New Relic analytics to monitor the Synapse data pipelines.
  • Implemented CI/CD for moving pipelines/scripts between environments via a repo branching strategy using Azure DevOps.
  • Manage sprint planning, backlog refinement, and retrospectives using JIRA in an Agile methodology.
  • Maintained the documentation on the Docusaurus and Confluence platforms.
  • Generate ingredient, excipient, numerator-device, dosage, artifact, sub-artifact, and product reports using Snowflake & Power BI.
  • Published the reports by sharing the .pbix file on the product portal for business use.

Energy Analytics

Oil, Gas and Coal, OMDC:

We are building products containing information about oil and gas prices, tender bidding, and country-wise consumption and production data, with other factors included.
  • Used technologies including Python, Azure Database, Data Factory, Data Lake, Databricks, PySpark, T-SQL, Pandas, and Power BI.
  • Crawl complex data from business sources, external resources, and websites using Python, and land the files in Azure blobs and the data lake.
  • Created Azure Data Factory pipelines, activities, linked services, integration runtimes, and triggers for the ETL/ELT process.
  • Wrote Azure Functions to implement custom transformation and automation logic in Python scripts.
  • Use Azure Monitor and analytics to monitor the ADF pipelines. Implemented CI/CD for moving pipelines/scripts between environments via ARM templates using Azure DevOps.
  • Generate price, production, and consumption comparison reports using Power BI.

Aviation, IHS Markit Ltd.

Cargo & Flight BI:

We maintain several pipelines that populate cargo, shipment, booking, and related data for multiple marts; daily, weekly, and monthly reports are generated from this data.
  • Performed transformations, structuring, and cleansing of data using PySpark, Spark SQL, and Delta tables.
  • Built multiple data pipelines and job clusters using Azure Data Factory and Databricks.
  • Handled data based on the refresh date and SQP date for incremental loads.
  • Worked in an Agile methodology using JIRA.
  • Highly proficient in Spark SQL for developing complex joins and aggregations.
  • Hands-on experience with Synapse data warehousing of external tables over Parquet files in the data lake.
  • Stack: Azure Data Factory, Azure Data Lake, Azure Databricks, Delta tables, DevOps, and PySpark.
  • Cleaned cargo & flight data and moved it from MS SQL, Hive, traditional Hadoop systems, and SFTP to Azure Data Lake and Delta tables in Databricks, using Azure Data Factory pipelines and Databricks notebooks.
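The refresh-date handling for incremental loads mentioned above can be illustrated with a small sketch. Here `refresh_date` is a hypothetical field name, and in the real pipelines this filter would run in PySpark against Delta tables rather than in plain Python.

```python
from datetime import date

def incremental_rows(rows, watermark):
    """Select only rows refreshed after the last load.

    rows is a list of dicts with a 'refresh_date' key; watermark is
    the refresh date seen at the end of the previous load.
    """
    fresh = [r for r in rows if r["refresh_date"] > watermark]
    # the new watermark becomes the latest refresh date seen
    new_watermark = max((r["refresh_date"] for r in fresh), default=watermark)
    return fresh, new_watermark
```

Keeping the watermark per table lets each run pick up only the delta since the previous run, which is the usual shape of an incremental load.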

Certifications

  • Azure Data Fundamentals certification from Microsoft.
  • Databricks Certified Data Engineer Associate from Databricks.
  • Databricks Certified Apache Spark Developer Associate 3.0 from Databricks.
  • Databricks Accredited Lakehouse Fundamentals from Databricks.
  • Master Data Analysis with Python – Intro to Pandas, from Udemy.

Rewards and Achievement

  • Team Player award for the Pharma & Clinical project, Q3 2021.
  • Best Performance award for the Energy Analytics project, Q2 2020.
  • Peer award for optimization of the Parts Intelligence pipelines, Q3 2019.
  • Team Player award for the Energy Analytics projects, Q4 2018.

Education

 B.E. in Electronics Engineering - 73.46% (2016) 
 Institute of Engineering, JIWAJI University, Gwalior 

Core Skills & Strengths 

 ● Team Management
 ● Leadership Quality
 ● Passionate and Creative
 ● Quick Learner
 ● Positive Thinking
 ● Punctual
 ● Motivated
 ● Flexible

Area of Interest 

 ● Interacting with people
 ● Willingness to learn new skills
 ● Cooking
 ● Chess
