CakeResume Talent Search

Advanced filters
On
4-6 years
6-10 years
10-15 years
More than 15 years
Avatar of 陳耀騰.
Past
CTO @龍盛
2018 ~ Present
PM / Product Manager / Project Management
Within one month
Aston Chen. 20 years of internet-related work experience. PM / Product Manager / Project Management. City, TW. [email protected] Education: National Chin-Yi University of Technology, Industrial Engineering and Management, 1990 ~ 1992. Skills: Systems: Mac, CentOS / RedHat 6, MySQL; Server: CentOS / RedHat 6 / GCP; DB: MariaDB (MySQL); Web server: Apache, Nginx; Dev environments: MAMP, DNMP; Software: Bash, PHP, JavaScript, Git, Shell Script, CodeIgniter, jQuery, Laravel, Vue, Nuxt; Hardware: MBP, IBM xServer, Dell Server, F5, NS; 進度控…
MySql
PHP
Laravel
Unemployed
Ready to interview
Full-time / Interested in working remotely
More than 15 years
National Chin-Yi University of Technology
Industrial Engineering and Management
Avatar of Frank Ramaglia Jr..
Service Desk Consultant Level 2 @Wolf Consulting LLC
2023 ~ Present
Technical Support Specialist
Within two months
expectations. - Became proficient in the MaaS360 Mobile Device Management system. - Managed customer inventory and device provisioning in-house. - Worked mainly with iOS and iPad devices. - Maintained notes and documentation in the Zendesk ticketing system to manage individual customer needs and changes. - Responded to tickets via multiple communication methods, including email, web, and phone. - Managed features and maintained cellular lines via multiple carriers for smartphones, basic phones, tablets, modems, and other data devices. Assisted in the installation of cellular modems for internal network usage. Technical Support Specialist • Credit Management Co...
Troubleshooting
Evaluations
Phone System Administration
Employed
Open to opportunities
Full-time / Remote Only
More than 15 years
ITT Technical Institute
Computer Network Systems
Avatar of the user.
IT and data management coordinator @PT. Restorasi Ekosistem Indonesia - Hutan Harapan
2022 ~ Present
IT Specialist
Within one month
Network Administration
Network Security
Network Engineering
Employed
Full-time / Interested in working remotely
6-10 years
Universitas Sriwijaya
Computer Systems
Avatar of the user.
Developer Analyst @Montreal Informática SA
2022 ~ Present
IT Analyst
More than one year
Network Management
Web Server Apache
Development
Full-time / Remote Only
6-10 years
Pontifical Catholic University of Minas Gerais
IT Governance
Avatar of the user.
IT Department Manager @叁叁網路有限公司
2019 ~ 2020
CTO, Director Of IT, IT Architect
More than one year
PHP
Golang
Web Development
Employed
Full-time / Interested in working remotely
6-10 years
National Taiwan University
Graduate Institute of Industrial Engineering
Avatar of 蔡良方.
Senior Backend Engineer @一七直播服務有限公司
2021 ~ Present
Backend Engineer
Within two months
…GitLab, GitHub, Visual Studio Code. Protocols: TCP, HTTP. IDEs: Visual Studio Code, Android Studio, Visual Studio. Package management: Composer, NPM, apt. Web server: Apache 2. SQL: PostgreSQL, MySQL, Redis. Testing tools: Postman, Swagger, PHPUnit. Auxiliary tools: Asana, Notion, PlantUML. Personal projects / Side…
PHP
git
MySQL
Employed
Not open to opportunities
Full-time / Interested in working remotely
4-6 years
National Taiwan Normal University
Department of Mechatronic Engineering
Avatar of 黃聖雄.
Backend Engineer @奧丁丁集團英屬開曼群島商台灣子公司_歐簿客科技股份有限公司
2020 ~ Present
Backend Engineer
Within two months
Email Service. Other: Google SEO, Google Analytics, Facebook API, LINE BOT API, Telegram BOT API, Firebase Cloud Messaging, Fail2Ban, Sentry, Telegraf + InfluxDB + Kapacitor + Chronograf, Elasticsearch + Logstash + MySQL full-text search, Elasticsearch + Filebeat + Kibana, MailHog, Revive Adserver, Docker. Version control: Git / GitLab. Web server: Apache, Nginx. IDE: Visual Studio Code, Eclipse, Sublime Text, Notepad++. Work experience: 集邦科技股份有限公司, Software Engineer, Sep 2016 ~ present. 1. Technews.tw (TechNews) development…
Apache
Nginx
PHP
Employed
Full-time / Not interested in working remotely
4-6 years
National Kaohsiung University of Applied Sciences
Computer Science and Information Engineering
Avatar of Johnny Kuo.
Cloud Technology Applications Trainee @勞動部勞動力發展署中彰投分署
2019 ~ 2019
Internet Programming Engineer
Within one year
.com Work experience: 勞動部勞動力發展署中彰投分署, Cloud Technology Applications Trainee, Apr 2019 ~ Sep… Operating systems: Linux (Red Hat), Windows. 2. File server: Net2FTP, Vsftp. 3. Web server: Apache. 4. Database: MySQL, SQLite. 5. Site-building software: WordPress, BBClone. 6. Programming languages: HTML5, PHP, Java, JavaScript, XML, Ajax, JSON, SQL. 7. Development tools: Android…
JavaScript
CSS3
HTML5
Full-time / Interested in working remotely
6-10 years
Tunghai University
Sociology
Avatar of Mallikarjunareddy Guruguntla.
Offline
Big data developer
More than one year
Sqoop, Flume, HBase, Hive, Presto, Oozie and Spark. Good understanding of NoSQL databases and hands on working experience in writing applications on NoSQL databases like HBase, Cassandra and MongoDB. Knowledge on HADOOP security services like RANGER and KNOX configured over HDP cluster and have sufficient experience on Application Server TOMCAT and Web Server APACHE. Played a vital role in Launching spark on Yarn and pretty good knowledge on spark configuration and monitoring Scheduled jobs. Developed couple of spark applications with the help of PySpark, SPARK SQL and DATA FRAME API Not only SPARK SQL but also
hadoop ecosystem
Python
Scala
Full-time / Interested in working remotely
6-10 years
JNTUH
Computer science
Avatar of 蔡承佑.
Fullstack Engineer @美商知識能股份有限公司
2019 ~ Present
Back-End / Full Stack Web Developer
Within one month
蔡承佑 Chengyu Tsai. 7+ years of working experience in software development. Now works on-site at Kono as a Fullstack Engineer, responsible for system design, development, and integration. Familiar with Ruby on Rails, Node.js, JavaScript, etc. Problem solver, team-oriented, effective communicator. Enthusiastic about coding. Backend Engineer / Fullstack Engineer. Taipei, TW. [email protected] Skills: Backend: Node.js, Ruby on Rails, PHP, Python; Frontend: JavaScript, React/Redux, CSS; Database: MySQL, MongoDB, Redis; Web server: Nginx, Apache; Testing: Selenium, Mocha, RSpec, Locust; Version control: Git
Node.js
PHP
MySQL
Employed
Open to opportunities
Full-time / Interested in working remotely
6-10 years
National Taipei University of Technology
Interaction Design

The Most Lightweight and Effective Recruiting Plan

Search resumes and take the initiative to contact job applicants for higher recruiting efficiency. The Choice of Hundreds of Companies.

  • Browse all search results
  • Unlimited access to start new conversations
  • Resumes accessible only to paid companies
  • View users' email addresses & phone numbers
Search Tips
1
Search a precise keyword combination
senior backend php
If the search returns too few results, remove the less important keywords
2
Use quotes to search for an exact phrase
"business development"
3
Use the minus sign to eliminate results containing certain words
UI designer -UX
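The three tips above describe standard keyword-query semantics: plain terms are ANDed, quoted phrases must appear verbatim, and a minus sign excludes a word. As an illustration only, not CakeResume's actual implementation, the rules can be sketched in a few lines of Python:

```python
import re

def matches(query: str, text: str) -> bool:
    """Return True if `text` satisfies the query under the three rules above."""
    text_lower = text.lower()
    # Pull out "quoted phrases" first, then split the rest into single tokens.
    phrases = re.findall(r'"([^"]+)"', query)
    tokens = re.sub(r'"[^"]+"', ' ', query).split()
    for phrase in phrases:          # exact-phrase rule
        if phrase.lower() not in text_lower:
            return False
    for token in tokens:
        if token.startswith('-'):   # minus-sign exclusion rule
            if token[1:].lower() in text_lower:
                return False
        elif token.lower() not in text_lower:  # ANDed keywords rule
            return False
    return True
```

For example, `matches('UI designer -UX', 'UI/UX designer portfolio')` is False because the excluded term appears, while `matches('senior backend php', 'Senior Backend PHP engineer')` is True.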
Only public resumes are available with the free plan.
Upgrade to an advanced plan to view all search results including tens of thousands of resumes exclusive on CakeResume.

Definition of Reputation Credits

Technical Skills
Specialized knowledge and expertise within the profession (e.g., familiarity with SEO and related tools).
Problem-Solving
Ability to identify, analyze, and prepare solutions to problems.
Adaptability
Ability to navigate unexpected situations and keep up with shifting priorities, projects, clients, and technology.
Communication
Ability to convey information effectively, with a willingness to give and receive feedback.
Time Management
Ability to prioritize tasks based on importance and complete them within the assigned timeline.
Teamwork
Ability to work cooperatively, communicate effectively, and anticipate each other's demands, resulting in coordinated collective action.
Leadership
Ability to coach, guide, and inspire a team to achieve a shared goal or outcome effectively.
More than one year
India
Professional Background
Current status
Job Search Progress
Professions
Data Scientist
Fields of Employment
Work experience
6-10 years
Management
Skills
hadoop ecosystem
Python
Scala
Hive
HBase
Sqoop
Flume
Databases
Languages
Job search preferences
Positions
Big data developer
Job types
Full-time
Locations
Remote
Interested in working remotely
Freelance
Educations
School
Major

Mallikarjuna Reddy G

6+ years of IT experience in architecture, analysis, design, development, implementation, maintenance, and support, with experience in developing strategic methods for deploying big data technologies to efficiently solve Big Data processing requirements.
Around 3 years of experience in Big Data using the Hadoop framework and related technologies such as HDFS, HBase, MapReduce, Spark, Hive, Pig, Flume, Oozie, Sqoop, and ZooKeeper.

Summary

Excellent understanding of Hadoop (Gen-1 and Gen-2) and its components: HDFS, Job Tracker, Task Tracker, Name Node, Data Node, Resource Manager (YARN), Node Manager, and Application Master.

Expert in understanding data and designing/implementing enterprise platforms such as Hadoop data lakes and large data warehouses.

Over 2 years of experience as a Hadoop architect, with strong exposure to Hadoop technologies such as HDFS, YARN, MapReduce, Sqoop, Flume, HBase, Hive, Presto, Oozie, and Spark.

Good understanding of NoSQL databases and hands-on experience writing applications on NoSQL databases such as HBase, Cassandra, and MongoDB.
Knowledge of Hadoop security services such as Ranger and Knox configured over an HDP cluster, and sufficient experience with the Tomcat application server and the Apache web server.

Played a vital role in launching Spark on YARN; good knowledge of Spark configuration and of monitoring scheduled jobs.
Developed several Spark applications using PySpark, Spark SQL, and the DataFrame API, with strong hands-on expertise in Spark Streaming as well.
Tuned and optimized Spark jobs in an end-to-end benchmark, which included testing different configurations and leveraging Spark speculation to identify and re-schedule slow-running tasks.
Developed analytical components using PySpark, Scala, Spark, Storm, and Spark Streaming.
Worked extensively with dimensional modeling, data migration, data cleansing, data profiling, and ETL processes for data warehouses.
Used UNIX/Linux shell scripting to automate system administration tasks, system backup/restore management, and user account management.
Worked on data loads from various sources (Oracle, MySQL, DB2, SQL Server) to Cassandra, MongoDB, and Hadoop using Sqoop and Python scripts.
Experienced in creating Tableau dashboards against relational and multi-dimensional databases, including Oracle, MySQL, and Hive, gathering and manipulating data from various sources.
Experienced in MarkLogic architecture and design, performance tuning, dashboards, and Tableau reports.
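The Spark speculation tuning mentioned above is normally switched on through Spark configuration rather than code. A minimal spark-defaults.conf fragment as a sketch (speculation is off by default; the other values shown are Spark's documented defaults):

```
spark.speculation             true
spark.speculation.interval    100ms
spark.speculation.multiplier  1.5
spark.speculation.quantile    0.75
```

The same properties can also be passed per job via --conf flags on spark-submit.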

Skills


Technology

Hadoop Ecosystem / J2SE / J2EE / JDK 1.7, 1.8 / Databases


Operating Systems

Windows Vista/XP/NT/2000, Linux (Ubuntu, CentOS), UNIX.


DBMS/Databases

Oracle, MySQL, PostgreSQL.


Programming Languages

Core Java, Python, Scala, Struts, Spring, JavaScript


Big Data Ecosystem

HDFS, MapReduce, Oozie, Hive, Pig, Sqoop, Flume, Splunk, Scala, Spark, ZooKeeper, Kafka, and HBase.


Methodologies

Agile Scrum, Waterfall.


NoSQL Databases

HBase, Cassandra, MongoDB


Experience

Reliance Techservices, Oct 2015 - Present

SR HADOOP/SPARK DEVELOPER
Responsibilities:

  • Understood the nature of data from different sources and designed the ingestion processes for HDFS.
  • Developed Spark scripts using PySpark shell commands as per requirements.
  • Developed Pig scripts to transform the data into structured format, automated through Oozie coordinators.
  • Collected data with Spark Streaming from sources in near real time, performed the necessary transformations and aggregations on the fly to build the common learner data model, and persisted the data in HDFS.
  • Explored the use of Spark for improving the performance and optimization of existing algorithms in Hadoop using Spark Context, Spark SQL, and Spark on YARN.
  • Worked with Spark Streaming to ingest data into the Spark engine.
  • Developed Spark code using PySpark and Spark SQL/Streaming for faster testing and processing of data.
  • Involved in converting Hive/SQL queries into Spark transformations using Spark RDDs and Scala.
  • Worked on the Spark SQL and Spark Streaming modules of Spark and used Scala and PySpark to write code for all Spark use cases.
  • Extensive experience using message-oriented middleware (MOM) with ActiveMQ, Apache Storm, Apache Spark, Kafka, Maven, and ZooKeeper.
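Converting a Hive/SQL aggregation into Spark transformations, as described above, typically means re-expressing a GROUP BY as a map step followed by reduceByKey. A stdlib-only Python sketch of that pattern (field and variable names are illustrative, not taken from the original projects):

```python
from collections import defaultdict

def map_phase(records):
    # Analogue of rdd.map(lambda r: (r['user'], 1))
    for record in records:
        yield (record['user'], 1)

def reduce_by_key(pairs):
    # Analogue of rdd.reduceByKey(lambda a, b: a + b): sum values per key.
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

# Equivalent of: SELECT user, COUNT(*) FROM events GROUP BY user
events = [{'user': 'a'}, {'user': 'b'}, {'user': 'a'}]
counts = reduce_by_key(map_phase(events))  # {'a': 2, 'b': 1}
```

In actual PySpark the same pipeline runs partitioned across the cluster, with a shuffle between the map and reduce stages.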

Accenture, July 2013 - Sep 2015

JAVA/HADOOP DEVELOPER
CLIENT: McDonald's
Responsibilities:

  • Worked extensively on creating MapReduce jobs to power data for search and aggregation. Designed a data warehouse using Hive.
  • Imported and exported data into HDFS and Hive using Sqoop.
  • Used Bash shell scripting, Sqoop, Avro, Hive, Pig, Java, and MapReduce daily to develop ETL, batch processing, and data storage functionality.
  • Used Pig for data transformations, event joins, and some pre-aggregations before storing the data on HDFS.
  • Used the Hadoop MySQL connector to store MapReduce results in an RDBMS.
  • Analyzed large data sets to determine the optimal way to aggregate and report on them.
  • Worked on loading all tables from the reference source database schema through Sqoop.
  • Designed, coded, and configured server-side J2EE components such as JSP, AWS, and Java.
  • Collected data from different databases (Oracle, MySQL) into Hadoop.
  • Used Oozie and ZooKeeper for workflow scheduling and monitoring.
  • Designed and developed ETL workflows using Java for processing data in HDFS/HBase using Oozie.
  • Experienced in managing and reviewing Hadoop log files.
  • Involved in loading and transforming large sets of structured, semi-structured, and unstructured data from relational databases into HDFS using Sqoop imports.
  • Extracted files from MySQL through Sqoop, placed them in HDFS, and processed them.
  • Supported MapReduce programs running on the cluster.
  • Provided cluster coordination services through ZooKeeper.
  • Involved in loading data from the UNIX file system to HDFS.
  • Created several Hive tables, loaded them with data, and wrote Hive queries that run internally as MapReduce.
  • Developed simple to complex MapReduce jobs using Hive and Pig.
  • Involved in analyzing system failures, identifying root causes, and recommending courses of action.
  • Used MapReduce to process large data sets over a cluster of computers using parallel processing.
Environment: Apache Hadoop, MapReduce, HDFS, Hive, Java (JDK 1.6), SQL, Pig, Flume, ZooKeeper, Flat Files, Oracle 11g/10g, MySQL, Windows NT, UNIX, Sqoop, Oozie, HBase.
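The MapReduce jobs described in this section all follow the classic mapper/reducer contract. A minimal Hadoop Streaming-style word count in plain Python, as a generic sketch rather than code from these projects:

```python
from itertools import groupby

def mapper(lines):
    # Map phase: emit one tab-separated (word, 1) pair per word.
    for line in lines:
        for word in line.split():
            yield f"{word}\t1"

def reducer(sorted_pairs):
    # Reduce phase: Hadoop delivers pairs sorted by key; sum counts per word.
    keyed = (pair.split('\t') for pair in sorted_pairs)
    for word, group in groupby(keyed, key=lambda kv: kv[0]):
        yield f"{word}\t{sum(int(count) for _, count in group)}"

# The shuffle/sort between the two phases is simulated here with sorted().
mapped = sorted(mapper(['hadoop spark hadoop', 'spark']))
reduced = list(reducer(mapped))  # ['hadoop\t2', 'spark\t2']
```

With real Hadoop Streaming, the two functions would read stdin and write stdout in separate scripts wired together by `hadoop jar hadoop-streaming.jar -mapper ... -reducer ...`, and the framework itself performs the sort between phases.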

Adroitent, May 2011 - June 2013

SQL/JAVA DEVELOPER

Responsibilities:

  • Developed physical data models and created DDL scripts to create database schemas and database objects.
  • Wrote user requirement documents based on functional specifications.
  • Created new tables, wrote stored procedures and triggers for application developers, and wrote some user-defined functions. Created SQL scripts for tuning and scheduling.
  • Involved in performing data conversions from flat files into a normalized database structure.
  • Developed source-to-target specifications for Data Transformation Services.
  • Developed functions, views, and triggers for automation.
  • Extensively used joins and sub-queries to simplify complex queries involving multiple tables, and optimized the procedures and triggers used in production.
  • Performed performance tuning in SQL Server 2000 using SQL Profiler and data loading.
  • Installed SQL Server client-side utilities and tools for all front-end developers/programmers.
  • Involved in performance tuning to optimize SQL queries using the query analyzer.
  • Created indexes, constraints, and rules on database objects for optimization.
  • Created and maintained indexes for fast and efficient reporting processes.
  • Monitored database growth and space requirements. Handled users, logins, and user rights.
  • Managed historical data from various heterogeneous data sources (e.g., Excel, Access).
  • Designed the front-end user interface using JSP, Servlets, jQuery, AJAX, JavaScript, and CSS.
  • Strong understanding of JavaScript, its quirks, and workarounds.
  • Proficient understanding of cross-browser compatibility issues and ways to work around them.
  • Worked on a variety of issues involving multithreading, server connectivity, and user interfaces.
  • Designed application components using Java Collections and provided concurrent database access using multithreading.
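The index-creation bullets above can be made concrete with a small self-contained sketch. This uses Python's built-in sqlite3 purely for illustration (the original work was on SQL Server 2000, and the table and index names here are invented):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
conn.executemany("INSERT INTO orders (customer, total) VALUES (?, ?)",
                 [('acme', 10.0), ('acme', 25.0), ('globex', 5.0)])
# Index the filter column so lookups can use an index search
# instead of a full table scan.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer)")
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = 'acme'"
).fetchall()
```

After the index exists, the query plan reports a search using idx_orders_customer rather than a scan of the whole table, which is the same effect index tuning has on reporting queries in any RDBMS.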

Education Qualification

B.Tech CSE, 2005 - 2009, Nova College of Engineering

Intermediate (MPC), 2003 - 05, Sri Chathanya Educational Institute

SSC, 2002 - 03, St. Joseph High School

