SRIDHAR RAO

Mechelen

Summary

  • 14+ years of ICT experience in enterprise application development, infrastructure support, and maintenance; 5+ years of experience in the Big Data ecosystem
  • Focus on innovation that can turn smart ideas into successful business models
  • In-depth understanding and knowledge of Hadoop architecture and its components
  • Deep understanding of machine learning algorithms and R programming
  • Expertise in coding Hadoop jobs for analyzing data using Python, MapReduce, and Hive
  • Experience in Scala, using Spark Streaming and Akka for near-real-time transactional datasets
  • Experienced in extending Hive and Pig core functionality by writing custom UDFs in Java
  • Experience in developing MapReduce (YARN) jobs for cleaning, accessing, and validating data
  • Experienced with different distributions: Cloudera, Hortonworks, and MapR
  • Strong problem analysis and problem-solving skills
  • Experience with developing large-scale distributed applications
  • Expertise in deploying and operating Hadoop, YARN, Spark, and Storm, integrated with Cassandra, Ignite, RabbitMQ, and Kafka
  • Experienced in NoSQL databases such as HBase, Cassandra, and MongoDB
  • Experienced in designing, building, and deploying a multitude of applications on the Amazon AWS stack (including EC2 and S3), focusing on high availability, fault tolerance, and auto-scaling
  • Experience with Amazon Web Services, the AWS command line interface, and AWS Data Pipeline
  • Experienced in Business Objects design and implementation using a pragmatic approach
  • SME in implementing advanced procedures such as text analytics and processing using the in-memory computing capabilities of Apache Spark/Scala (a minimal sketch follows this summary)
  • Knowledge of importing and exporting data using Flume and Kafka
  • Expertise in testing complex business rules created by mappings and various transformations using Informatica and other ETL tools
  • Expert in versioning tools and Agile/Kanban boards: Git, GitHub, Bitbucket, and JIRA
  • Versatile, with quick adaptability to working in a dynamic landscape
  • Good presentation skills using Microsoft PowerPoint and Excel
  • Good team player with mentorship skills
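
Illustrative sketch (not from the original resume): a minimal Spark word-frequency job in Scala, of the kind the text-analytics bullet above describes. The object name, app name, and HDFS input path are hypothetical placeholders.

import org.apache.spark.sql.SparkSession

object TextAnalyticsSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("text-analytics-sketch")
      .getOrCreate()

    // Read raw text from a hypothetical HDFS location.
    val lines = spark.read.textFile("hdfs:///data/raw/text/*.txt").rdd

    // Tokenize, normalize, and count word frequencies in memory.
    val counts = lines
      .flatMap(_.toLowerCase.split("\\W+"))
      .filter(_.nonEmpty)
      .map((_, 1L))
      .reduceByKey(_ + _)
      .sortBy(_._2, ascending = false)

    // Print the 20 most frequent words.
    counts.take(20).foreach { case (word, n) => println(s"$word\t$n") }
    spark.stop()
  }
}

The RDD word count is the classic in-memory Spark pattern; a Dataset-based variant would work equally well.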

Overview

19 years of professional experience

Work History

BigData Engineer

  • Wrote MapReduce code to process and parse data from various sources, storing the parsed data into HBase and Hive using HBase-Hive integration
  • Migrated data from file sources and mount sources in RDBMS systems to Hadoop using Sqoop
  • Exported data to RDBMS servers using Sqoop and processed it for ETL operations
  • Developed Pig Latin scripts to extract data from web server output files and load it into HDFS
  • Created a data pipeline integrating Kafka with a Spark Streaming application written in Scala (a minimal sketch follows this list)
  • Used Spark SQL to read data from external sources and processed the data with the Scala computation framework
  • Designed ETL data pipeline flows to ingest data from RDBMS sources into Hadoop using shell scripts, Sqoop packages, and MySQL
  • Used Pig as an ETL tool for transformations, event joins, and pre-aggregations before storing the data in HDFS
  • Developed Oozie workflows to automate loading data into HDFS and pre-processing it with Pig
  • Developed shell scripts to orchestrate execution of all other scripts (Pig, Hive, and MapReduce) and to move data files within and outside of HDFS
  • Imported data from various data sources, performed transformations using Hive and MapReduce, loaded data into HDFS, and extracted data from Oracle into HDFS using Sqoop
  • Worked on transforming data from HBase to Hive as bulk operations
  • Implemented a POC to migrate MapReduce jobs to Spark RDD transformations
  • Used Spark for near-real-time and batch processing
  • Active member in developing a POC on streaming data using Apache Kafka and Spark Streaming
  • Technology: Java, Hadoop, MapReduce, Hive, Pig, HBase, Cassandra, Flume, Spark, Storm, RabbitMQ, ActiveMQ, Sqoop, AccuRev, ZooKeeper, Oozie, Autosys, shell scripting.
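
Illustrative sketch (not from the original resume) of the Kafka-to-Spark-Streaming pipeline referenced above, using the spark-streaming-kafka-0-10 integration. The broker address, topic name, consumer group, and batch interval are hypothetical placeholders.

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

object KafkaStreamSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("kafka-stream-sketch")
    // Micro-batch interval of 10 seconds (placeholder value).
    val ssc = new StreamingContext(conf, Seconds(10))

    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "localhost:9092",   // placeholder broker
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "sketch-consumer",           // placeholder group id
      "auto.offset.reset" -> "latest"
    )

    // Subscribe to a hypothetical "events" topic.
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      PreferConsistent,
      Subscribe[String, String](Seq("events"), kafkaParams)
    )

    // Count records per micro-batch as a stand-in for real processing.
    stream.map(_.value).count().print()

    ssc.start()
    ssc.awaitTermination()
  }
}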

Big Data Principal Consultant

Telenet BVBA, Liberty Global subsidiary
London
05.2019 - Current
  • Big data SME (Subject matter expert)
  • Responsible for designing Hadoop clusters
  • Translated functional and technical requirements into detailed architecture and design
  • Derive insights into key metrics (KPIs) and performance
  • Work closely with business and cross-functional teams
  • Align team to Agile maturity model focusing on business deliverables and objectives
  • Provide periodic “state-of-the-union” reports to Management for better planning and co-ordination
  • Design scalable and sustainable solutions for long-term business results and ROI
  • Build CI/CD pipeline applications for BigData stack and improve time-to-market index
  • Mature BigData/Kafka as a product and increase the consumption and usage of data across the organization's business groups
  • Provide constructive recommendations on optimal performance of BigData ecosystem components.

Assistant Vice President, Sr. BigData Engineer

JPMorgan Chase & Co
01.2018 - 05.2019
  • Facilitated insightful daily analysis of two petabytes of data for internal processing
  • Provided timely delivery of reports and business data for internal consumption
  • Developed MapReduce programs to parse the raw data, populate staging tables and store the refined data in partitioned tables in the EDW
  • Created Hive queries that helped market analysts spot emerging trends by comparing fresh data with EDW reference tables and historical metrics (a query sketch follows this list)
  • Enabled speedy reviews and first-mover advantages by using Oozie to automate data loading into the Hadoop Distributed File System and Pig to pre-process the data
  • Provided design recommendations and thought leadership to sponsors/stakeholders that improved review processes and resolved technical problems
  • Managed and reviewed Hadoop log files
  • Tested raw data and executed performance scripts
  • Shared responsibility for administration of Hadoop, Hive and Pig
  • Technology stack & tools: Spark, Spark Streaming, Akka, Kafka, Flume, Hive, HBase, Scala, Java, Pig, MapReduce, ZooKeeper, Oozie.
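
Illustrative sketch (not from the original resume): a trend-comparison query of the kind described above, run against Hive tables through Spark SQL in Scala. The database, table, and column names (staging.fresh_metrics, edw.historical_metrics) and the 1.5x threshold are all hypothetical.

import org.apache.spark.sql.SparkSession

object TrendComparisonSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("trend-comparison-sketch")
      .enableHiveSupport()   // read Hive-managed tables
      .getOrCreate()

    // Join current metrics against historical averages to flag emerging trends.
    val trends = spark.sql(
      """
        |SELECT f.product_id,
        |       f.daily_volume,
        |       h.avg_volume,
        |       f.daily_volume / h.avg_volume AS lift
        |FROM staging.fresh_metrics f
        |JOIN edw.historical_metrics h
        |  ON f.product_id = h.product_id
        |WHERE f.daily_volume > 1.5 * h.avg_volume
      """.stripMargin)

    trends.show()
    spark.stop()
  }
}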

Associate

JPMorgan Chase & Co
01.2014 - 12.2017

Analyst

JPMorgan Chase & Co
06.2011 - 12.2013
  • Fine-tuned performance and ensured high availability of infrastructure
  • Designed and developed infrastructure monitoring and reporting tools
  • Developed and maintained configuration management solutions
  • Developed test automation frameworks in collaboration with the rest of the team
  • Created tools to help teams make the most of the available infrastructure
  • Familiarity with Linux scripting languages (Bash/shell)
  • Experience installing, configuring, and maintaining services such as BIND, Apache, MySQL, nginx, etc.
  • Strong grasp of configuration management tools such as Puppet; familiarity with load balancing, firewalls, etc.
  • Proficient with network tools such as iptables, Linux IPVS, HAProxy, etc
  • Ability to build and monitor services on production servers

Lead, Operations

Yodlee Inc
11.2007 - 03.2011
  • Responsible for handling UNIX servers and related infrastructure issues
  • Responsible for engaging SWAT team to mitigate P1Sx incidents
  • Provided Level 3 support for application and UNIX-related issues
  • Provided brief documentation of project implementations and efficiencies achieved
  • Mentored peers to outperform in their roles
  • Consistent performer across the board.

Infrastructure Engineer

Eigenvalue Technologies
06.2004 - 10.2007
  • Responsible for handling UNIX/Windows servers and related infrastructure issues
  • Responsible for maintaining uptime of UNIX/Windows servers
  • Handled initiatives to implement and practice industry-best technologies
  • Provided constructive suggestions and solutions to frequent issues
  • Personally contributed to configuring the Nagios monitoring tool for the organization, which received high visibility and appreciation from management
  • Consistent performer during entire service.

Education

Bachelor of Engineering - Electronics and Communication

Certification: ITIL v3 Certified – AXELOS

Applied for Amazon Web Services (AWS) DevOps certification

Skills

Big Data Ecosystem
