PRASHANTH JAKKULA

Saint Paul

Summary

Results-driven DevOps and SRE Engineer with 8 years of experience in on-call production support and AWS engineering. Proven expertise in cloud build and release management, testing, automation, Linux administration, and configuration management, with a strong emphasis on continuous integration and continuous deployment of software applications. Skilled in integrating Java, .NET, and Python applications to optimize deployment processes. Committed to enhancing operational efficiency and delivering high-quality solutions in dynamic environments.

Overview

14 years of professional experience
1 Certification

Work History

Production Application Support/Dev-Ops SRE Engineer

HBO
11.2019 - Current
  • Company Overview: HBO's current team manages multiple legacy Java applications that run live in production and must be available to users and clients every day.
  • Technology stack includes Angular 7, HTML5, CSS, Java 8, microservices, Spring Boot, and Maven, with Bamboo for CI/CD build and deployment and Bitbucket (Git) as the source repository.
  • Implemented AWS solutions using EC2, EKS, S3, RDS, EBS, Elastic Load Balancer, DynamoDB, Lambda, Redshift, Route 53, CloudFormation, Cloud Foundry, and Auto Scaling groups.
  • Strengthening security by implementing and maintaining Network Address Translation in the company’s network.
  • Built templates to create custom-sized VPCs, subnets, NAT, IGW, route tables, ECS, ALB, ELB, Lambda, S3 buckets, CloudFront, and security groups to ensure successful deployment of web applications and database templates on AWS.
  • Experience in Private Cloud and Hybrid cloud configurations, patterns, and practices in Windows Azure and SQL Azure and in Azure web and database deployments.
  • Used SQL Server Integration Services (SSIS) to import the Logs data into SQL Server.
  • Worked with Windows Azure IaaS: Virtual Networks, Linux, Virtual Machines, Cloud Services, Resource Groups, ExpressRoute, Traffic Manager, VPN, routers, AD, load balancing, Application Gateways, and auto-scaling.
  • Worked on building and using REST APIs.
  • Worked on installing and configuring ForgeRock OpenAM, OpenIDM, OpenIG, and OpenDJ.
  • Wrote PowerShell scripts to automate Azure cloud provisioning end to end, including infrastructure, VMs, storage, and Azure firewall rules.
  • Hands-on experience creating ARM templates on the Azure platform.
  • Create and maintain highly scalable, fault-tolerant, multi-tier AWS and Azure environments spanning multiple availability zones using Terraform and CloudFormation.
  • Provide 24/7 production support for the application, including overnight incident and issue resolution for clients.
  • Built monitoring and dashboards on platforms such as Grafana, Datadog, and Prometheus.
  • Experience in designing and implementing REST based Web Service API(s) in a transaction processing environment.
  • Worked with product managers to drive strategic value through custom applications built on Salesforce as well as the internal software stack, implementing Salesforce APIs in applications.
  • Developed microservice onboarding tools with Python and Jenkins, simplifying the creation and maintenance of build jobs and Kubernetes deployments and services.
  • Deployed and ran distributed systems at enterprise organizations and provided substantial feedback on distributed system designs.
  • Built and maintained Docker container clusters managed by Kubernetes on GCP, using Linux, Bash, Git, and Docker.
  • Worked on Google Cloud Platform (GCP) services such as Compute Engine, Cloud Load Balancing, Cloud Storage, Cloud SQL, Stackdriver monitoring, and Cloud Deployment Manager.
  • Set up GCP firewall rules to allow or deny traffic to and from VM instances based on specified configuration, and used GCP Cloud CDN to deliver content from cache, significantly improving user experience and latency.
  • Handled PagerDuty incident management, Pingdom alerting, and on-call support for DevOps incidents.
  • POC on GCP's VPC using Firewall Rules, Routes, Cloud Security, External IP Addressing, Load Balancers, Cloud DNS, CDN on GCP.
  • Created a Python script with Boto3 that scans AWS environments for untagged instances and feeds the results to a monitoring tool (Nagios) to warn when any asset is left untagged; a minimal sketch follows this job's bullet list.
  • Provided technical support for database and application activities in production and development environments, including design, installation, implementation, operation, and maintenance of Oracle 11g and MongoDB.
  • Administered on-premises RHEL Linux servers, using file transfer and access tools such as SFTP, WinSCP, SSH, and the Linux CLI.
  • Implemented multiple CI/CD pipelines as a part of DevOps role for on-premises and cloud-based software using Jenkins, Chef and AWS/Docker.
  • Implemented and Installed Ansible configuration management system. Used Ansible to manage Web application, Environment configuration Files, Users, Mount point and packages.
  • Successfully migrated legacy Maven and Ant applications to the CI/CD pipeline.
  • Also migrated legacy applications deployed on WebLogic 10c, 12c.
  • Deployed and Tested code on Apache Tomcat Server in both Local and Dev Environments.
  • Used Ant build scripts to build and deploy the application.
  • Leveraged Django scripting and network management and orchestration tools to automate the transformation of the network.
  • Used Maven for dependency management, build applications and deploy to the containers, application servers and create versions in the repository.
  • Fixed bugs in newly developed applications.
  • Set up Bamboo for DevOps, migrating legacy applications to automated build and deployment methods.
  • Monitored Tidal jobs (automated job scheduler) and assisted with issue resolution by coordinating with various teams.
  • Worked with WebLogic and with Git as the repository, along with Stash, Confluence, and Jira tools.
  • Proficient in Jira to track issues and close them after resolution.
  • Good knowledge of Git Bash commands to import code and push changes to Bitbucket and GitHub as needed.
  • Participate in Scrum/Agile ceremonies, including daily stand-up calls, giving me a solid working knowledge of Scrum.
  • Resolved offshore resources' issues on legacy applications while working on new applications, strengthening onsite-offshore coordination.
  • Experience in Monitoring tools like SiteScope, AppDynamics (application performance management), Splunk.
  • Environment: Azure, Google Cloud Platform (GCP), AWS, EKS, Ansible, REST APIs, ForgeRock OpenAM, OpenIG, Grafana, MongoDB, Prometheus, Chef, Python, Django, Jenkins, RHEL, SFTP, WinSCP, SSH, Kubernetes, Terraform, Salesforce, CloudWatch, Docker, Git, OpenShift, Red Hat Linux, Docker Swarm, Nagios, Splunk, Maven, Agile/SCRUM, ANT, Elasticsearch, Logstash, Kibana, Apache web server, Tomcat, JFrog Artifactory, Jira, Ruby, Shell scripting.
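
Below is a minimal sketch of the Boto3 untagged-instance scan referenced above; the required tag keys, region, and the Nagios hand-off (via exit code) are illustrative assumptions rather than the exact production script.

    # Sketch: flag EC2 instances missing required tags (assumed keys) and
    # exit non-zero so a Nagios-style check can raise a warning.
    import boto3

    REQUIRED_TAGS = {"Name", "Owner"}  # assumed tagging policy

    def find_untagged_instances(region="us-east-1"):
        ec2 = boto3.client("ec2", region_name=region)
        untagged = []
        for page in ec2.get_paginator("describe_instances").paginate():
            for reservation in page["Reservations"]:
                for instance in reservation["Instances"]:
                    tags = {t["Key"] for t in instance.get("Tags", [])}
                    if not REQUIRED_TAGS.issubset(tags):
                        untagged.append(instance["InstanceId"])
        return untagged

    if __name__ == "__main__":
        missing = find_untagged_instances()
        print(f"{len(missing)} untagged instances: {missing}")
        raise SystemExit(1 if missing else 0)  # 1 = WARNING in Nagios terms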

DevOps/AWS Engineer

Amtrak
08.2018 - 10.2019
  • Company Overview: The National Railroad Passenger Corporation, Amtrak, is a corporation striving to deliver a high quality, safe, on-time rail passenger service that exceeds customer expectations.
  • Developed build and deployment processes for Pre-production environments.
  • Writing Manifests/Modules for Installation and Updating of Yum repositories on the Server using Puppet infrastructure.
  • Configured Route 53 using CFT templates, assigned DNS mappings for the AWS servers, and troubleshot issues with load balancers, Auto Scaling groups, and Route 53.
  • Implemented a Continuous Delivery pipeline with Docker, Kubernetes, Elastic Search, Jenkins and GitHub and AWS AMI's.
  • Secured the GCP (Google Cloud Platform) infrastructure with private and public subnets and security groups, and leveraged GCP services such as compute, auto-scaling, and VPC to build secure, scalable systems that handle unexpected loads.
  • Set up GCP firewall rules to allow or deny traffic to and from VM instances based on specified configuration, and used GCP Cloud CDN to deliver content from cache locations, drastically improving user experience and latency.
  • Automated Weekly releases with Maven scripting for Compiling Java Code, Debugging and Placing Builds into Maven Repository.
  • Built integrations between Salesforce and other systems using REST, SOAP, and Bulk APIs; built, tested, and deployed Salesforce applications; and migrated changes from development to test to production environments using CI technologies such as Salesforce DX, Jenkins, and ANT.
  • Monitoring setup and dashboard creation using Grafana, Elastic Search, Prometheus, Graylog and Splunk.
  • Installed, configured, and upgraded Red Hat Enterprise Linux 4.x to 5.x and configured patching schedules to meet quarterly patching criteria.
  • Developed automation scripting in Shell using Puppet to deploy and manage Java applications across Linux servers.
  • Handled Azure incident management: coordinated incidents requiring multi-vendor engagement and drove an effective, efficient incident management process to ensure timely service restoration and resolution.
  • Used Puppet to automate configuration management and to manage web applications, config files, databases, commands, users, mount points, and packages.
  • Experience writing Puppet manifests for Apache installation and configuration as well as for various deployments.
  • 24/7 on call production support. AWS automation through Puppet and Ansible environment.
  • Used Docker coupled with the load-balancing tool Nginx to achieve continuous delivery goals in a highly scalable environment.
  • Experience in designing and deploying AWS Solutions using EC2, EKS, S3, EBS, Elastic Load balancer (ELB), auto scaling groups.
  • Containerization of Web application using Docker and Kubernetes and Database maintenance.
  • Involved in writing parent POM files to establish the code quality tools integration.
  • Collaborated with development support teams to setup a continuous delivery environment with the use of Docker.
  • Involved in installing and managing automation and monitoring tools such as Nagios, Splunk, and Puppet on Red Hat Linux.
  • Used Kubernetes as an open-source platform for automating deployment, scaling, and operations of application containers across clusters of hosts, providing container-centric infrastructure.
  • Used Kubernetes to deploy applications quickly and predictably.
  • Used ServiceNow for the release management, incident management, and change management processes.
  • Integrated Docker with various tools such as AWS, Puppet, Vagrant, Jenkins, and VMware.
  • Developed and implemented Software Release Management strategies for various applications in the agile process.
  • Experience migrating SVN repositories to GIT.
  • Developed automation scripting in Python (core) using Puppet to deploy and manage Java applications across Linux servers.
  • Configured and installed the monitoring tools Grafana, Kibana, Logstash, and Elasticsearch on the servers.
  • Automated cloud deployments using Puppet, Python (boto and Fabric), and AWS CloudFormation templates; a CloudFormation sketch follows this job's bullet list.
  • Business data analysis using Big Data tools like Splunk, ELK.
  • Configured SonarQube code quality tool and integrated it with Jenkins. Implemented SonarQube to analyze code quality metrics, to verify the coding standards and setup quality gates to allow/fail builds as per requirement.
  • Created and tracked a release improvement process applied across all IT domains and initiated new projects related to release management.
  • Releasing code to testing regions or staging areas according to the schedule published.
  • Participated in all Product Release and Patches.
  • Environment: AWS, EKS, RTC, SVN (Subversion), GCP (Google Cloud Platform), Anthill Pro, REST APIs, Elasticsearch, Grafana, Salesforce DX, Salesforce applications, Prometheus, ANT, Maven, Puppet, Jenkins, ClearCase, Unix, Linux, Perl, Python, Ruby, Node.js, Bamboo, Hudson, Git, JIRA, Shell Script, WebLogic.
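
As a hedged illustration of the boto-driven deployment automation above, the sketch below creates a CloudFormation stack from a local template and waits for completion; the stack name, template path, and capability flag are hypothetical placeholders.

    # Sketch: create a CloudFormation stack with boto3 and block until done.
    import boto3

    def deploy_stack(stack_name, template_path, region="us-east-1"):
        cfn = boto3.client("cloudformation", region_name=region)
        with open(template_path) as f:
            template_body = f.read()
        cfn.create_stack(
            StackName=stack_name,
            TemplateBody=template_body,
            Capabilities=["CAPABILITY_NAMED_IAM"],  # assumed; needed only for IAM resources
        )
        # Wait until the stack reaches CREATE_COMPLETE (raises on failure).
        cfn.get_waiter("stack_create_complete").wait(StackName=stack_name)

    if __name__ == "__main__":
        deploy_stack("web-tier-dev", "templates/web-tier.yaml")  # placeholder names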

Dev-Ops AWS Engineer

State Street Corporation
04.2017 - 07.2018
  • Company Overview: State Street is an American worldwide financial services company providing asset management, investment management and trading services to financial institutions.
  • Delivered applications to various environments, deploying from lower environments through production.
  • Installed and deployed Red Hat Enterprise Linux 6.x/7.x and CentOS, and installed packages and patches for Red Hat Linux servers.
  • Installed, configured, and managed monitoring tools such as Splunk and Nagios for resource, network, and log/trace monitoring.
  • Worked on Kubernetes charts using Helm to produce reproducible builds of Kubernetes applications, and managed Kubernetes manifest files.
  • Utilized Kubernetes and Docker as the runtime environment of the CI/CD system to build, test, and deploy; launched Docker containers on EC2 instances and managed them in Kubernetes and Docker Swarm.
  • Knowledge in LINUX operating system, services & utilities like NFS/AutoFS, Samba, NTP, etc.
  • Worked with Splunk as a monitoring tool and have a good understanding of Zabbix and Kibana.
  • Responsible for orchestrating CI/CD processes by responding to Git triggers, human input and dependency chains and environment setup and deployed CI/CD Pipelines.
  • Used Kubernetes / Docker Swarm for automating deployment, scaling, and operations of application containers across clusters of hosts, providing container-centric infrastructure.
  • Designed and implemented a CI (continuous integration) system: configured Jenkins servers and nodes, created the required Python scripts, and created/configured VMs (Windows/Linux); a sketch of the Jenkins-triggering script follows this job's bullet list.
  • Automated build and deployment through Jenkins and deployment tools, using the image or version created by Jenkins.
  • Migrating applications to Dev, QA, UAT and Production environments.
  • Experience using the Struts2, Spring, and Hibernate frameworks for various web/portal-based application development.
  • Involved in the implementation calls of each release and provided post production support activities and also tracked DEV, Testing, Pre-prod and production environments.
  • Used CVS as a source control tool and Rational Team Concert (RTC) to track aspects of work tasks.
  • Used Jira to define jobs and tasks and as the ticket-tracking, change management, and Agile/Scrum tool.
  • Environment: Agile, Scrum, Splunk, Nagios, Kubernetes, Docker Swarm, Helm, Puppet, Zabbix, Kibana, Jenkins, Maven, ANT, Ruby, Shell, Python, WebLogic server, Load Balancers, Apache Tomcat 7.x, Docker, GitHub, CloudWatch, XML, SVN, configured plug-ins for Apache, RedHat Linux, Centos, Solaris.
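
The sketch below illustrates the kind of Python glue used to drive Jenkins in the CI setup above: it triggers a parameterized job through Jenkins's standard REST endpoint. The server URL, job name, parameters, and credentials are placeholders, not the actual environment.

    # Sketch: trigger a parameterized Jenkins job over its REST API.
    import requests

    JENKINS_URL = "https://jenkins.example.com"   # placeholder
    AUTH = ("ci-bot", "api-token")                # hypothetical user / API token

    def trigger_build(job_name, params=None):
        endpoint = f"{JENKINS_URL}/job/{job_name}/buildWithParameters"
        resp = requests.post(endpoint, auth=AUTH, params=params or {}, timeout=30)
        resp.raise_for_status()
        # Jenkins answers 201 and puts the queue item URL in the Location header.
        return resp.headers.get("Location")

    if __name__ == "__main__":
        print("Queued:", trigger_build("app-deploy", {"ENV": "qa"}))  # placeholders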

Software Developer

Fidelity Investments
06.2015 - 03.2017
  • Company Overview: Fidelity Investments Inc., commonly referred to as Fidelity, earlier as Fidelity Management & Research or FMR, is an American multinational financial services corporation.
  • Agile development (2-week sprints/iterations) with Maven and the JIRA issue navigator as part of everyday work.
  • Part of an engineering team designing a new platform to host applications on AWS.
  • Work included cloud technologies such as Elastic Beanstalk, VPC, EC2, Elastic File System, SNS, SES, S3, IAM, RDS, Route 53, CloudWatch, and CloudTrail.
  • Responsible for user account administration in Active Directory, Exchange 2003/2007, and Unified Computing System (UCS) servers.
  • Developed Chef recipes to configure, deploy and maintain software components of existing infrastructure, used Chef to manage applications, configure database, files, users and packages.
  • Experience in handling Chef cookbook recipes to automate installation of Middleware Infrastructure like Apache Tomcat, JDK and configuration tasks for new environments.
  • Automated the cloud Deployment using Chef, Python and AWS cloud formation templates. Used Chef for unattended bootstrapping in AWS.
  • Responsible for creation of design artifacts such as use cases and sequence diagrams.
  • Used GitHub to maintain file versions, handled code merges from branch to trunk, and created new branches when new feature implementation started.
  • Worked on automating the deployments for WebSphere Process Server and WebSphere Application Server applications.
  • Upgraded applications on Red Hat Linux systems and managed multipathing on Red Hat and Solaris using EMC PowerPath and native multipathing software.
  • Managed release infrastructure and developed UI control tools to manage software releases/deployments to all servers using Jenkins and Git.
  • Experienced in SVN and Jenkins. Responsible for creating the project space in confluence.
  • Developed, Tested and Deployed application in Apache Tomcat 7.0 and in Web Logic AS.
  • Used Maven for building, deploying applications, creating JPA based entity objects and compiling GWT applications.
  • Developed and maintained scripts for deployment automation to multiple environments, working with Elasticsearch, Kubernetes, Docker, and Kafka; a Kubernetes-client sketch follows this job's bullet list.
  • Environment: AWS, Azure, Ansible, Maven, Git, Docker, SVN, Kubernetes, Tomcat, ELK (Elastic search, Kibana, Logstash), Junit, jBoss, WebLogic, Oracle, Jira, ANT, Ruby, Shell scripting, Splunk, Jenkins, Python, Chef, Tivoli, Docker Swarm, Apache, MySQL, Jfrog Artifactory, Kafka.
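
A small sketch of the deployment-automation scripting mentioned above, using the official Kubernetes Python client to report which image and replica count each Deployment in a namespace is running; the namespace is a placeholder and kubeconfig access to the cluster is assumed.

    # Sketch: list Deployments in a namespace with their images and readiness.
    from kubernetes import client, config

    def report_deployments(namespace="dev"):   # placeholder namespace
        config.load_kube_config()              # use load_incluster_config() in-cluster
        apps = client.AppsV1Api()
        for dep in apps.list_namespaced_deployment(namespace).items:
            images = [c.image for c in dep.spec.template.spec.containers]
            ready = dep.status.ready_replicas or 0
            print(f"{dep.metadata.name}: {images} ({ready}/{dep.spec.replicas} ready)")

    if __name__ == "__main__":
        report_deployments()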

Build Release Engineer

Param Technologies
11.2013 - 05.2015
  • Company Overview: Paramtech CAD Services Pvt Ltd, earlier known as “PARAM TECHNOLOGIES”, is a leading consulting and engineering services provider to the automotive, off-highway, and industrial manufacturing industries.
  • Developed and implemented Software Release Management strategies for various applications according to the agile process.
  • Installed, Configured and Administered Hudson/Jenkins Continuous Integration Tool.
  • Developed build and deployment scripts using ANT and MAVEN as build tools in Jenkins to move from one environment to other environments.
  • Experience in the usage of data center automation and configuration management tools such as Ansible, Vagrant, Docker, etc.
  • Experienced with Windows, Linux/UNIX environments and scripting for Build & Release automation.
  • Developed automation framework for Application Deployments to the cloud environments.
  • Worked on Managing the Private Cloud Environment using Chef.
  • Performed Branching, Tagging, Release Activities on Version Control Tools: SVN, GIT.
  • Developed Perl and shell scripts to automate the build and release process, and developed custom scripts to monitor repositories and server storage; a storage-check sketch (in Python) follows this job's bullet list.
  • Automated cloud deployments using Chef, Python (boto and Fabric), and AWS CloudFormation templates.
  • Used Maven as build tool on Java projects for the development of build artifacts on the source code.
  • Deployed the Java applications into web application servers like JBoss.
  • Performed and deployed Builds for various Environments like QA, Integration, UAT and Productions Environments.
  • Configured Jenkins to use MetaCase software to build Java code and run the full CI process on the Java code generated by MetaCase.
  • Experience implementing project change control in software release management across multiple technical environments, including UNIX, Linux, and Windows, using Ansible.
  • Troubleshot and resolved build failures caused by infrastructure issues, reducing them by 95% and stabilizing the build process.
  • Set up and executed an effective code review process.
  • Troubleshot build and deployment issues with minimal downtime.
  • Documented release metrics, software configuration process. Used Maven scripts to build the source code. Supported and helped to create Dynamic Views and Snapshot views for end users.
  • Environment: DevOps, Java, Ant, Maven, Jenkins, Hudson, Chef, Python, Perl, Git, SVN, Apache web server, JBoss, Apache JMeter, MetaCase, Windows.
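
The repository and storage monitoring above was implemented in Perl/shell; the sketch below shows the same idea in Python for illustration. The watched paths and the 85% threshold are assumptions.

    # Sketch: warn when watched filesystems exceed a usage threshold.
    import shutil

    WATCHED_PATHS = ["/var/lib/git", "/opt/builds", "/"]  # placeholder paths
    THRESHOLD_PCT = 85                                    # assumed threshold

    def check_storage():
        alerts = []
        for path in WATCHED_PATHS:
            usage = shutil.disk_usage(path)
            used_pct = usage.used * 100 / usage.total
            if used_pct >= THRESHOLD_PCT:
                alerts.append(f"{path} at {used_pct:.1f}% used")
        return alerts

    if __name__ == "__main__":
        for alert in check_storage():
            print("WARNING:", alert)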

Build Release Engineer

Quantum-BSO Tech Pvt Ltd
03.2012 - 10.2013
  • Company Overview: Quantum is a transportation and logistics solution provider that delivers IT systems that are unique and the right fit for each organization.
  • Deployment and support of quality and production Oracle databases.
  • Managed and administered all UNIX servers, including Linux operating systems, applying relevant patches and packages during regular maintenance periods using Red Hat Satellite server, YUM, and RPM tools.
  • Planned and performed upgrades to Linux operating systems (RHEL 5.x/6.x, SUSE 10/11, CentOS 5/6) and hardware maintenance such as changing memory modules and replacing disk drives.
  • Handling NFS, Auto Mount, DNS, LDAP related issues.
  • Monitored CPU, memory, physical disks, hardware and software RAID, multipath, file systems, and the network using Nagios 4.0; a sample Nagios-style check sketch follows this job's bullet list.
  • Performing failover and integrity test on new servers before rolling out to production.
  • Planned, scheduled and Implemented OS patches on Linux boxes as a part of proactive maintenance.
  • Identify, troubleshoot, and resolve problems with the OS build failures.
  • Used Chef to manage application servers and services such as Apache, MySQL, and Tomcat.
  • Installed, configured, and customized services such as Sendmail, Apache, and FTP servers to meet user needs and requirements.
  • Performing kernel and database configuration optimization such that it limits I/O resource utilization on disks.
  • Environment: Sub Version, Clear Case, Gradle, Maven, ANT, Jenkins, Git, Chef, Hudson, ATG, Web Sphere, JBoss Application Servers, Apache Tomcat, Agile/Scrum, Python, Ansible, SDLC, Docker, Windows, Linux.
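
As a hedged illustration of the Nagios 4.0 monitoring above, the sketch below follows the standard Nagios plugin convention (exit 0 OK, 1 WARNING, 2 CRITICAL) for a simple load-average check; the thresholds are illustrative, not the production values.

    # Sketch: Nagios-style load-average check with conventional exit codes.
    import os
    import sys

    WARN, CRIT = 4.0, 8.0  # assumed 1-minute load-average thresholds

    def main():
        load1, _, _ = os.getloadavg()
        if load1 >= CRIT:
            print(f"CRITICAL - load average {load1:.2f}")
            sys.exit(2)
        if load1 >= WARN:
            print(f"WARNING - load average {load1:.2f}")
            sys.exit(1)
        print(f"OK - load average {load1:.2f}")
        sys.exit(0)

    if __name__ == "__main__":
        main()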

Education

Bachelor of Technology - Information Technology

CVR College of Engineering
01.2012

Skills

  • Problem-solving skills
  • Project coordination
  • Statistical analysis

Certification

  • AWS Certified DevOps Engineer – Professional, https://www.credly.com/badges/6039d520-ac8f-49b5-a808-e365be4acc5c/public_url
  • Scrum Certification for Java Developer (Scrum Java Developer – International Scrum Institute), 43243477073032

Languages

English
Full Professional

Interests

  • Web Development and Design
  • Coding and Programming
  • App Development
  • Drone Piloting
  • Video Game Design
  • Augmented Reality (AR) Development
  • Video Gaming
  • Designing and Printing 3D Models

Technical Summary

Linux, Red Hat Linux, Ubuntu, CentOS, openSUSE, Windows, GitHub, Bitbucket, SVN, CVS, Jenkins, Docker, Chef, Puppet, Vagrant, Ansible, Kubernetes, ANT, Maven, Nagios, Prometheus, Grafana, Splunk, CloudWatch, ELK, Datadog, AppDynamics, Amazon Web Services (AWS), OpenStack, Microsoft Azure, Google Cloud Platform (GCP), Pivotal Cloud Foundry, VMware ESX/ESXi, VirtualBox, vSphere, C, Java, JavaScript, HTML, CSS, Shell scripting, Python, Django, YAML, JSON, Perl, PHP, Node.js, DB2, Oracle, SQL Server, MySQL, Cassandra, WebLogic, JBoss, Apache Tomcat, WebSphere, IIS, FTP/SFTP, SMTP, HTTP/HTTPS, NDS, DHCP, NFS, TCP/IP, CodeCommit, JIRA, Bugzilla, Remedy, ForgeRock OpenAM, OpenIG

Timeline

Production Application Support/Dev-Ops SRE Engineer

HBO
11.2019 - Current

DevOps/AWS Engineer

Amtrak
08.2018 - 10.2019

Dev-Ops AWS Engineer

State Street Corporation
04.2017 - 07.2018

Software Developer

Fidelity Investments
06.2015 - 03.2017

Build Release Engineer

Param Technologies
11.2013 - 05.2015

Build Release Engineer

Quantum-BSO Tech Pvt Ltd
03.2012 - 10.2013

Bachelor of Technology - Information Technology

CVR College of Engineering