Shreyas Prakash
About
Shreyas Prakash is from the San Francisco Bay Area. Shreyas works in the following industries: "Telecommunications", "Higher Education", "Automotive", and "Information Technology & Services". Shreyas is currently a Software Engineer at Tesla, located in the San Francisco Bay Area. In Shreyas's previous role as a Software Engineer Intern at Tesla, Shreyas worked in the San Francisco Bay Area until Apr 2021. Prior to joining Tesla, Shreyas was a Software Engineer Intern at Nokia, based in the San Francisco Bay Area. Prior to that, Shreyas was a Research Trainee at Harvard University, based in the Greater Boston Area from Nov 2019 to Apr 2020. Shreyas started working as a Software Engineer at Wipro Limited in the Bengaluru Area, India, in Aug 2018. From May 2017 to Jul 2017, Shreyas was a Software Engineer Intern at Mitra Softwares, based in the Shivamoga Area, India.
Shreyas Prakash's current jobs
Shreyas Prakash's past jobs
- Built a Kafka Connect monitoring application that monitors the config, status, and offset topics, along with consumer lag for a given list of consumers.
- Created a Spring Boot job to pull Connect metrics from the Kafka broker by extracting messages from the config, offset, and status topics and syncing them to Elasticsearch.
- Developed a Spring Boot job to periodically pull consumer metrics for the given consumer groups using the Kafka Admin Client library and sync the data to Elasticsearch (see the sketch below).
- Implemented Elasticsearch index rotation for Kafka Connect status, offset, and consumer group data; purged indexes older than 30 days.
- Responsible for implementing SSL configurations and ADFS tokens for conductor calls.
Technologies and Tools Used: Java (Spring Boot), Kafka, Kafka Connect, Elasticsearch, Kibana, Bitbucket, JIRA
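For illustration only (not the original Java/Spring Boot code): a minimal Python sketch of the consumer-lag calculation described above, assuming the kafka-python and elasticsearch-py client libraries and placeholder broker and group names.

# Illustrative sketch only: the original was a Java/Spring Boot service; this shows the same
# consumer-lag idea in Python. Broker address, group names, and index naming are assumptions.
from datetime import datetime, timezone

from kafka import KafkaAdminClient, KafkaConsumer
from elasticsearch import Elasticsearch

BOOTSTRAP = "localhost:9092"      # assumption: Kafka broker address
GROUPS = ["connect-sink-group"]   # assumption: consumer groups to monitor

admin = KafkaAdminClient(bootstrap_servers=BOOTSTRAP)
consumer = KafkaConsumer(bootstrap_servers=BOOTSTRAP)
es = Elasticsearch("http://localhost:9200")

for group in GROUPS:
    # Committed offsets per partition for the consumer group.
    committed = admin.list_consumer_group_offsets(group)
    # Latest (end) offsets for the same partitions.
    end_offsets = consumer.end_offsets(list(committed))
    for tp, meta in committed.items():
        doc = {
            "group": group,
            "topic": tp.topic,
            "partition": tp.partition,
            "lag": end_offsets[tp] - meta.offset,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }
        # One lag document per partition; a dated index name makes 30-day rotation easy.
        index_name = f"kafka-consumer-lag-{datetime.now(timezone.utc):%Y.%m.%d}"
        es.index(index=index_name, document=doc)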
- Built RESTful APIs using Flask (Python framework) for retrieving information about nodes, TB servers, K8s, GlusterFS, and regression environments of various setups and infra servers.
- Implemented serialization and deserialization of JSON data using Marshmallow for validation and optimization (see the sketch below).
- Designed the high-level architecture for a distributed system responsible for checking server health and maintaining transaction details. Implemented an LRU cache eviction technique, consistent-hashing database partitioning, and IP-hash load balancing, which improved system performance by 50%.
- Integrated RESTful API responses with MongoDB using SQLAlchemy. Performed analytics on the data stored in MongoDB and was responsible for optimizing those analytics.
- Created a UI/UX using Jinja2, Flask, HTML5, and CSS3 for performing operations on setups and infra servers. Integrated the UI with the RESTful APIs and worked on creating dynamic content.
- Responsible for running and deploying Ingress, ELK (Elasticsearch, Logstash, Kibana), FSP (Fabric Services Platform), and Grafana on Kubernetes (K8s) and performing health checks. Configured pods to use PersistentVolumeClaims (PVCs) for storage.
- Created dynamic access to Gluster volumes through Heketi, which acts as a RESTful API interface.
Technologies and Tools Used: Python, Flask, NoSQL, MongoDB, Marshmallow, SQLAlchemy, Kubernetes, Docker, containers, GitLab, Confluence, VS Code, JIRA.
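For illustration only: a minimal Flask + Marshmallow sketch of the validation and serialization pattern described above. The resource name, schema fields, and in-memory store are hypothetical stand-ins (the original APIs were backed by MongoDB).

# Illustrative sketch only: hypothetical /nodes resource with Marshmallow (de)serialization.
from flask import Flask, jsonify, request
from marshmallow import Schema, fields, ValidationError

app = Flask(__name__)

class NodeSchema(Schema):
    name = fields.Str(required=True)
    ip = fields.Str(required=True)
    environment = fields.Str(load_default="regression")

node_schema = NodeSchema()
nodes = []  # in-memory stand-in for the real datastore

@app.route("/nodes", methods=["POST"])
def create_node():
    try:
        node = node_schema.load(request.get_json())  # validate + deserialize JSON input
    except ValidationError as err:
        return jsonify(err.messages), 400
    nodes.append(node)
    return jsonify(node_schema.dump(node)), 201

@app.route("/nodes", methods=["GET"])
def list_nodes():
    # Serialize the stored records back to JSON.
    return jsonify(node_schema.dump(nodes, many=True))

if __name__ == "__main__":
    app.run(debug=True)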
- Built Python scripts for the "Automated Tasks" and "Manual Tasks" of SPINE workflows described in JSON, using Pydra.
- Implemented two entities in SPINE, "Workflow Executor" and "Task Execution", representing running instances of workflows and tasks with a specific set of inputs.
- Executed instances of a workflow with a specific set of inputs and configuration, producing outputs: the "Workflow" is the model and the "Workflow Executor" is an instance of a workflow execution (a minimal sketch of this model follows below).
- Responsible for creating an algorithm for the "Workflow Editor", which helps researchers around the world perform the "Manual Tasks" and "Automated Tasks" of a SPINE workflow and produces accurate results.
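For illustration only (not the actual SPINE or Pydra code): a minimal plain-Python sketch of the model described above, where a Workflow is the model and a WorkflowExecutor with its TaskExecutions represents one run with specific inputs; all names here are hypothetical.

# Illustrative sketch only: hypothetical Workflow vs. WorkflowExecutor/TaskExecution split.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

@dataclass
class Task:
    name: str
    run: Callable[[Dict[str, Any]], Any]  # an "Automated Task"; a "Manual Task" would await user input

@dataclass
class Workflow:
    # The model: an ordered list of tasks, independent of any particular run.
    name: str
    tasks: List[Task] = field(default_factory=list)

@dataclass
class TaskExecution:
    # A running or finished instance of one task with its inputs and output.
    task: Task
    inputs: Dict[str, Any]
    output: Any = None
    status: str = "pending"

@dataclass
class WorkflowExecutor:
    # An instance of a workflow execution with a specific set of inputs.
    workflow: Workflow
    inputs: Dict[str, Any]
    executions: List[TaskExecution] = field(default_factory=list)

    def run(self) -> None:
        for task in self.workflow.tasks:
            execution = TaskExecution(task=task, inputs=self.inputs)
            execution.output = task.run(self.inputs)
            execution.status = "done"
            self.executions.append(execution)

# Example: a one-task workflow executed with specific inputs.
wf = Workflow(name="demo", tasks=[Task("sum", lambda ins: ins["a"] + ins["b"])])
WorkflowExecutor(workflow=wf, inputs={"a": 1, "b": 2}).run()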
- Created an efficient mechanism to store a large set of modeling data using Microsoft Azure cloud services.
- Developed automated software using AngularJS and ASP.NET MVC responsible for testing variations in transactions, delays, and important triggers. The software was tested on 15 platforms for variations and triggers.
- Worked with Oracle 11g and MySQL to maintain large data sets and provided efficient queries for information retrieval.
- Worked with Amazon Web Services and Crosslink to provide access to various organizations and to control the user interface.
- Certificate holder of the Wipro Code of Business Conduct.
- Developed automated billing software using IntelliJ IDEA, HTML5, CSS3, PHP, and WampServer.
- Worked efficiently in analyzing the code for optimization.
- Worked with MySQL Server, which provides cross-platform support. It also supports group commit, gathering transactions from multiple connections together to increase the number of commits per second, and its query cache was used for fast retrieval of information.