Experience
U.S. Food & Drug Administration (FDA)
Jan 2024 – Present
- Explored and identified datasets for large language models (LLMs), focusing on question-answering tasks for biomedical and scientific research.
- Performed thorough data cleaning and preprocessing to prepare datasets for LLM-based tasks.
- Tested multiple LLMs, including Llama 3, Llama 3.1, Llama 3.2, and Llama 3.3, evaluating response quality, accuracy, and response time.
- Developed Python scripts for end-to-end processing, including dataset formatting, prompt creation, output generation, and performance comparison using metrics like Euclidean distance and cosine similarity.
- Used the Nomic Embed model to compute embedding vectors, improving similarity comparisons and response accuracy.
- Configured and managed AWS CLI to interact with AWS services, automating tasks such as resource provisioning, deployments, and monitoring through command-line operations.
- Leveraged Amazon Bedrock to deploy pre-trained foundation models, enabling efficient development and scaling of customized AI/ML applications.
- Leveraged aider-chat, an AI-driven coding assistant, to streamline software development processes, enhance code quality, and accelerate project timelines.
- Designed and implemented a benchmarking system using SQLite3 to store and analyze LLM-generated results.
- Utilized Amazon S3 to store data and integrated it with applications to facilitate file uploads, downloads, and sharing of assets in a cloud environment.
- Used GitLab to manage and update changes in project code, ensuring version control, collaboration, and seamless integration of new features and bug fixes.
- Conducted question-answering (Q&A) benchmarking to ensure accuracy and proper referencing in responses, compiling detailed evaluation reports.
- Packaged applications with Docker, bundling all required dependencies, optimizing image sizes, and managing environments with Python virtual environments (venv) and requirements.txt for seamless deployment across AWS and HPC systems.
- Containerized applications using Docker and deployed them on AWS servers, validating performance in a high-compute environment.
- Created systemd service configurations to ensure high availability and automatic startup on system boot.
- Automated critical processes, including system updates and log capture, to streamline operations, reduce manual effort, and enhance monitoring and troubleshooting.
- Established a new pre-production environment, bridging the gap between development and production to enhance stability.
- Monitored AWS EC2 instances to optimize resource utilization, ensure cost efficiency, and improve overall system performance.
- Collaborated with cross-functional teams to secure GPU access and optimize AI model performance.
- Provided ongoing support, documentation, and system enhancements to improve workflow efficiency and maintain system reliability.
Environment: AWS, LLMs, Bash, Python, GitLab, Docker, Amazon Bedrock
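The metric-based comparison of model outputs described above can be sketched as follows. This is a minimal illustration, not the FDA project's actual code; it assumes embeddings arrive as plain lists of floats (e.g., from the Nomic Embed model), and `score_response` is a hypothetical helper name.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|); assumes non-zero vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def euclidean_distance(a, b):
    # Straight-line distance between two embedding vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def score_response(reference_vec, candidate_vec):
    # Combine both metrics into one comparison record for benchmarking
    return {
        "cosine": cosine_similarity(reference_vec, candidate_vec),
        "euclidean": euclidean_distance(reference_vec, candidate_vec),
    }
```

In practice, cosine similarity ranks responses by semantic direction regardless of vector magnitude, while Euclidean distance is sensitive to magnitude as well; reporting both gives a fuller picture.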
LearnBeyond Consulting
Dec 2021 – Jan 2024
- Built scalable, real-time data streaming applications on Apache Pulsar, a distributed messaging system comparable to Apache Kafka.
- Developed tooling and automation for environments, containers, and build and deployment pipelines.
- Applied a deep understanding of messaging paradigms (pub/sub, queuing), delivery models, quality-of-service guarantees, and fault-tolerance architectures.
- Drew on a deep understanding of the Pulsar architecture and the interplay of its components: brokers, ZooKeeper, BookKeeper, producers/consumers, and streams.
- Defined security groups that acted as virtual firewalls controlling inbound traffic to one or more EC2 instances.
- Implemented and maintained the monitoring and alerting of production and corporate servers/storage using AWS CloudWatch.
- Developed scripts for build, deployment, maintenance, and related tasks using Jenkins, CloudFormation templates, and Bash.
- Created Kubernetes deployments, namespaces, pods, services, health checks, and persistent volumes.
- Implemented continuous integration (CI) and continuous delivery (CD) processes using Jenkins pipelines, with Python and shell scripts to automate routine jobs.
- Configured ServiceNow to receive instant notifications of any configuration changes in the cloud environment.
- Created Datadog dashboards for various applications and monitored real-time and historical metrics.
- Implemented centralized logging with Elasticsearch, Logstash, Kibana, and Prometheus, archiving logs to S3 buckets via Lambda functions.
- Troubleshot and fixed production and pre-production issues as needed, documenting and communicating resolution notes to other team members.
- Participated in 24x7 On-call rotation.
Environment: Jenkins, Pulsar, Kafka, AWS, Grafana, ELK, Bash, Python, GitLab
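The contrast between the two messaging paradigms mentioned above (pub/sub fan-out vs. competing-consumer queuing) can be illustrated with a toy in-memory broker. This is a didactic sketch only; `MiniBroker` is a hypothetical class, not the Pulsar API, which handles both paradigms via subscription modes on real brokers.

```python
from collections import defaultdict, deque

class MiniBroker:
    """Toy broker contrasting two delivery paradigms:
    pub/sub (every subscriber sees every message) vs.
    queuing (each message goes to exactly one consumer)."""

    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> subscriber callbacks
        self.queues = defaultdict(deque)      # queue name -> pending messages

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Pub/sub fan-out: deliver to every subscriber of the topic
        for cb in self.subscribers[topic]:
            cb(message)

    def enqueue(self, queue, message):
        self.queues[queue].append(message)

    def consume(self, queue):
        # Queuing semantics: each message is delivered at most once
        return self.queues[queue].popleft() if self.queues[queue] else None
```

In a real Pulsar deployment the same distinction surfaces as "exclusive"/"failover" versus "shared" subscription types, with brokers and BookKeeper providing the durability this sketch omits.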
Accenture
Jan 2014 – July 2018
- Worked as a Functional SAP Consultant.
- Handled SAP SD (Sales & Distribution) and SAP MM (Material Management) modules.
- Automated customizations using the LSMW tool.
- Configured shipping point determination, route determination, transport and delivery scheduling, backward delivery scheduling, and partial and complete deliveries.
- Worked on MTS (Make-to-Stock), MTO (Make-to-Order), Make-to-Order for Configurable Material, Stock Requirement/ MRP Lists, variant configuration, super BOMs, phantom items.
- Improved productivity by resolving change requests within the agreed turnaround time.
- Developed and executed test plans and test cases.
- Identified bugs, monitored defect-tracking systems, and tracked non-testable software items.
Environment: SAP SD, SAP MM, SAP Customization & Automation
Soniks Consulting
June 2013 – Dec 2013
- Performed day-to-day tasks such as monitoring log files and writing and running scripts to automatically watch resources such as CPU and memory.
- Created users and groups for specific departments and configured DHCP for dynamic IP address allocation.
- Wrote shell scripts for job automation, system monitoring, and error reporting.
- Managed SVN repositories for branching and merging.
- Used PuTTY to read, write, and execute Perl/shell scripts.
- Performed user acceptance testing (UAT) on all test scenarios before handing results off to the business.
- Worked with ServiceNow tool to handle change requests and incidents reported.
- Monitored servers and escalated emergency technical issues beyond scope to maintain optimum up-time.
- Designed and implemented infrastructure automation using Python and Bash scripting.
- Provided after-hours on-call support by participating in the on-call rotation.
- Prepared various statistical and financial reports using MS Excel.
- Communicated effectively, verbally and in writing, with project teams, stakeholders, and across departments.
Environment: Linux, Bash, MS Office, SVN
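The resource-watching scripts described above can be sketched in stdlib Python. This is an illustrative example, not the original scripts: the thresholds, function names, and `/` mount point are assumptions, and `os.getloadavg` is Unix-only.

```python
import os
import shutil

def check_disk(path="/", threshold_pct=90.0):
    # Warn when disk usage on `path` exceeds threshold_pct percent
    usage = shutil.disk_usage(path)
    used_pct = usage.used / usage.total * 100
    status = "WARN" if used_pct > threshold_pct else "OK"
    return status, round(used_pct, 1)

def check_load(threshold=None):
    # Compare the 1-minute load average against the CPU count (Unix only)
    load1, _, _ = os.getloadavg()
    limit = threshold if threshold is not None else os.cpu_count()
    status = "WARN" if load1 > limit else "OK"
    return status, load1

if __name__ == "__main__":
    # Print one status line per check, suitable for a cron job's log
    for name, (status, value) in {"disk": check_disk(), "load": check_load()}.items():
        print(f"{name}: {status} ({value})")
```

Run from cron, a script like this emits one line per resource, which is easy to grep in logs or to feed into an error-reporting step.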