Side Projects

The Lauren Condominiums

  • September 15, 2022

The Lauren Condominiums Washington DC

Sample Eye Surgery Center Ad Hoc Analysis

  • August 30, 2022

Eye Surgery Center

Deloitte Peaks Pike Analysis

  • April 23, 2021

Deloitte

Personal Website Version 1

  • May 20, 2019

Personal

TBE Sample Projects

Project Streamlining Process

  • May 2022

Teledyne Brown Engineering

Data Set Quality Tool

  • April 30, 2022

Teledyne Brown Engineering

L23A Surge tech and Surgeon Billet Analysis Visualization

  • February 2022

Teledyne Brown Engineering

Emergency Medicine and Orthopedic Surgery

  • November 2021

Teledyne Brown Engineering

R and Python COVID Vax Analysis

  • July 2021

Teledyne Brown Engineering

COVID Master Sheet Generator

  • May 2021

Teledyne Brown Engineering

Exploratory Data Analysis Dashboard

  • April 2021

Teledyne Brown Engineering

Ace Info Sample Projects

NDFD Usage Dashboard

  • May 2009

NOAA

NDFD Statistics Viewer

  • May 2009

NOAA

Mock QPFVS

  • May 2009

NOAA

First Sample Geo Maps

  • May 2009

NOAA

GWU Sample Projects

Automated White Blood Cell Differential

  • Integrated Information Systems Capstone - ISTM 6210
  • May 12, 2020

Developed a program that classifies White Blood Cells (WBCs) to perform differentials in the medical laboratory, offering a cheaper, faster, and more accurate alternative to manual differentials performed by medical laboratory technicians. The application opens to a home screen with login buttons for patients and medical personnel (technicians or doctors). Technicians can run new specimens through the machine learning classification algorithm, which automatically classifies and counts the white blood cells, and can view the images, database, patient information, and specimen results. Doctors can review cell images, comment on out-of-range specimen results, and finalize each result as normal or abnormal. Patients can view their results once matching patient information is entered. Used Python, Keras, and TensorFlow to create the machine learning models and PyQt for the graphical user interface. Used Agile methodology to plan, code, test, and review the product in sprints, a more effective approach here than the Waterfall method since many features were added, changed, and removed throughout the process. This type of software could be integrated with technologies that medical laboratory companies already use, such as Epic, Siemens, Sysmex, and Abbott.
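A minimal sketch of how such a Keras/TensorFlow image classifier could be wired up is shown below; the image size, class list, directory layout, and network shape are illustrative assumptions, not the project's actual code.

```python
# Illustrative sketch only: a small Keras CNN for classifying WBC images into
# common differential classes. Paths, image size, and class names are assumed.
from tensorflow import keras
from tensorflow.keras import layers

IMG_SIZE = (128, 128)
CLASSES = ["neutrophil", "lymphocyte", "monocyte", "eosinophil", "basophil"]

# Load labeled cell images from folders named after each class (assumed layout).
train_ds = keras.utils.image_dataset_from_directory(
    "wbc_images/train", image_size=IMG_SIZE, batch_size=32)
val_ds = keras.utils.image_dataset_from_directory(
    "wbc_images/val", image_size=IMG_SIZE, batch_size=32)

model = keras.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(len(CLASSES), activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```

Counting the predicted class of each cell image in a specimen would then yield the automated differential.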

Traffic Accidents Dashboard

  • Data Mining - DATS 6103
  • April 28, 2020

This application is a dashboard that analyzes and visualizes a US Accidents dataset with 2.9 million observations and 49 features. The data is captured by a variety of entities, such as the US and state departments of transportation, law enforcement agencies, traffic cameras, and traffic sensors within the road networks. The back end preprocesses a randomly sampled subset using imputation methods, Principal Component Analysis, a standard scaler, and grid search with cross-validation. It also performs Exploratory Data Analysis (EDA), displaying histograms, correlograms, scatterplots, and summary statistics for the variables. The tool lets the user build models with different machine learning algorithms and displays each model's accuracy, and the user can then apply the models to predict the severity of accidents when specific arguments are passed as parameters. The prediction uses models such as Decision Trees, Random Forest, Logistic Regression, K-Nearest Neighbors, Support Vector Machine, and Linear Regression. PyQt was used for the graphical user interface, along with a few lines of JavaScript for the map visualization, and Folium was used to render the geo map of data points with a clustering algorithm that groups points with close coordinates. The development approach was to build the code while continuously testing all of its working features in every possible order: once a working feature was developed, the whole application was run and tested to ensure all features functioned properly, and fixes were made immediately upon discovery of bugs to save time in the development and testing phase.
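The preprocessing-and-modeling pipeline described above could be sketched with scikit-learn roughly as follows; the file name, column names, and hyperparameter grid are assumptions for illustration only.

```python
# Illustrative sketch of the back-end pipeline: imputation, scaling, PCA, and
# grid search with cross-validation, then severity prediction. File and column
# names are assumed, not the project's actual code.
import pandas as pd
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv("us_accidents_sample.csv")          # randomly sampled subset
X = df.drop(columns=["Severity"]).select_dtypes("number")
y = df["Severity"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

pipe = Pipeline([
    ("impute", SimpleImputer(strategy="median")),
    ("scale", StandardScaler()),
    ("pca", PCA(n_components=0.95)),                  # keep 95% of the variance
    ("model", RandomForestClassifier()),
])
grid = GridSearchCV(pipe,
                    {"model__n_estimators": [100, 300],
                     "model__max_depth": [None, 10]},
                    cv=5)
grid.fit(X_train, y_train)
print("Test accuracy:", grid.score(X_test, y_test))
```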

Instagram Facial Recognition

  • Machine Learning I - DATS 6202
  • August 20, 2019

Built a facial recognition application using Python and Plotly Dash that detects and recognizes faces in the webcam feed using Haar Cascades and Histogram of Oriented Gradients (HOG) algorithms. The program also uses web scraping to pull information from Instagram and display it on screen, and relies on OpenCV, a computer vision package. The final application was the product of continuously building on top of previous features: starting from face detection in pictures, then video detection, live video detection, live video classification, web scraping, display management, and finally GUI functions such as model training and different classifier algorithms (not shown in this video).
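As a rough illustration of the Haar Cascade detection step in that pipeline (the recognition, classification, and Instagram scraping stages are omitted), a minimal OpenCV webcam loop might look like this:

```python
# Illustrative sketch: webcam face detection with OpenCV's bundled Haar cascade.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                      # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:                 # draw a box around each face
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):      # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```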

Machine Learning Function Approximation

  • Machine Learning I - DATS 6202
  • October 20, 2019

This application approximates a given function by learning from its mistakes over time. The blue dotted line represents the given function and the red line represents the machine. The red line is initialized with random numbers and eventually approximates the given function by measuring how much error it has made at each epoch (iteration). Coded in Python using early stopping (comparing training and validation errors) to avoid over-fitting the model, and back-propagation of sensitivities to update the weights and biases. The program outputs the number of epochs so that the effect of the number of hidden-layer neurons and the learning rate on convergence can be examined.
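A small NumPy sketch of the same idea follows, assuming a sine target, a single tanh hidden layer, and a patience-based early-stopping rule; none of these specifics are taken from the original assignment.

```python
# Illustrative sketch: one-hidden-layer network trained by backpropagation to
# approximate a target function, with early stopping on a validation split.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-2, 2, 200).reshape(-1, 1)
y = np.sin(np.pi * x)                               # assumed target function
idx = rng.permutation(len(x))
train, val = idx[:150], idx[150:]

hidden = 10
W1, b1 = rng.normal(size=(1, hidden)), np.zeros(hidden)
W2, b2 = rng.normal(size=(hidden, 1)), np.zeros(1)
lr, best_val, patience, wait = 0.05, np.inf, 50, 0

for epoch in range(5000):
    h = np.tanh(x @ W1 + b1)                        # forward pass: tanh hidden, linear output
    pred = h @ W2 + b2
    err = pred - y
    # backpropagate sensitivities to get gradients (training split only)
    dW2 = h[train].T @ err[train] / len(train)
    db2 = err[train].mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    dW1 = x[train].T @ dh[train] / len(train)
    db1 = dh[train].mean(axis=0)
    W1 -= lr * dW1
    b1 -= lr * db1
    W2 -= lr * dW2
    b2 -= lr * db2
    # early stopping: halt when validation error stops improving
    val_err = float(np.mean(err[val] ** 2))
    if val_err < best_val:
        best_val, wait = val_err, 0
    else:
        wait += 1
        if wait >= patience:
            break

print("stopped after", epoch + 1, "epochs; validation MSE:", best_val)
```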

Machine Learning Linear Classification

  • Machine Learning I - DATS 6202
  • July 24, 2019

The program was created using Python and machine learning algorithms to classify inputs based on their targets (supervised learning). It draws a straight line that separates and correctly classifies the two classes, as long as they are linearly separable.
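For illustration, a perceptron-style learning rule on synthetic, linearly separable data captures the same behavior; the data and update rule below are assumptions, not the assignment's code.

```python
# Illustrative sketch: a perceptron learns a weight vector whose decision
# boundary is a straight line separating two linearly separable classes.
import numpy as np

rng = np.random.default_rng(1)
# two linearly separable synthetic clusters
X = np.vstack([rng.normal([0, 0], 0.5, (50, 2)),
               rng.normal([3, 3], 0.5, (50, 2))])
t = np.hstack([np.zeros(50), np.ones(50)])           # class targets

w, b = np.zeros(2), 0.0
for _ in range(100):                                  # perceptron learning rule
    for xi, ti in zip(X, t):
        pred = 1.0 if xi @ w + b >= 0 else 0.0
        w += (ti - pred) * xi                         # update only on mistakes
        b += (ti - pred)

# the learned boundary is the line w[0]*x + w[1]*y + b = 0
print("weights:", w, "bias:", b)
```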

FIFA 2019 Data Analysis

  • Introduction to Data Science - DATS 6101
  • November 19, 2019

Performed statistical analysis to determine correlation and significance of findings on the FIFA 2019 soccer player dataset of 18,207 records with 89 variables. Used R programming along with ANOVA, Tukey testing, regression, QQ-plots, Leaflet data visualization, outlier data cleanup, Chi-Square testing, Z-tests, and T-tests.
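The project itself was done in R; purely as an illustration of the same kind of tests (a one-way ANOVA followed by a Tukey HSD), an equivalent Python sketch with an assumed players table and column names might look like this:

```python
# Illustrative sketch only: ANOVA and Tukey HSD on an assumed FIFA players
# table. The original analysis used R; file and column names are assumptions.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

players = pd.read_csv("fifa19_players.csv")
players = players.dropna(subset=["position", "wage"])

# One-way ANOVA: does mean wage differ across player positions?
model = ols("wage ~ C(position)", data=players).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey HSD: which pairs of positions differ significantly?
print(pairwise_tukeyhsd(players["wage"], players["position"]))
```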

NYC Airbnb Data Analysis

  • Introduction to Data Science - DATS 6101
  • October 15, 2019

Performed statistical analysis to determine correlation and significance of findings on the New York City Airbnb dataset of 48,896 records with 16 attributes. Used R programming along with ANOVA, Tukey testing, regression, QQ-plots, Leaflet data visualization, outlier data cleanup, Chi-Square testing, Z-tests, and T-tests.

Java Alien Attack Game

  • Software Engineering I - CSCI 2113
  • December 09, 2018

Built a game called “Alien Attack” using Java with JFrame GUI components for the user interface. Used objects and classes to create the game and a Unified Modeling Language (UML) diagram to visualize the relationships, attributes, and functions of the classes. The project approach was to build an object-oriented diagram plan before execution; this plan was revised as needed as the project progressed.

GWU Hospital OCTAVE Allegro Risk Assessment

  • Information Systems Security - ISTM 6206
  • December 05, 2017

Assessed information security risks at The George Washington University Hospital. Performed an OCTAVE Allegro risk assessment by establishing risk measurement criteria, developing information asset profiles, identifying information asset containers, identifying areas of concern, identifying threat scenarios, identifying risks, analyzing risks, and finally selecting a mitigation approach.

Game Engine 3D Environment

  • Computer Graphics - CSCI 4554
  • April 2019

This 3D environment was rendered using the Unity game engine.

Open GL 3D Rendering

  • Computer Graphics - CSCI 4554
  • April 2019

This 3D environment was rendered using OpenGL and C++. The specific techniques used are texture mapping, backface removal, illumination, and camera movement via keyboard inputs.