Developed a large language model (LLM) pipeline that achieved a 60% reduction in processing time for medical entity and information extraction.
Assisted in teaching undergraduate courses in Software Engineering, Statistical Modeling, and Information Retrieval, mentoring students and grading assignments.
Currently engaged in research on Retrieval-Augmented Generation (RAG) to develop advanced systems designed specifically for researchers.
The project analyzes user reviews of the Threads Instagram app and develops initial predictive models, exploring different text processing techniques and machine learning models while identifying and addressing data imbalance and overfitting issues.
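One common baseline for the class-imbalance issue mentioned above is random oversampling of minority classes before training. The sketch below is illustrative only; the function name `random_oversample` and the interface are assumptions, not the project's actual code.

```python
import random
from collections import Counter

def random_oversample(texts, labels, seed=0):
    """Duplicate minority-class examples (with replacement) until every class
    matches the majority-class count. A simple imbalance baseline; assumed, not
    the project's actual approach."""
    rng = random.Random(seed)
    by_class = {}
    for t, y in zip(texts, labels):
        by_class.setdefault(y, []).append(t)
    target = max(len(v) for v in by_class.values())  # majority-class size
    out_texts, out_labels = [], []
    for y, items in by_class.items():
        resampled = items + [rng.choice(items) for _ in range(target - len(items))]
        out_texts += resampled
        out_labels += [y] * target
    return out_texts, out_labels
```

Oversampling is applied only to the training split, never the evaluation split, to avoid leaking duplicated examples into the test metrics.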
In this project, we developed multilingual machine learning models to classify toxicity in online conversations using English-only training data, with the goal of creating fair and effective tools to support healthier discussions across different languages.
This project examines the effect of weather on Walmart sales by combining and analyzing sales and weather data, using clustering algorithms to group weather stations, and building a machine learning model to predict sales, ultimately aiding in optimizing inventory management and business strategies.
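Grouping weather stations as described can be done with plain k-means over station features (e.g. coordinates or weather summaries). The following is a minimal sketch, assuming a NumPy array of station feature vectors; it is not the project's actual implementation.

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: assign each point to its nearest centroid, then
    recompute centroids as cluster means. `points` is a float array
    of shape (n_points, n_features)."""
    rng = np.random.default_rng(seed)
    # initialize centroids from k distinct data points
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # squared distance from every point to every centroid
        d = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = points[labels == j].mean(axis=0)
    return labels, centroids
```

The cluster label can then be fed to the sales model as a categorical feature in place of the raw station identifier.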
This project solves the sliding-puzzle game using three different search algorithms: Iterative Deepening, Breadth-First Search (BFS), and A* (A-Star). The goal is to rearrange a given grid of tiles (3x3 or 4x4) into ascending order by moving adjacent tiles vertically or horizontally into the empty space.
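The A* variant above can be sketched as follows for the 3x3 case, using the standard Manhattan-distance heuristic. This is a minimal illustrative version, not the project's actual code; states are flat tuples with 0 as the empty square.

```python
import heapq

def manhattan(state, size):
    """Sum of each tile's Manhattan distance from its goal cell (0 = empty)."""
    dist = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal = tile - 1  # goal layout: 1..size*size-1, then the empty square
        dist += abs(idx // size - goal // size) + abs(idx % size - goal % size)
    return dist

def astar(start, size=3):
    """Return the length of an optimal solution from `start`, or -1 if none found."""
    goal = tuple(list(range(1, size * size)) + [0])
    frontier = [(manhattan(start, size), 0, start)]  # (f = g + h, g, state)
    best = {start: 0}
    while frontier:
        f, g, state = heapq.heappop(frontier)
        if state == goal:
            return g
        if g > best.get(state, float("inf")):
            continue  # stale queue entry
        blank = state.index(0)
        r, c = divmod(blank, size)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < size and 0 <= nc < size:
                nxt = list(state)
                nblank = nr * size + nc
                nxt[blank], nxt[nblank] = nxt[nblank], nxt[blank]
                nxt = tuple(nxt)
                if g + 1 < best.get(nxt, float("inf")):
                    best[nxt] = g + 1
                    heapq.heappush(frontier, (g + 1 + manhattan(nxt, size), g + 1, nxt))
    return -1
```

Because Manhattan distance is admissible, A* returns optimal move counts, while BFS trades memory for the same guarantee and Iterative Deepening trades time for low memory.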
This project is an implementation of a chess-inspired game called Knight's Move. The game incorporates agile development and software quality assurance principles to create a fun and engaging experience. It is developed in Java and follows the Model-View-Controller (MVC) architectural pattern.
I implemented the model from the "Attention Is All You Need" paper using the NumPy and TensorFlow libraries. This implementation is from scratch, without using any pre-built Transformer or attention blocks. I also used ChatGPT to refine the language (English) and improve the clarity of the code.
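The core building block of that paper is scaled dot-product attention, which in NumPy reduces to a few lines. The sketch below shows the general formula softmax(QK^T / sqrt(d_k))V; it is an illustrative reimplementation, not an excerpt from the project itself.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax: subtract the max before exponentiating
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V, mask=None):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, per the paper.
    Q, K, V: arrays of shape (..., seq_len, d_k); mask: optional boolean
    array, True where attention is allowed."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)
    if mask is not None:
        scores = np.where(mask, scores, -1e9)  # suppress disallowed positions
    weights = softmax(scores, axis=-1)
    return weights @ V, weights
```

Multi-head attention then splits d_model into several such heads, applies this function per head, and concatenates the results through a final linear projection.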
I implemented a Decision Tree algorithm for both classification and regression tasks, as well as an AdaBoost classifier algorithm in Python. The aim of this project is to understand the core concepts behind these machine learning algorithms and to practice their implementation from scratch.
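The AdaBoost idea can be illustrated compactly with decision stumps as weak learners: each round fits a stump to weighted data, then upweights the samples it misclassified. This is a minimal sketch of the algorithm for binary labels in {-1, +1}, not the project's actual code.

```python
import numpy as np

def fit_stump(X, y, w):
    """Best single-feature threshold classifier under sample weights w."""
    best = (0, 0.0, 1, np.inf)  # (feature, threshold, polarity, weighted error)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(X[:, j] <= thr, pol, -pol)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, thr, pol, err)
    return best

def adaboost(X, y, rounds=10):
    """Return a list of (stump, alpha) pairs fitted by AdaBoost."""
    n = len(y)
    w = np.full(n, 1.0 / n)  # start with uniform sample weights
    ensemble = []
    for _ in range(rounds):
        j, thr, pol, err = fit_stump(X, y, w)
        err = max(err, 1e-12)  # avoid division by zero for a perfect stump
        alpha = 0.5 * np.log((1 - err) / err)  # stump's vote weight
        pred = np.where(X[:, j] <= thr, pol, -pol)
        w *= np.exp(-alpha * y * pred)  # upweight misclassified samples
        w /= w.sum()
        ensemble.append(((j, thr, pol), alpha))
    return ensemble

def predict(ensemble, X):
    """Weighted vote of all stumps; sign gives the final class."""
    score = np.zeros(len(X))
    for (j, thr, pol), alpha in ensemble:
        score += alpha * np.where(X[:, j] <= thr, pol, -pol)
    return np.sign(score)
```

The decision-tree half of the project generalizes the stump by recursing on the best split, using impurity (e.g. Gini) for classification and variance reduction for regression.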
Completed the IBM Professional Certificate in Machine Learning, gaining expertise in essential Machine Learning techniques, including Unsupervised Learning, Supervised Learning, Deep Learning, and Reinforcement Learning, with practical experience in Time Series and Survival Analysis.
Completed the Deep Learning Specialization, gaining expertise in neural network architectures, including CNNs, RNNs, LSTMs, and Transformers, along with strategies such as Dropout and BatchNorm. I applied these skills using Python and TensorFlow in real-world cases such as speech recognition and NLP.
Completed the Mathematics for Machine Learning and Data Science Specialization, gaining a strong foundation in core mathematics, including linear algebra, calculus, probability, and statistics, essential for advancing in AI and machine learning.
Completed the course "Natural Language Processing with Classification and Vector Spaces," where I gained skills in using logistic regression, naïve Bayes, and word vectors for tasks such as sentiment analysis, analogy completion, and word translation. I also developed expertise in machine translation, locality-sensitive hashing, and vector space models.
Completed the course "Natural Language Processing with Probabilistic Models," where I gained skills in using dynamic programming, hidden Markov models, and word embeddings for tasks such as autocorrect, autocomplete, and part-of-speech tagging. I also developed expertise in N-gram language models and Word2vec.
Completed the course "Natural Language Processing with Sequence Models," where I gained skills in using RNNs, LSTMs, GRUs, and Siamese networks for tasks such as sentiment analysis, text generation, and named entity recognition. I also developed expertise in word embeddings and natural language generation.