Independent research project building a machine learning system to recognise and classify sign language gestures from image datasets. The system applies image preprocessing pipelines, feature extraction, and classification models trained on labelled gesture data, iterating on model accuracy and generalisation to work toward a real-world assistive tool for the deaf and hard-of-hearing community. Research combines computer vision techniques with accessibility-focused evaluation criteria.
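The pipeline described above — preprocessing, feature extraction, then a classifier trained on labelled gesture data — can be sketched in miniature. This is a toy illustration, not the project's actual code: the synthetic "gesture" images, the block-mean features, and the Random Forest choice are all stand-ins assumed here for the sake of a runnable example (scikit-learn and NumPy assumed available).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def preprocess(img):
    # Min-max normalise intensities to [0, 1] to damp lighting variation
    img = img.astype(float)
    return (img - img.min()) / (img.max() - img.min() + 1e-8)

def extract_features(img):
    # Toy features: coarse 4x4 block means plus mean gradient magnitude
    blocks = img.reshape(4, 8, 4, 8).mean(axis=(1, 3)).ravel()
    gy, gx = np.gradient(img)
    return np.append(blocks, np.sqrt(gx**2 + gy**2).mean())

def make_image(label):
    # Synthetic stand-in for a gesture image: bright region in one quadrant
    img = rng.random((32, 32))
    if label == 0:
        img[:16, :16] += 2.0
    else:
        img[16:, 16:] += 2.0
    return img

labels = rng.integers(0, 2, size=200)
X = np.array([extract_features(preprocess(make_image(y))) for y in labels])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```

The held-out split here is the benchmark-accuracy half of the story; the generalisation problems the project grapples with show up precisely when test images stop resembling the training distribution.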
Systematic analysis of 47 published case studies across North American, European, and Asia-Pacific residential property markets. Applied Random Forest modelling, regression analysis, and time series analysis within a cross-validation framework with uncertainty quantification, to assess how digital systems affect operational efficiency across diverse organisational contexts.
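One way to combine cross-validation with uncertainty quantification, as described above, is to repeat k-fold splits and report a percentile interval over the fold scores rather than a single point estimate. A minimal sketch, assuming scikit-learn; the synthetic features and outcome below are invented stand-ins for the case-study data, not the study's variables:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RepeatedKFold, cross_val_score

rng = np.random.default_rng(42)
# Stand-in for 47 case studies: 5 features and an efficiency outcome
X = rng.normal(size=(47, 5))
y = X @ np.array([1.5, -2.0, 0.5, 0.0, 1.0]) + rng.normal(scale=0.5, size=47)

# Repeated 5-fold CV gives a distribution of scores, not one number
cv = RepeatedKFold(n_splits=5, n_repeats=10, random_state=0)
scores = cross_val_score(
    RandomForestRegressor(n_estimators=100, random_state=0),
    X, y, cv=cv, scoring="r2",
)

# Uncertainty quantification: a percentile interval over fold scores
lo, hi = np.percentile(scores, [2.5, 97.5])
print(f"R^2 = {scores.mean():.2f}, 95% interval [{lo:.2f}, {hi:.2f}]")
```

With only 47 observations, the width of that interval is often more informative than the mean score itself, which is why the uncertainty reporting matters for small cross-study samples.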
When I started GestureKey, I assumed the hard part would be the model architecture. It wasn't. The hard part was the data — specifically, how inconsistent lighting, hand angle, and background noise in labelled datasets can silently destroy your model's ability to generalise. Here's what the first few months of building an accessibility-focused computer vision system taught me about the gap between benchmark accuracy and real-world performance.
Most portfolio pages say "built a full-stack system." This is what that actually meant: the data model, the API structure, how I handled the admin dashboard without a dedicated analytics service, and the decisions I'd make differently next time.
Running a cross-validation framework across 47 international case studies sounds clean on a CV. The reality involved messy data, conflicting methodologies across papers, and a lot of decisions about how to handle uncertainty quantification when your sources disagree. A reflection on the research process.
Notes from applying natural language processing to real text datasets, and the difference between what LLM papers describe and what working with raw text actually requires.
Building ML systems for accessibility means your evaluation criteria are different from benchmark tasks. A short essay on why optimising for real-world use by the deaf community changes how you think about model performance.
Full posts coming soon — get in touch if you'd like to discuss any of these topics.