subsampwinner

Feature selection tooling built around the Subsampling Winner Algorithm, with an emphasis on stability under repeated resampling.

subsampwinner packages a feature-selection workflow motivated by the instability of finite-sample model fitting: which predictors get selected can change when the data are perturbed. The goal is not only to find a sparse set of predictors, but to understand how stable those selections are across repeated subsamples of the observed data.
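The stability idea can be sketched as follows. This is an illustrative toy, not the package's actual API: the function name `selection_frequencies` is hypothetical, and a simple correlation screen stands in for whatever base selector the Subsampling Winner Algorithm uses. The point is only the mechanism of tallying how often each feature "wins" across random subsamples.

```python
import random
from statistics import mean

def abs_corr(xs, ys):
    # Magnitude of the Pearson correlation between two equal-length lists.
    mx, my = mean(xs), mean(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    if vx == 0 or vy == 0:
        return 0.0
    return abs(cov / (vx * vy) ** 0.5)

def selection_frequencies(X, y, k=2, n_subsamples=200, frac=0.5, seed=0):
    # X: list of rows (one list of feature values per observation).
    # Repeatedly draw a random subsample of the rows, rank features by a
    # score computed on that subsample (here: |correlation| with y), and
    # tally how often each feature lands in the top-k "winners".
    rng = random.Random(seed)
    n, p = len(X), len(X[0])
    counts = [0] * p
    m = max(2, int(frac * n))
    for _ in range(n_subsamples):
        idx = rng.sample(range(n), m)
        scores = [abs_corr([X[i][j] for i in idx], [y[i] for i in idx])
                  for j in range(p)]
        winners = sorted(range(p), key=lambda j: -scores[j])[:k]
        for j in winners:
            counts[j] += 1
    # Selection frequency per feature: near 1.0 means a stable winner.
    return [c / n_subsamples for c in counts]

# Synthetic data: y depends on features 0 and 1; features 2-4 are noise.
rng = random.Random(42)
X = [[rng.gauss(0, 1) for _ in range(5)] for _ in range(120)]
y = [3 * row[0] - 2 * row[1] + rng.gauss(0, 0.5) for row in X]
freqs = selection_frequencies(X, y, k=2)
```

Features with genuine signal should be selected in nearly every subsample, while noise features appear only sporadically; it is this frequency profile, rather than any single fitted model, that the stability perspective cares about.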

The project sits at the intersection of statistical learning, reproducibility, and practical tooling. It reflects a broader theme in my work: a method is only useful if its behavior stays legible under perturbation, noise, and imperfect data pipelines.

Open to new collaborations

Looking for work that connects statistical rigor with practical systems.