Working Group FDA
Last update: 2024-11-11
Please contact Fabian Scheipl if you're interested in one of these BA or MA thesis topics or if you want to discuss (related) ideas of your own.
For BA theses, we would keep the focus on refactoring, evaluating and describing existing implementations and/or applying them to real data. For MA theses, we would expect either novel developments with detailed theory and clean, performant implementations, or challenging analyses of more complex data sets with advanced methods.
tidyfun is an R package for functional data analysis currently under development. Some of the issues tracked on GitHub for this and its underlying infrastructure package tf could also be good topics for theses.
Topic: Implementing and comparing functional principal component-based representations for functional data (BA/MA)
Functional data can be represented parsimoniously in terms of functional principal components (FPCs), and many different techniques for estimating such FPC representations have been proposed. In this topic, you will survey these techniques, (re-)implement some of them for use in tf (or: write glue code to integrate existing implementations into tf), and compare their performance in an extensive benchmark study on real and synthetic data sets.
The topic is suitable for a wide range of programming skills and scientific ambitions. If both are high, it could even grow into something publishable, especially if extended to FPC representations for non-Gaussian data (Dey et al., 2024) or to NN-based covariance estimators (Sarkar et al., 2022).
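To make the starting point concrete, here is a minimal base-R sketch of the most basic FPC construction for densely observed curves on a common grid (center, estimate the covariance, eigendecompose, truncate). The toy data, grid and truncation level are arbitrary choices; the implementations to be surveyed (e.g. refund's fpca.sc or fpca.face) additionally handle smoothing, noise and sparse/irregular designs.

```r
# Minimal sketch of a basic FPC representation for curves on a common grid
# (no smoothing, no missingness) -- real implementations do much more.
set.seed(1)
grid <- seq(0, 1, length.out = 101)
n <- 50
# toy data: random combinations of two smooth components plus white noise
Y <- outer(rnorm(n), sin(2 * pi * grid)) +
  outer(rnorm(n, sd = 0.5), cos(2 * pi * grid)) +
  matrix(rnorm(n * length(grid), sd = 0.1), n)

mu     <- colMeans(Y)                      # estimated mean function
Yc     <- sweep(Y, 2, mu)                  # centered curves
covY   <- crossprod(Yc) / (n - 1)          # pointwise covariance estimate
eig    <- eigen(covY, symmetric = TRUE)
w      <- diff(grid)[1]                    # quadrature weight (equidistant grid)
K      <- 2                                # truncation level
efuns  <- eig$vectors[, 1:K] / sqrt(w)     # eigenfunctions with unit L2 norm
scores <- Yc %*% efuns * w                 # FPC scores via numerical integration
Yhat   <- sweep(scores %*% t(efuns), 2, mu, "+")  # rank-K reconstruction
mean((Y - Yhat)^2)                         # reconstruction error
```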
The functional data literature contains many possible definitions of “function-valued quantiles”. We would pick out some of the most relevant/interesting of these, summarize the relevant theory behind them, implement them for use within tidyfun, and perform a comparison based on real and/or synthetic data sets.
A minimal BA thesis in this topic area would be re-implementing, documenting and validating (most of) the methods in the rainbow package, integrated into / as an add-on package for tf & tidyfun.
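As a point of reference, the simplest such definition is the pointwise quantile curve; the sketch below computes these for toy curves on a common grid. The depth-based and transformation-based notions from the literature (and the rainbow methods) go well beyond this, but would be compared against something like it.

```r
# Minimal sketch: pointwise quantile curves on a common grid -- the simplest of
# the many "function-valued quantile" definitions.
set.seed(2)
grid <- seq(0, 1, length.out = 101)
Y <- t(replicate(40, sin(2 * pi * grid) + rnorm(1, sd = 0.5) + rnorm(101, sd = 0.2)))
probs <- c(0.1, 0.5, 0.9)
Q <- apply(Y, 2, quantile, probs = probs)   # one quantile curve per row of Q
matplot(grid, t(Y), type = "l", lty = 1, col = "grey80", ylab = "y(t)")
matlines(grid, t(Q), lty = 1, lwd = 2)
```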
Extend tf classes and methods for multivariate functions with vector outputs: This is a large SWE task. The scope would probably be limited to extending either the tfd or tfb classes, and it may require some major refactoring of tf to make such an extension work smoothly and consistently (e.g., it probably requires defining new classes and logic for function domains).
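Purely to illustrate the design space, here is a hypothetical, stand-alone sketch of one possible container for vector-valued functional data: one evaluation matrix per output dimension plus a shared grid, with a simple S3 arithmetic method. The class and constructor names are made up; an actual extension would have to integrate with tf's existing tfd/tfb internals, subsetting, arithmetic and plotting rather than stand alone like this.

```r
# Hypothetical sketch of a container for vector-valued functional data.
new_tfd_vec <- function(values, arg) {
  stopifnot(is.list(values), length(arg) == ncol(values[[1]]))
  structure(list(values = values, arg = arg), class = "tfd_vec")
}

# example: 10 curves mapping [0, 1] to R^2
grid <- seq(0, 1, length.out = 51)
f <- new_tfd_vec(
  values = list(
    x = t(replicate(10, sin(2 * pi * grid) + rnorm(51, sd = 0.1))),
    y = t(replicate(10, cos(2 * pi * grid) + rnorm(51, sd = 0.1)))
  ),
  arg = grid
)

# pointwise addition of two vector-valued functional data objects
"+.tfd_vec" <- function(e1, e2) {
  stopifnot(identical(e1$arg, e2$arg), length(e1$values) == length(e2$values))
  new_tfd_vec(Map(`+`, e1$values, e2$values), e1$arg)
}
g <- f + f   # each output dimension doubled
```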
The Bayes space paradigm developed by van den Boogaart, Hron, Egozcue and others (e.g. van den Boogaart et al. (2014), Hron et al. (2016)) provides a way to represent probability measures so that their addition and multiplication are well defined, enabling simple summary statistics (means, etc.) as well as methods such as PCA or linear regression for probability-density-valued data – i.e. the unit of observation is represented by an entire probability distribution, not a single value, and the inferential goal is typically to understand how covariates are associated with changes in these distributions. This has many interesting applications: see, for example, Meier et al. (2021) for differential effects of family formation on gender-specific income distributions in East and West Germany, or Menafoglio et al. (2021) for an application to groundwater monitoring.
A thesis on this topic would
- summarize the necessary theoretical background and literature
- implement functionality for tf and tidyfun that represents density data and performs arithmetic operations as well as basic statistics in Bayes space (e.g. also implement suitable ZB-splines, see Skorna et al. (2024)); a minimal clr-transform sketch follows below this list
- apply this to an interesting real-world data set (or: replicate a published analysis in this context with the new implementation)
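For intuition, the sketch below shows the basic mechanics via the centred log-ratio (clr) transform, which maps densities on a common grid into (a subspace of) L2 where ordinary averaging applies. The toy Beta densities and the simple Riemann-sum normalization are illustrative choices only.

```r
# Minimal sketch: Bayes-space mean of densities via the clr transform on an
# equidistant grid. clr(f) = log f - mean(log f); averaging in Bayes space
# becomes ordinary averaging of clr curves.
set.seed(3)
grid <- seq(0.01, 0.99, length.out = 99)
w <- diff(grid)[1]
# toy data: 20 Beta densities with varying parameters, renormalized on the grid
shapes <- cbind(runif(20, 2, 6), runif(20, 2, 6))
dens <- t(apply(shapes, 1, function(p) {
  f <- dbeta(grid, p[1], p[2])
  f / sum(f * w)
}))
clr   <- log(dens) - rowMeans(log(dens))     # clr transform, row-wise
m_clr <- colMeans(clr)                       # ordinary mean in the clr (L2) space
m     <- exp(m_clr) / sum(exp(m_clr) * w)    # back-transform: Bayes-space mean density
matplot(grid, t(dens), type = "l", lty = 1, col = "grey80", ylab = "density")
lines(grid, m, lwd = 2)
```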
Topics: Write tidyfun scripts for Crainiceanu et al.'s “Functional Data Analysis with R” / Ramsay et al.'s “Functional Data Analysis with R and MATLAB” (BA)
Both of these books contain many chapters, data sets and case studies that could also be done (mostly) using tidyfun and/or refund. We'll select some of them, you'll identify and implement missing functionality in tidyfun with my help, and write them up with all the necessary theoretical background and some extensions, in an online document / as vignettes for tidyfun.
Books: Crainiceanu et al. (2024), Ramsay et al. (2009)
Pegoraro & Secchi (2023) develop representations of (noisy, heterogeneous) functional data that are invariant to misalignment, i.e. representations that are suitable for comparing and analyzing the shapes of unregistered curves while discarding even fairly complex phase variability. In this thesis, you would summarize the relevant mathematical background, implement the techniques from the paper in R and evaluate them on some data.
This topic involves quite advanced and interesting mathematics from topology, metric spaces, graph theory and differential geometry. The paper to implement is state of the art, which makes this an excellent topic for people considering staying in academia and looking for a thesis topic that might turn into something publishable, especially if any of the stretch goals below become part of the thesis.
Tasks would include:
- summarizing/explaining the maths behind the method
- re-implementing the algorithms and visualizations from the paper in R, preferably using infrastructure of / integrated into tf/tidyfun
- benchmarking against similar approaches available in R
- evaluating the performance on real-world datasets (e.g. mouse brain stem audiograms, bodyweight fitness movement patterns, story arc data, …)
Stretch goals could include developing a variant of Kim, Dasgupta, Srivastava (2023) based on merge trees instead of peak persistence diagrams, or accommodating functional fragments/unequal domains with functions of different observed lengths.
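To give a flavour of the kind of topological summaries involved, here is a small union-find sketch of 0-dimensional superlevel-set persistence (“peak persistence”) for a single sampled curve. This is only the most basic building block of such methods, not the representation developed in the paper.

```r
# Sketch: 0-dimensional superlevel-set persistence of a sampled curve via
# union-find. Each peak is "born" at its height and "dies" when its component
# merges into that of a higher peak; persistent peaks are the shape features
# that survive misalignment.
superlevel_persistence <- function(y) {
  n <- length(y)
  parent <- rep(NA_integer_, n)              # union-find parent; NA = not yet active
  birth  <- rep(NA_real_, n)                 # birth level of the component rooted at i
  find <- function(i) { while (parent[i] != i) i <- parent[i]; i }
  pairs <- list()
  for (i in order(y, decreasing = TRUE)) {   # sweep the threshold downwards
    parent[i] <- i; birth[i] <- y[i]
    for (j in c(i - 1L, i + 1L)) {
      if (j < 1L || j > n || is.na(parent[j])) next
      ri <- find(i); rj <- find(j)
      if (ri == rj) next
      # the component whose peak is lower dies at the current level
      older   <- if (birth[ri] >= birth[rj]) ri else rj
      younger <- if (older == ri) rj else ri
      if (birth[younger] > y[i])             # skip zero-persistence pairs
        pairs[[length(pairs) + 1L]] <- c(birth = birth[younger], death = y[i])
      parent[younger] <- older
    }
  }
  do.call(rbind, pairs)                      # the global maximum never dies
}

grid <- seq(0, 1, length.out = 400)
y <- sin(6 * pi * grid) + 0.3 * sin(20 * pi * grid)
superlevel_persistence(y)                    # one (birth, death) pair per non-global peak
```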
manifun is a small, unpublished R package for dimension reduction and embedding visualization (primarily) for functional data. Possible tasks include implementing suitable interfaces to mlr3 and/or tidyfun. Implementing the AUMVC framework (see below) could be included in this topic area as well.
The central goal of the project is to improve existing and implement new embedding (i.e. dimension reduction) visualization approaches. Fairly flexible, interactive versions of this kind of visualization, with e.g. tooltips/interactive highlighting when hovering over specific data points and brushing for selecting and highlighting specific embedding regions or curves, have already been implemented in a previous MA thesis (EmbedIt, Jennert 2023).
Thesis goals could include:
- Re-implementing EmbedIt based on more performant software like D3.js, or refactoring it for better responsiveness etc.
- Adding interactive 3D visualizations
- Implementing a “grand tour” and other classic multivariate exploration tools (cf. tourr)
- Adding pre-processing and embedding steps to the existing app
- Writing up interesting case studies based on real-world datasets (e.g. mouse brain stem audiograms, bodyweight fitness movement patterns, story arc data, …)
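As a baseline for what such a tool computes, the sketch below builds a 2D MDS embedding of (approximate) L2 distances between toy curves and renders it with plotly hover tooltips. EmbedIt and any reimplementation provide much richer interactivity (brushing, linked curve views, 3D, tours) on top of this; the toy data and plotly choice are illustrative only.

```r
# Sketch of the basic ingredient: a 2D MDS embedding of L2 distances between
# curves, rendered with hover tooltips showing the curve id.
library(plotly)
set.seed(4)
grid <- seq(0, 1, length.out = 101)
n <- 60
cluster <- sample(1:2, n, replace = TRUE)
Y <- t(sapply(seq_len(n), function(i)
  sin(2 * pi * grid + (cluster[i] - 1) * pi / 2) + rnorm(101, sd = 0.2)))
D   <- dist(Y) * sqrt(diff(grid)[1])      # approximate L2 distances between curves
emb <- cmdscale(D, k = 2)                 # classical MDS embedding
plot_ly(
  x = emb[, 1], y = emb[, 2],
  text = paste0("curve ", seq_len(n), ", cluster ", cluster),
  color = factor(cluster),
  type = "scatter", mode = "markers"
)
```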
Beyond the methodological/theoretical topics below, we could develop more applied thesis topics in this context together with external partners that deal with large functional data sets such as the German Mouse Clinic (e.g. auditory brain stem response curves) or with (partners of) Prof. Christian Müller’s group at the Institute of Statistics.
Realistic evaluation of outlier detection should use real datasets with real outliers. Usually, this is done by selecting all majority class observations from a labeled dataset and contaminating them with a few randomly sampled instances from other minority classes. This approach yields “false” negatives/positives unless the minority class is really sufficiently and consistently different from the majority class observations. The goal of this project is to investigate under which circumstances this “unless” applies by comparing two approaches:
- use only datasets from the mlr-fda classification benchmark (pdf) that were predicted very accurately to generate outlier detection benchmark data
- for the generated benchmark datasets, use detailed observation-level mlr-fda benchmark results to pick only those minority (and maybe also majority?) class observations that were consistently labeled correctly
Additionally, we are interested in how these results are affected by measures of dataset structure like separability (pdf) and intrinsic dimensionality (pdf, CRAN).
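A minimal sketch of the contamination scheme described above, with a made-up helper name; in the project, the plain random sampling of minority-class observations would be replaced or filtered by the classification-reliability criteria from the mlr-fda benchmark.

```r
# Sketch of the usual contamination scheme: keep all majority-class observations
# as inliers and spike in a few randomly sampled minority-class observations as
# "outliers".
make_outlier_benchmark <- function(X, y, contamination = 0.05, seed = 1) {
  set.seed(seed)
  majority <- names(which.max(table(y)))
  inliers  <- which(y == majority)
  pool     <- which(y != majority)
  n_out    <- ceiling(contamination * length(inliers))
  outliers <- sample(pool, min(n_out, length(pool)))
  idx <- c(inliers, outliers)
  list(
    X = X[idx, , drop = FALSE],
    is_outlier = c(rep(FALSE, length(inliers)), rep(TRUE, length(outliers)))
  )
}

# example with toy "functional" data: two classes of curves stored as rows of X
grid <- seq(0, 1, length.out = 101)
X <- rbind(
  t(replicate(95, sin(2 * pi * grid) + rnorm(101, sd = 0.2))),
  t(replicate(5,  cos(2 * pi * grid) + rnorm(101, sd = 0.2)))
)
y <- rep(c("A", "B"), c(95, 5))
bench <- make_outlier_benchmark(X, y, contamination = 0.03)
table(bench$is_outlier)
```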
The area-under-the-mass-volume-curve (AUMVC, pdf) can be used to tune outlier detection algorithms. A major caveat, however, is that it relies on Monte Carlo simulation to approximate integrals and is thus not applicable to high-dimensional settings. Combining it with dimension reduction and manifold learning may make it possible to resolve this issue.
The goal of this project is to implement the AUMVC framework in manifun and to conduct initial experiments. The central questions to be answered are:
- How robust is the AUMVC approach to the ambient dimensionality of the data? This could be assessed by comparing results on image and functional data.
- A further question that may be investigated: are there distance measures for complex data such as images that can be used to induce a suitable bias for AUMVC to work on (embeddings of) such data?
- Stretch goal: can AUMVC be adapted to sample only from the relevant space so it scales better to (nominally) high-dimensional data, e.g. by simulating data uniformly from the convex hull of the observed data or from lower-dimensional (but almost losslessly compressed) representations/embeddings of the data?
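For reference, here is a sketch of one common Monte-Carlo version of the mass-volume curve mentioned above, using a kernel-density score on 2D toy data. The grid of mass levels, the uniform bounding-box sampling and the trapezoidal AUMVC approximation are illustrative choices; the bounding-box sampling is precisely the step that degrades as the ambient dimension grows.

```r
# Monte-Carlo mass-volume curve for a scoring function s (higher = more "normal"):
# level-set volumes are estimated from uniform samples over the bounding box.
mass_volume_curve <- function(X, score, n_mc = 20000,
                              alphas = seq(0.05, 0.95, by = 0.05)) {
  d <- ncol(X)
  lo <- apply(X, 2, min); hi <- apply(X, 2, max)
  box_vol <- prod(hi - lo)
  U <- sapply(seq_len(d), function(j) runif(n_mc, lo[j], hi[j]))  # uniform MC sample
  s_data <- score(X)
  s_mc   <- score(U)
  sapply(alphas, function(a) {
    t_a <- quantile(s_data, probs = 1 - a)   # score threshold capturing mass >= a
    box_vol * mean(s_mc >= t_a)              # MC estimate of the level-set volume
  })
}

# example: kernel-density score on 2D Gaussian data
set.seed(5)
X <- matrix(rnorm(500 * 2), ncol = 2)
kde_score <- function(Z) {
  # simple product-Gaussian kernel density score evaluated at the rows of Z
  h <- apply(X, 2, bw.nrd0)
  sapply(seq_len(nrow(Z)), function(i)
    mean(dnorm(Z[i, 1], X[, 1], h[1]) * dnorm(Z[i, 2], X[, 2], h[2])))
}
alphas <- seq(0.05, 0.95, by = 0.05)
mv <- mass_volume_curve(X, kde_score, alphas = alphas)
# area under the mass-volume curve via the trapezoidal rule
# (smaller values = tighter level sets, i.e. a better scoring function)
aumvc <- sum(diff(alphas) * (head(mv, -1) + tail(mv, -1)) / 2)
aumvc
```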
Multi-dimensional scaling (MDS) can be used to represent the outlier structure of functional datasets (pdf). However, MDS embeddings represent the entire data structure (… hopefully, at least), not just structural outlyingness. Since MDS embedding dimensions are sorted by decreasing amount of “explained structure”, this might lead to components of structural outlyingness being represented only in “late” embedding dimensions in datasets with few outliers and complex structured variation of high rank.
You would develop and evaluate a procedure for identifying embedding dimensions (or 2D-subspaces of such embeddings) in which structural outlyingness is reflected, i.e. the goal is to find relevant (combinations of) embedding dimensions for outlier visualization/detection (and, possibly, for tuning AUMVC if embeddings are too high-dimensional, see above). Possible approaches:
- use something like HiCS (Link; this is computationally heavy)
- run Local Outlier Factor (LOF) or similar methods on each 2D-subspace and pick the ones where upper-tail (!) dependencies with global LOF scores are maximal – “upper-tail” only because the strength of association between global LOF scores and corresponding subspace LOF scores for low/intermediate values is irrelevant for this issue.
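A rough sketch of the second approach, assuming the dbscan package for LOF: LOF scores are computed on the full embedding and on every 2D coordinate subspace, and subspaces are ranked by their agreement with the global scores among the top-scoring observations only. The top-10% cutoff and the Spearman correlation are placeholder choices for the upper-tail dependence measure.

```r
# Rank 2D embedding subspaces by upper-tail agreement of subspace LOF scores
# with global LOF scores.
library(dbscan)
set.seed(6)
n <- 300
# toy embedding: 5 dimensions, 10 outliers that are only visible in dimension 5
emb <- cbind(matrix(rnorm(n * 4), n), c(rnorm(n - 10), rnorm(10, mean = 6)))
colnames(emb) <- paste0("dim", 1:5)
k <- 20
lof_global <- lof(emb, minPts = k)
top <- order(lof_global, decreasing = TRUE)[seq_len(ceiling(0.1 * n))]  # upper tail only
dim_pairs <- combn(ncol(emb), 2)
agreement <- apply(dim_pairs, 2, function(p) {
  lof_sub <- lof(emb[, p], minPts = k)
  cor(lof_global[top], lof_sub[top], method = "spearman")
})
best <- order(agreement, decreasing = TRUE)[1:3]
data.frame(
  subspace  = apply(dim_pairs[, best], 2, function(p) paste(colnames(emb)[p], collapse = " x ")),
  agreement = round(agreement[best], 2)
)
```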
Replicate the simulation study and application examples of Aléman-Gomez et al. (2021) based on a less arbitrary geometrical-topological approach and compare results.
Both UMAP and t-SNE, state-of-the-art manifold learning methods, can be used to detect and represent the cluster structure of complex, high-dimensional data. However, it remains an open question which method is more suitable for this task (or for specific aspects of it) under which conditions. Answering this question is complicated since several hyperparameters need to be tuned for both methods and the underlying task is unsupervised (i.e., tuning is hard…).
While there are indications that UMAP leads to “better” clusterings (smaller intra-cluster distances, larger inter-cluster distances) when high-dimensional data consists of clearly separated clusters, the situation is less clear for very close or even overlapping clusters. The focus of this project is to assess the latter setting, and tasks include the following:
- set up extensive synthetic experiments to obtain an initial understanding of the problem (a single-cell sketch follows after this list). Possible factors to assess:
  - Overlap/separability as a function of mean and variance of the underlying data generating processes
  - Parameter sensitivity w.r.t. dimensionality and number of observations
  - Relevant parameters for improving separability
- investigate whether measures of separability (pdf) can be used to reliably infer the structure of a data set
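As referenced in the first item above, here is a sketch of a single cell of such a synthetic experiment, assuming the uwot, Rtsne and cluster packages: two Gaussian clusters with controllable separation are embedded with UMAP and t-SNE and compared by the average silhouette width of the true labels in the embedding. A real study would cross separation, dimension, sample size and the methods' hyperparameters systematically and use several quality measures.

```r
# One cell of a UMAP vs. t-SNE comparison on synthetic clusters.
library(uwot)
library(Rtsne)
library(cluster)
set.seed(7)
n <- 200; d <- 20; delta <- 2          # delta controls cluster separation
labels <- rep(1:2, each = n / 2)
X <- matrix(rnorm(n * d), n)
X[labels == 2, 1] <- X[labels == 2, 1] + delta

avg_sil <- function(emb) mean(silhouette(labels, dist(emb))[, "sil_width"])
emb_umap <- umap(X, n_neighbors = 15, min_dist = 0.1)
emb_tsne <- Rtsne(X, perplexity = 30)$Y
c(umap = avg_sil(emb_umap), tsne = avg_sil(emb_tsne))
```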
Simultaneous, auto-correlation-corrected confidence bands for functional regression coefficients (BA, MA possible with some extensions)
Confidence intervals for refund’s functional regression models estimated by pffr rely on very restrictive assumptions about model residuals and are only valid point-wise, not simultaneously across the function. Therefore, they are inappropriate in many applications and tend to yield over-optimistic results.
For this thesis, you would implement Liebl & Reimherr’s (2020) proposal for fast and fair simultaneous CIs for pffr-fits and compare its operating characteristics to bootstrap-based and conventional alternatives.
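As one example of a conventional alternative (not Liebl & Reimherr's method), the sketch below constructs simulation-based simultaneous bands for a smooth estimate from an mgcv fit, using the maximum statistic over draws from the approximate Gaussian posterior of the spline coefficients. Since pffr fits are mgcv models under the hood, the same idea carries over to functional coefficients in principle; the univariate smooth here is just a stand-in.

```r
# Simulation-based simultaneous confidence band for a smooth estimate.
library(mgcv)
library(MASS)
set.seed(8)
n <- 300
x <- runif(n)
y <- sin(2 * pi * x) + rnorm(n, sd = 0.3)
fit  <- gam(y ~ s(x))
newd <- data.frame(x = seq(0, 1, length.out = 200))
Xp   <- predict(fit, newd, type = "lpmatrix")   # maps coefficients to the fitted curve
est  <- drop(Xp %*% coef(fit))
se   <- sqrt(rowSums((Xp %*% vcov(fit)) * Xp))  # pointwise standard errors
B    <- 10000
beta <- mvrnorm(B, coef(fit), vcov(fit))        # draws from the approximate posterior
maxt <- apply(abs((Xp %*% t(beta) - est) / se), 2, max)
crit <- quantile(maxt, 0.95)                    # simultaneous critical value
head(cbind(lower = est - crit * se, upper = est + crit * se))
# pointwise bands would use qnorm(0.975) instead of crit, which is noticeably larger here
```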
These fairly old and rather badly written functions (refund's pffr and the related code it builds on) implement very general classes of penalized regression models (GAMs and GAMMs) for functional responses and/or predictors. Your thesis would be to re-write them from scratch, with my help, using best practices for R programming like proper unit tests, input validation, and extensive documentation. This could also include developing a more streamlined, consistent formula interface, developing better methods to deal with factor covariates and interaction effects, and writing up some interesting case studies to be published as a vignette accompanying the package. See Scheipl & Greven (2017) for a review of the underlying methodology.
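To illustrate the intended testing style, here is a small runnable testthat example. The validation helper is a made-up stand-in for the kind of input checking the refactored formula interface would need; it is not existing refund code, and the real tests would target the refactored model-fitting functions themselves.

```r
library(testthat)

# hypothetical validation helper: functional responses must be numeric matrices
# with at least two evaluation points (stand-in for the real interface checks)
check_functional_response <- function(Y) {
  if (!is.matrix(Y) || !is.numeric(Y) || ncol(Y) < 2)
    stop("functional response must be a numeric matrix with >= 2 columns")
  Y
}

test_that("functional responses are validated", {
  expect_error(check_functional_response(1:10), "matrix")
  expect_error(check_functional_response(matrix(letters[1:4], 2)), "matrix")
  expect_silent(check_functional_response(matrix(rnorm(20), 4)))
})
```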