# SAGEA: Sparse Autoencoder-based Group Embeddings Aggregation for Fairness-Preserving Group Recommendations

This repository contains the source code, raw results, and analysis scripts needed to reproduce the results of the SAGEA group recommendation approach submitted to the RecSys 2025 LBR.

**Source code and raw results are available through the linked GitHub repository** (see Files tab).

- See [Code Details](https://osf.io/zkqf3/wiki/Code%20details/?view_only=1a834cd29fcf4765a433bdffdf5d3791) for more information on how to run the repository.

## Abstract

Group recommender systems (GRS) deliver suggestions to users who plan to engage in activities together, rather than individually. To be effective, they must reflect shared group interests while maintaining fairness by accounting for the preferences of individual members. Traditional approaches address fairness through post-processing, aggregating the recommendations after they are generated for each group member. However, this strategy adds significant complexity and offers only limited impact due to its late position in the GRS pipeline. In contrast, we propose an efficient *in-processing* approach based on a combination of (1) monosemantic sparse user representations generated via a sparse autoencoder (SAE) bridge module, and (2) fairness-preserving group profile aggregation strategies. By leveraging these *disentangled and interpretable representations*, our Sparse Autoencoder-based Group Embeddings Aggregation (SAGEA) approach enables transparent, fairness-preserving profile aggregation within the GRS process. Experiments show that SAGEA improves both recommendation accuracy and fairness over profile and results aggregation baselines, while being considerably more efficient than post-processing techniques.

## Reproducibility notes

The ELSA model was trained on the training-set users for up to 100 epochs with a batch size of 1024 and early stopping applied after 25 epochs without improvement. We used the Adam optimizer (Kingma & Ba, 2017) with a learning rate of $1 \times 10^{-4}$, $\beta_1 = 0.9$, and $\beta_2 = 0.99$.

The SAE variants were trained for up to 4,000 epochs with early stopping applied after 250 epochs without improvement, using a batch size of 1024. We used the Adam optimizer with a learning rate of $1 \times 10^{-5}$, $\beta_1 = 0.9$, and $\beta_2 = 0.99$.

For the comparison against baselines, we selected the best-performing hyperparameters for each SAGEA aggregation strategy w.r.t. the Borda count of $\text{NDCG}_{com}$ and $\text{NDCG}_{min}$ evaluated on validation groups.
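For orientation, the sketch below restates these training settings as a generic PyTorch loop with Adam and patience-based early stopping. The model object, its `model.loss(batch)` call, and the data loaders are placeholders rather than the repository's actual API; see the linked GitHub repository and the Code Details page for the exact training code.

```python
import torch

# Hyperparameters as reported in the reproducibility notes above.
ELSA_CFG = {"max_epochs": 100, "patience": 25, "batch_size": 1024, "lr": 1e-4}
SAE_CFG = {"max_epochs": 4000, "patience": 250, "batch_size": 1024, "lr": 1e-5}
BETAS = (0.9, 0.99)  # Adam beta_1, beta_2


def train(model, train_loader, val_loader, cfg, device="cpu"):
    """Generic training loop with patience-based early stopping on validation loss."""
    model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=cfg["lr"], betas=BETAS)
    best_val, stale_epochs = float("inf"), 0

    for _ in range(cfg["max_epochs"]):
        model.train()
        for batch in train_loader:
            batch = batch.to(device)
            optimizer.zero_grad()
            loss = model.loss(batch)  # placeholder: assumes the model exposes its own loss
            loss.backward()
            optimizer.step()

        # Validation pass drives early stopping.
        model.eval()
        with torch.no_grad():
            val_loss = sum(model.loss(b.to(device)).item() for b in val_loader) / len(val_loader)

        if val_loss < best_val:
            best_val, stale_epochs = val_loss, 0
        else:
            stale_epochs += 1
            if stale_epochs >= cfg["patience"]:
                break  # stop after cfg["patience"] epochs without improvement

    return model
```

Likewise, the hyperparameter selection can be read as a Borda count over the two validation metrics. The sketch below uses a simple ascending-rank Borda scheme with made-up scores purely for illustration; it is not the repository's selection script, and the exact tie-breaking is not specified here.

```python
def borda_select(results):
    """Pick the hyperparameter setting with the highest Borda count over two metrics.

    `results` maps a hyperparameter label to (ndcg_com, ndcg_min) measured on the
    validation groups; higher is better for both metrics.
    """
    labels = list(results)
    points = {label: 0 for label in labels}
    for metric_idx in (0, 1):  # NDCG_com, NDCG_min
        # Rank ascending: the worst setting gets 0 points, the best gets len(labels) - 1.
        ranked = sorted(labels, key=lambda label: results[label][metric_idx])
        for pts, label in enumerate(ranked):
            points[label] += pts
    return max(points, key=points.get)


# Illustrative (made-up) validation scores for three hypothetical settings:
best = borda_select({
    "k=64":  (0.312, 0.105),
    "k=128": (0.318, 0.112),
    "k=256": (0.315, 0.118),
})
print(best)
```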