Cooperative Meta-Learning Service for Recommender Systems
Lukas Wegmeth, Joeran Beel
[Figure: Exemplary workflow]
Our overarching goal is to bring the power of AutoML to recommender systems.
We present the idea of a cooperative meta-learning service for recommender systems.
Our solution solves the common problems listed below, or at least mitigates their impact.
Common recommender systems problems
1. Only a few, limited benchmarks are available
2. Most researchers and students repeatedly evaluate a few common data sets
3. Evaluation standards are often different between libraries
4. Publicly available algorithm selection is absent
The core of our idea
Meta-learning solves various problems in AutoML.
With meta-learning, we:
1. enable algorithm selection (see the sketch after this list)
2. provide a thorough benchmark (tabular-like)
3. standardize evaluation routines
4. save computing power
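To make the algorithm-selection idea concrete, here is a minimal sketch, assuming the service stores one row of meta-features per already-evaluated data set together with the best-performing algorithm; the feature names, values, and algorithm labels are illustrative placeholders, not the actual metadata of our service.

```python
# Minimal sketch of meta-learning-based algorithm selection.
# Meta-feature names, values, and algorithm labels are illustrative only.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Meta-knowledge base: one row per already-evaluated data set.
meta_knowledge = pd.DataFrame({
    "num_users":      [943,   6040,  69878],
    "num_items":      [1682,  3706,  10677],
    "sparsity":       [0.937, 0.955, 0.986],
    "mean_rating":    [3.53,  3.58,  3.51],
    "best_algorithm": ["ItemKNN", "BiasedMF", "BiasedMF"],
})

# The meta-learner maps meta-features to the best-performing algorithm.
X = meta_knowledge.drop(columns="best_algorithm")
y = meta_knowledge["best_algorithm"]
meta_learner = RandomForestClassifier(random_state=0).fit(X, y)

# A client sends the meta-features of a new, unseen data set and receives
# an algorithm recommendation without evaluating every candidate itself.
new_data_set = pd.DataFrame([{
    "num_users": 2113, "num_items": 10109, "sparsity": 0.996, "mean_rating": 3.42,
}])
print(meta_learner.predict(new_data_set)[0])
```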
Some challenges to overcome
•Select suitable meta-features and meta-learner
•Incentivize users to donate data to make the service better
•Make the server robust and validate user data properly (see the sketch below)
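How donated data should be validated is deliberately left open here; the following is only a hypothetical sketch of a server-side sanity check, and the record fields and value ranges are assumptions for illustration rather than a prescribed schema.

```python
# Hypothetical sketch of server-side validation of a donated metadata record.
# Field names and accepted values are assumptions made for illustration.

REQUIRED_FIELDS = {"data_set_hash", "library", "algorithm", "metric", "score"}

def validate_donation(record: dict) -> list:
    """Return a list of validation errors; an empty list means the record is accepted."""
    errors = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        return [f"missing fields: {sorted(missing)}"]
    if record["metric"] not in {"RMSE", "MAE"}:
        errors.append(f"unsupported metric: {record['metric']}")
    if not isinstance(record["score"], (int, float)) or record["score"] < 0:
        errors.append("score must be a non-negative number")
    return errors

# Example: a well-formed donation passes all checks.
donation = {
    "data_set_hash": "3b5e",  # identifies the data set without transferring it
    "library": "LensKit",
    "algorithm": "BiasedMF",
    "metric": "RMSE",
    "score": 0.92,
}
assert validate_donation(donation) == []
```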
We implemented a proof-of-concept to show that it works in practice.
Recommender systems basics
Goal: predict future user-item interactions based on previous interactions
In practice: a specialized machine learning application
Therefore: generalized (AutoML) concepts apply only under constraints
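As a toy illustration of the stated goal, and not a method we propose, a missing user-item rating can be predicted from previously observed ratings with a simple bias baseline; the data below are made up.

```python
# Toy illustration of the recommender-systems goal: predict an unseen
# user-item interaction (a rating) from previously observed interactions.
import numpy as np

# Observed ratings: rows are users, columns are items, NaN means "not yet rated".
ratings = np.array([
    [5.0, 3.0, np.nan],
    [4.0, np.nan, 1.0],
    [np.nan, 2.0, 2.0],
])

global_mean = np.nanmean(ratings)
user_bias = np.nanmean(ratings, axis=1) - global_mean
item_bias = np.nanmean(ratings, axis=0) - global_mean

# Predict the missing rating of user 0 for item 2 as: mean + user bias + item bias.
prediction = global_mean + user_bias[0] + item_bias[2]
print(round(float(prediction), 2))
```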
Contributions
•Collection of 70 public data sets with standardized loading routines.
•25 meta-features that are calculated in at most ten seconds (sketched after this list).
•Client implementation for the LensKit library.
•Metadata with evaluations of 45 supported data sets on 7 LensKit algorithms.
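The 25 meta-features themselves are not listed on this poster; the snippet below only sketches the kind of cheap, interaction-level statistics that stay within such a time budget, and the feature names are hypothetical examples rather than our actual feature set.

```python
# Sketch of fast, interaction-level meta-features for a ratings data set.
# The feature names are hypothetical examples, not the actual 25 meta-features.
import pandas as pd

def compute_meta_features(interactions: pd.DataFrame) -> dict:
    """interactions is expected to have the columns: user, item, rating."""
    n_users = interactions["user"].nunique()
    n_items = interactions["item"].nunique()
    n_ratings = len(interactions)
    return {
        "num_users": n_users,
        "num_items": n_items,
        "num_ratings": n_ratings,
        "sparsity": 1.0 - n_ratings / (n_users * n_items),
        "mean_rating": interactions["rating"].mean(),
        "rating_std": interactions["rating"].std(),
        "ratings_per_user": n_ratings / n_users,
    }

interactions = pd.DataFrame({
    "user": [1, 1, 2, 3],
    "item": [10, 20, 10, 30],
    "rating": [4.0, 3.0, 5.0, 2.0],
})
print(compute_meta_features(interactions))
```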
Next steps
•More meta-features to improve the results further.
•Support for more client libraries and generic library support.
•General improvements to the meta-learner.
Results
Top-2 selection accuracy: 91.07% (RMSE) / 76.67% (MAE)
Top-3 selection accuracy: 97.78% (RMSE) / 93.33% (MAE)
•The results show the performance of our default Random Forest meta-learner, trained on the evaluations of 45 data sets with 7 algorithms.
•We used leave-one-out validation and averaged the results over 50 validation repetitions to reduce the impact of randomness (see the sketch after the result tables).
RMSE meta-learner
Method         RMSE    Accuracy
Virtual Best   1.178   100%
Meta-Learner   1.186   64.09%
Single Best    1.191   35.56%

MAE meta-learner
Method         MAE     Accuracy
Virtual Best   0.871   100%
Meta-Learner   0.879   57.78%
Single Best    0.881   44.44%
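For readers who want the evaluation protocol spelled out, here is a minimal sketch of leave-one-out validation with repeated runs; the meta-feature matrix X and best-algorithm labels y are assumed to be given as NumPy arrays, and data loading is omitted.

```python
# Sketch of the leave-one-out protocol: hold out each data set once, train the
# meta-learner on the rest, and repeat with different seeds to average out the
# randomness of the Random Forest. X: meta-features per data set, y: best algorithm.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut

def loo_selection_accuracy(X, y, repetitions=50):
    accuracies = []
    for repetition in range(repetitions):
        hits = 0
        for train_idx, test_idx in LeaveOneOut().split(X):
            model = RandomForestClassifier(random_state=repetition)
            model.fit(X[train_idx], y[train_idx])
            hits += int(model.predict(X[test_idx])[0] == y[test_idx][0])
        accuracies.append(hits / len(X))
    return float(np.mean(accuracies))

# Usage (with hypothetical arrays): loo_selection_accuracy(meta_features, best_algorithms)
```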