Trajectory Next-Location Prediction Evaluator
Evaluation Metrics
For the task of trajectory next-location prediction, this evaluator implements a series of TopK-based evaluation metrics.
The following table defines the symbols used in these metrics.

| Symbol | Meaning |
|---|---|
| \(N\) | The number of test samples |
| \(i\) | The \(i\)-th test sample |
| \(K\) | The number of top-ranked predictions considered in evaluation |
| \(T(i)\) | The real next-hop location in the \(i\)-th test sample |
| \(R(i)\) | The set of the top \(K\) locations in the prediction result for the \(i\)-th test sample |
| \(Hit(i)\) | The set of correctly predicted locations in the \(i\)-th test sample, i.e. \(T(i) \cap R(i)\) |
| \(Rank(i)\) | The rank of \(T(i)\) within \(R(i)\) for the \(i\)-th test sample |
| \(\lvert \cdot \rvert\) | The cardinality (number of elements) of a set |
Using the symbols above, the TopK evaluation metrics are calculated as follows:
| Metric | Formula |
|---|---|
| Precision | \(Precision@K=\frac{\sum_{i=1}^{N}\lvert \operatorname{Hit}(i) \rvert}{N \times K}\) |
| Recall | \(Recall@K=\frac{\sum_{i=1}^{N}\lvert \operatorname{Hit}(i) \rvert}{N}\) |
| F1-score | \(F1@K=\frac{2 \times Precision@K \times Recall@K}{Precision@K + Recall@K}\) |
| Mean Reciprocal Rank | \(MRR@K=\frac{1}{N} \sum_{i=1}^{N} \frac{1}{\operatorname{Rank}(i)}\) |
| NDCG | \(NDCG@K=\frac{1}{N} \sum_{i=1}^{N} \frac{1}{\log_{2}(\operatorname{Rank}(i)+1)}\) |
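
To make these formulas concrete, the sketch below computes the same TopK metrics from raw prediction scores. It is a minimal illustration, not the LibCity implementation: the function name `topk_metrics`, its argument layout, and the convention that a sample whose target falls outside the top \(K\) contributes zero to MRR@K and NDCG@K are assumptions made for this example.

```python
import math
from typing import Dict, List, Sequence


def topk_metrics(y_true: Sequence[int],
                 y_score: Sequence[Sequence[float]],
                 k: int) -> Dict[str, float]:
    """Compute Precision@K, Recall@K, F1@K, MRR@K and NDCG@K.

    y_true[i]  -- T(i), the real next location id of the i-th test sample
    y_score[i] -- model scores over all candidate locations for sample i
    """
    n = len(y_true)
    hits = 0     # sum of |Hit(i)|; 0 or 1 per sample, since T(i) is a single location
    mrr = 0.0    # sum of 1 / Rank(i) over samples whose target is in the top K
    ndcg = 0.0   # sum of 1 / log2(Rank(i) + 1) over the same samples

    for true_loc, scores in zip(y_true, y_score):
        # R(i): indices of the K highest-scoring locations, best first
        top_k = sorted(range(len(scores)), key=lambda j: scores[j], reverse=True)[:k]
        if true_loc in top_k:
            rank = top_k.index(true_loc) + 1  # Rank(i), 1-based
            hits += 1
            mrr += 1.0 / rank
            ndcg += 1.0 / math.log2(rank + 1)

    precision = hits / (n * k)
    recall = hits / n
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {
        f"Precision@{k}": precision,
        f"Recall@{k}": recall,
        f"F1@{k}": f1,
        f"MRR@{k}": mrr / n,
        f"NDCG@{k}": ndcg / n,
    }


# Toy usage: 3 test samples, 5 candidate locations, K = 2
scores: List[List[float]] = [
    [0.1, 0.7, 0.2, 0.0, 0.0],
    [0.3, 0.1, 0.5, 0.1, 0.0],
    [0.2, 0.2, 0.1, 0.4, 0.1],
]
print(topk_metrics(y_true=[1, 0, 3], y_score=scores, k=2))
```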
Evaluation Settings
The following parameters configure the evaluator:

Location: `libcity/config/evaluator/TrajLocPredEvaluator.json`

metrics (list of string)
: Defaults to `["Recall"]`. Valid values are `["Precision", "Recall", "F1", "MRR", "MAP", "NDCG"]`.

topk (int)
: Defaults to `1`.
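
For reference, a configuration mirroring the documented options might look like the sketch below. This is an illustration of the two parameters and their defaults rather than an excerpt of `TrajLocPredEvaluator.json` itself.

```python
# Illustrative evaluator options mirroring the documented parameters;
# the values below are the documented defaults, not an excerpt of the file.
evaluator_config = {
    "metrics": ["Recall"],  # any subset of ["Precision", "Recall", "F1", "MRR", "MAP", "NDCG"]
    "topk": 1,              # evaluate only the top-1 predicted location
}
```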