# evaluator

## PeriodicEvaluator
Periodic evaluation assesses the entire incremental word embedding model's performance on an intrinsic NLP test dataset after every p instances have been processed and trained on. This allows continuous evaluation of the model's accuracy and helps identify areas for improvement.
Source code in rivertext/evaluator/eval.py
__init__(dataset, model, p=32, golden_dataset=None, eval_func=None, path_output_file=None)
Create an instance of the PeriodicEvaluator class.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| dataset | IterableDataset | Stream to train on. | *required* |
| model | IWVBase | Model to train. | *required* |
| p | int | Batch size for the dataloader, by default 32. | 32 |
| golden_dataset | Callable | Golden dataset of relations, by default None. | None |
| eval_func | Callable[[Dict, np.ndarray, np.ndarray], int] | Evaluation function according to the golden dataset, by default None. | None |
| path_output_file | str | Path to the output file, by default None. | None |
Source code in rivertext/evaluator/eval.py
run(p=3200)
Executes a periodic assessment of the entire model every p instances, providing continuous evaluation and identifying areas for improvement.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| p | int | Number of instances to process before evaluating the model, by default 3200. | 3200 |
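The control flow of run can be sketched as a generic train-then-evaluate loop. The run_periodic helper and the toy stream below are hypothetical stand-ins for illustration, not the rivertext implementation:

```python
def run_periodic(stream, learn, evaluate, p=3200):
    """Train on each instance and evaluate the model every p instances.

    Returns the list of periodic evaluation results.
    """
    results = []
    for i, instance in enumerate(stream, start=1):
        learn(instance)                  # incremental training step
        if i % p == 0:                   # every p-th instance...
            results.append(evaluate())   # ...run the intrinsic evaluation
    return results

# Toy usage: "learn" just records instances; "evaluate" reports how many
# instances have been seen so far. With p=3, evaluation fires at 3, 6, 9.
seen = []
checkpoints = run_periodic(range(10), seen.append, lambda: len(seen), p=3)
# checkpoints == [3, 6, 9]
```

Keeping p large relative to the batch size amortizes the cost of the intrinsic test over many training steps, which is why the evaluation period (3200) is much larger than the dataloader batch size (32).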
Source code in rivertext/evaluator/eval.py