Hi Nathan,
The code in the demo notebook is used only for delay inference, not for
training. In this code, we load a model that we trained with the
RouteNet implementation in "routenet_with_link_cap.py". Then, we load
samples from our datasets (generated with our packet-level simulator),
run inference with the RouteNet model and, finally, compare RouteNet's
predictions with the ground-truth values.
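For illustration, the inference flow is roughly the following (a minimal
sketch only: "build_routenet_graph", "load_sample" and the checkpoint path
are placeholders, not the actual identifiers used in the notebook):

    import tensorflow as tf  # TensorFlow 1.x, as used by the RouteNet code

    graph = tf.Graph()
    with graph.as_default():
        # Hypothetical helper: rebuilds the RouteNet model defined in
        # routenet_with_link_cap.py and returns the prediction tensor plus
        # a dict of input placeholders (traffic, capacities, paths, ...).
        predictions_op, placeholders = build_routenet_graph()
        saver = tf.train.Saver()

    # Hypothetical helper: loads one sample from our datasets as a dict of
    # numpy arrays keyed like the placeholders above.
    sample = load_sample('datasets/sample.tfrecords')

    with tf.Session(graph=graph) as sess:
        # Restore the weights of the trained model (checkpoint path is
        # illustrative, not the real one in the repository).
        saver.restore(sess, 'trained_models/model.ckpt')
        feed = {placeholders[name]: sample[name] for name in placeholders}
        preds = sess.run(predictions_op, feed_dict=feed)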
In this case it is not necessary to normalize the output parameters
(i.e., delay) since we are not using them for training. We only
normalize the input parameters of RouteNet (traffic and link
capacities). Note that we then denormalize RouteNet's predictions to
compare them with the real delay values of the ground truth, which are
in their original scale (variable "label_Delay"):
predictions = 0.54*preds + 0.37
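Putting it together, the round trip looks roughly like this (only the
delay constants 0.54 and 0.37 come from the notebook; the input
normalization constants and the data below are placeholders):

    import numpy as np

    # Placeholder normalization constants for the inputs (not the real values).
    TRAFFIC_MEAN, TRAFFIC_STD = 0.5, 0.5
    CAP_SCALE = 10.0

    # Dummy stand-ins for one sample, the raw model output and the ground truth.
    sample = {'traffic': np.array([0.4, 0.8]),
              'capacities': np.array([10.0, 40.0])}
    preds = np.array([0.10, 0.30])        # raw (normalized) RouteNet output
    label_Delay = np.array([0.42, 0.53])  # ground-truth delays from the simulator

    # Normalize the inputs before feeding RouteNet (same scaling as in training).
    norm_traffic = (sample['traffic'] - TRAFFIC_MEAN) / TRAFFIC_STD
    norm_capacities = sample['capacities'] / CAP_SCALE

    # Denormalize RouteNet's predictions back to delay units.
    predictions = 0.54 * preds + 0.37

    # Compare against the ground truth.
    mae = np.mean(np.abs(predictions - label_Delay))
    rel_err = np.mean(np.abs(predictions - label_Delay) / np.abs(label_Delay))
    print('MAE: %.4f, mean relative error: %.4f' % (mae, rel_err))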
Regards,
José
On 28/09/19 at 17:28, Nathan Sowatskey wrote:
Hi
I have noted that the demo notebook here:
https://github.com/knowledgedefinednetworking/demo-routenet/blob/master/dem…
does not apply the same normalisation as the code here:
https://github.com/knowledgedefinednetworking/demo-routenet/blob/master/cod…
Specifically, delay is not normalised in the demo notebook.
The demo notebook loads a checkpoint from here:
https://github.com/knowledgedefinednetworking/demo-routenet/tree/master/tra…
This model, then, was also created without normalising the delay. That
implies that the code used to train that model is not the same as the
routenet_with_link_cap.py code at the link above.
In simpler terms, the demo notebook's predictions do not work if the
delay is normalised as at routenet_with_link_cap.py#L85. So, the
training code given in this repository is not compatible with the demo
notebook and the trained model used as an example.
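For reference, if I read it correctly, the line at #L85 scales the delay
labels with something of the form (my paraphrase, the constants may not
be exact):

    # Delay labels are normalised before training (paraphrase of
    # routenet_with_link_cap.py#L85, not the exact code):
    normalised_delay = (delay - 0.37) / 0.54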
Regards
Nathan
_______________________________________________
Kdn-users mailing list
Kdn-users(a)knowledgedefinednetworking.org
https://mail.n3cat.upc.edu/cgi-bin/mailman/listinfo/kdn-users