NPS VM does not start up if NPS host is rebooted
The NPS VM is in the poweroff state.
The NPS host is rebooted intentionally or accidentally.
- Log in to the NPS host using valid credentials.
- Navigate to the path where the Vagrantfile is located:
cd <sync_folder>/nps
- To check the status of the NPS VM, run the following commands:
source source.rc
vagrant status
- If the status is displayed as poweroff, run the following command to bring up the NPS VM:
vagrant up
- To check the status of the NPS VM, rerun the following command:
vagrant status
- Once the status is displayed as running, log in to the NPS VM from outside the NPS host using the OAM IP.
- If the NPS VM is not accessible from outside the NPS host using the OAM IP, log in or SSH to the NPS VM from the NPS host and delete the default route created with the NAT network.
- To check the routes configured on the NPS VM, run the following command:
route -n
- To delete the route created with the NAT network, run the following command:
route del -net 0.0.0.0 gw <gateway_ip_for_NAT_network>
- To verify that the route is deleted, rerun the following command:
route -n
- Log in to the NPS VM and wait for a few minutes for the Kubernetes service to come up.
- To check the status of all the Pods, run the following commands:
kubectl get nodes -n nps
kubectl get pods --all-namespaces
NOTE: All Pods must be in Running state.
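The status check and conditional bring-up in the steps above can be sketched as a small shell helper. This is a minimal sketch, not part of the product tooling: the function name `needs_vagrant_up` and the exact text matched in the `vagrant status` output are assumptions.

```shell
# Hypothetical helper: decide whether "vagrant up" is needed, based on the
# text printed by "vagrant status". The "poweroff" marker is the state named
# in the procedure above.
needs_vagrant_up() {
  # $1: output of "vagrant status"
  case "$1" in
    *poweroff*) return 0 ;;  # VM is powered off: bring it up
    *)          return 1 ;;  # VM is running or in another state
  esac
}

# Intended usage on the NPS host (paths as given in the procedure):
#   cd <sync_folder>/nps
#   source source.rc
#   status="$(vagrant status)"
#   if needs_vagrant_up "$status"; then vagrant up; fi
```

Driving the decision from the captured status text keeps the check scriptable, so the same recovery can run unattended after a host reboot.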
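The route cleanup steps can likewise be scripted. The sketch below extracts the gateway of the default route from `route -n` output so it can be passed to `route del`; the helper name `default_gateway` is an assumption, and the column layout assumed is the standard net-tools one (Destination, Gateway, Genmask, ...).

```shell
# Hypothetical helper: pull the gateway of the default (0.0.0.0) route out of
# "route -n" output, so it can be fed to "route del". Assumes the standard
# net-tools column layout: Destination Gateway Genmask Flags Metric Ref Use Iface.
default_gateway() {
  # $1: output of "route -n"
  printf '%s\n' "$1" | awk '$1 == "0.0.0.0" { print $2; exit }'
}

# Intended usage inside the NPS VM (run as root):
#   routes="$(route -n)"
#   gw="$(default_gateway "$routes")"
#   route del -net 0.0.0.0 gw "$gw"   # delete the NAT-network default route
#   route -n                          # verify the route is gone
```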
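The final check, that all Pods reach Running state, can be polled rather than eyeballed. In this sketch the helper name `not_running_count` is an assumption; it counts non-Running Pods in the output of `kubectl get pods --all-namespaces`, where STATUS is the fourth column (NAMESPACE NAME READY STATUS RESTARTS AGE).

```shell
# Hypothetical helper: count Pods that are not yet in Running state, given
# the output of "kubectl get pods --all-namespaces". Skips the header row
# and reads the STATUS column ($4).
not_running_count() {
  printf '%s\n' "$1" | awk 'NR > 1 && $4 != "Running" { n++ } END { print n + 0 }'
}

# Intended usage inside the NPS VM, polling until everything is up:
#   kubectl get nodes -n nps
#   while [ "$(not_running_count "$(kubectl get pods --all-namespaces)")" -gt 0 ]; do
#     sleep 10
#   done
```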