# status.CreatedAt errors: what we know so far
### Occurrences
- https://bugzilla.redhat.com/show_bug.cgi?id=2099942
- https://bugzilla.redhat.com/show_bug.cgi?id=2094865
- https://github.com/kubernetes/kubernetes/issues/105332
### Symptoms
- A common pattern in these logs is that the `CreatedAt` error coincides with the container's systemd scope exiting and an `UpdateContainerResources` call failing (a sketch of the kubelet-side check follows the excerpt):
```
Jun 13 02:12:39 hz-tt-ranpz-rcp-oe20-sno-24kl05-1-1l hyperkube[5674]: E0613 02:12:39.802973 5674 remote_runtime.go:607] "verify ContainerStatus failed" err="status.CreatedAt is not set" containerID="0c95550d70c34c6e8616645bae3f53541001ca111e13e441cdf7c76b1d5d6c1e"
Jun 13 02:12:39 hz-tt-ranpz-rcp-oe20-sno-24kl05-1-1l hyperkube[5674]: E0613 02:12:39.803033 5674 kuberuntime_manager.go:1079] "getPodContainerStatuses for pod failed" err="status.CreatedAt is not set" pod="cran1/po-cran1-oamext-0"
Jun 13 02:12:39 hz-tt-ranpz-rcp-oe20-sno-24kl05-1-1l hyperkube[5674]: I0613 02:12:39.818837 5674 kubelet.go:2145] "SyncLoop (PLEG): event for pod" pod="cran1/po-cran1-up-0" event=&{ID:d9c906aa-deae-4f2f-8ed4-4a2f1899c4a7 Type:ContainerStarted Data:72d1044d43822e151e5a501a7793b1ce1358acb69ba4953cdfbc1aef67f010f0}
Jun 13 02:12:39 hz-tt-ranpz-rcp-oe20-sno-24kl05-1-1l systemd[1]: crio-0c95550d70c34c6e8616645bae3f53541001ca111e13e441cdf7c76b1d5d6c1e.scope: Succeeded.
Jun 13 02:12:39 hz-tt-ranpz-rcp-oe20-sno-24kl05-1-1l systemd[1]: crio-0c95550d70c34c6e8616645bae3f53541001ca111e13e441cdf7c76b1d5d6c1e.scope: Consumed 366ms CPU time
Jun 13 02:12:39 hz-tt-ranpz-rcp-oe20-sno-24kl05-1-1l systemd[1]: crio-conmon-0c95550d70c34c6e8616645bae3f53541001ca111e13e441cdf7c76b1d5d6c1e.scope: Succeeded.
Jun 13 02:12:39 hz-tt-ranpz-rcp-oe20-sno-24kl05-1-1l systemd[1]: crio-conmon-0c95550d70c34c6e8616645bae3f53541001ca111e13e441cdf7c76b1d5d6c1e.scope: Consumed 569ms CPU time
Jun 13 02:12:39 hz-tt-ranpz-rcp-oe20-sno-24kl05-1-1l hyperkube[5674]: E0613 02:12:39.874561 5674 remote_runtime.go:633] "UpdateContainerResources from runtime service failed" err="rpc error: code = Unknown desc = updating resources for container \"0c95550d70c34c6e8616645bae3f53541001ca111e13e441cdf7c76b1d5d6c1e\" failed: time=\"2022-06-13T02:12:39Z\" level=warning msg=\"Setting back cgroup configs failed due to error: Unit crio-0c95550d70c34c6e8616645bae3f53541001ca111e13e441cdf7c76b1d5d6c1e.scope not found., your state.json and actual configs might be inconsistent.\"\ntime=\"2022-06-13T02:12:39Z\" level=error msg=\"Unit crio-0c95550d70c34c6e8616645bae3f53541001ca111e13e441cdf7c76b1d5d6c1e.scope not found.\"\n (exit status 1)" containerID="0c95550d70c34c6e8616645bae3f53541001ca111e13e441cdf7c76b1d5d6c1e"
Jun 13 02:12:39 hz-tt-ranpz-rcp-oe20-sno-24kl05-1-1l hyperkube[5674]: E0613 02:12:39.874649 5674 cpu_manager.go:470] "ReconcileState: failed to update container" err="rpc error: code = Unknown desc = updating resources for container \"0c95550d70c34c6e8616645bae3f53541001ca111e13e441cdf7c76b1d5d6c1e\" failed: time=\"2022-06-13T02:12:39Z\" level=warning msg=\"Setting back cgroup configs failed due to error: Unit crio-0c95550d70c34c6e8616645bae3f53541001ca111e13e441cdf7c76b1d5d6c1e.scope not found., your state.json and actual configs might be inconsistent.\"\ntime=\"2022-06-13T02:12:39Z\" level=error msg=\"Unit crio-0c95550d70c34c6e8616645bae3f53541001ca111e13e441cdf7c76b1d5d6c1e.scope not found.\"\n (exit status 1)" pod="cran1/po-cran1-oamext-0" containerName="ctr-cran1-appsecretinitipsec-r" containerID="0c95550d70c34c6e8616645bae3f53541001ca111e13e441cdf7c76b1d5d6c1e" cpuSet="0-1,17-33,49-63"
```
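- For context, below is a minimal Go sketch of the kind of check behind the `verify ContainerStatus failed` error above: a zero `CreatedAt` on the CRI `ContainerStatus` is enough for kubelet to reject the whole status. This is illustrative only, not the exact upstream code in `remote_runtime.go`.
```go
package main

import (
	"fmt"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// verifyStatus mimics the shape of kubelet's verification: a container the
// runtime reports with a protobuf-default (zero) CreatedAt is rejected.
func verifyStatus(status *runtimeapi.ContainerStatus) error {
	if status == nil {
		return fmt.Errorf("status is nil")
	}
	if status.CreatedAt == 0 {
		return fmt.Errorf("status.CreatedAt is not set")
	}
	return nil
}

func main() {
	// A zeroed-out status, like the CONTAINER_UNKNOWN one in the crictl
	// inspect output below.
	status := &runtimeapi.ContainerStatus{
		Id:    "0c95550d70c34c6e8616645bae3f53541001ca111e13e441cdf7c76b1d5d6c1e",
		State: runtimeapi.ContainerState_CONTAINER_UNKNOWN,
	}
	fmt.Println(verifyStatus(status)) // status.CreatedAt is not set
}
```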
- Looking at the `crictl inspect` output for the affected containers, they are all in the UNKNOWN state, and the fields in the status are (mostly) zeroed out:
```
"state": "CONTAINER_UNKNOWN",
"createdAt": "1970-01-01T00:00:00Z",
"startedAt": "1970-01-01T00:00:00Z",
"finishedAt": "0001-01-01T00:00:00Z",
"exitCode": 0,
```
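- Side note on the two different "zero" dates above: `createdAt`/`startedAt` travel over CRI as int64 Unix nanoseconds, so a zero value renders as the Unix epoch (1970), while `finishedAt` looks like it was formatted from a Go `time.Time`, whose zero value is year 1. A minimal Go sketch of just the rendering (not a claim about where the zeros originate):
```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// A zero CRI timestamp (int64 Unix nanoseconds) renders as the epoch.
	var createdAtNano int64
	fmt.Println(time.Unix(0, createdAtNano).UTC().Format(time.RFC3339)) // 1970-01-01T00:00:00Z

	// A zero Go time.Time renders as year 1, matching finishedAt above.
	var finished time.Time
	fmt.Println(finished.UTC().Format(time.RFC3339)) // 0001-01-01T00:00:00Z
}
```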
- However, looking at the runc state for the same container, the creation time is intact:
```
"status": "stopped",
"bundle": "/run/containers/storage/overlay-containers/0c95550d70c34c6e8616645bae3f53541001ca111e13e441cdf7c76b1d5d6c1e/userdata",
"rootfs": "/var/lib/containers/storage/overlay/b18219fd06c309b00f95a23ab5789e8dbdb28787204cc3ec235e7e02c8ceb655/merged",
"created": "2022-06-13T02:12:35.942095467Z",
```
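- For comparison on a live node, here is a small helper sketch that shells out to `runc state` and prints the same fields. It assumes `runc` is on `PATH` and that the container lives under runc's default state root; add `--root` if cri-o is configured with a different runtime root:
```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// Subset of the `runc state` JSON output shown above.
type ociState struct {
	ID      string `json:"id"`
	Status  string `json:"status"`
	Bundle  string `json:"bundle"`
	Created string `json:"created"`
}

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: runc-created <container-id>")
		os.Exit(1)
	}
	out, err := exec.Command("runc", "state", os.Args[1]).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var st ociState
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("status=%s created=%s\n", st.Status, st.Created)
}
```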
### Theories
- It's possible there's a path in `runc update` that does not handle a container stopping in the middle of the update.
  - This could be a red herring, but the correlation is too frequent to ignore.
- The zeroed state is likely absorbed into cri-o by the `UpdateContainerStatus` call that runs after the container stopped.
- It's possible the zero values come from runc itself,
  - or from the JSON marshalling of the runc output,
  - or from a window where `UpdateContainerStatus` resets the state while the `ContainerStatus` call reads it via `StateNoLock()` and sees an empty struct (see the sketch after this list),
    - though it does seem like the replacement would be atomic.
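- Purely illustrative sketch (not cri-o's actual code) of that last possibility: if `UpdateContainerStatus` swapped in a freshly allocated state before populating it, an unlocked reader such as `StateNoLock()` could observe a fully zeroed struct, which is exactly what the `CONTAINER_UNKNOWN`, epoch-timestamp status above looks like:
```go
package main

import (
	"fmt"
	"time"
)

type containerState struct {
	Status  string
	Created time.Time
}

type container struct {
	state *containerState
}

// stateNoLock mirrors the idea of reading the state without taking the
// container lock.
func (c *container) stateNoLock() *containerState {
	return c.state
}

func main() {
	c := &container{state: &containerState{Status: "running", Created: time.Now()}}

	done := make(chan struct{})
	go func() {
		// "UpdateContainerStatus": swap in a fresh state, then fill it in.
		// The window between the swap and the field assignments is where an
		// unlocked reader sees zero values. (Intentionally racy; `go run
		// -race` will flag it.)
		fresh := &containerState{}
		c.state = fresh
		time.Sleep(time.Millisecond)
		fresh.Status = "stopped"
		fresh.Created = time.Now()
		close(done)
	}()

	time.Sleep(100 * time.Microsecond) // best effort: land inside the window
	st := c.stateNoLock()
	fmt.Printf("status=%q created=%v\n", st.Status, st.Created)
	<-done
}
```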
### Possible Solutions
- If we detect a `CONTAINER_UNKNOWN` state at the end of a `ContainerStatus` call, call `UpdateContainerStatus` again and re-read, hoping for better information from runc (a rough sketch follows this list)
- Find the actual root cause of this race
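- A rough sketch of the first option, using hypothetical stand-in names for the cri-o internals involved (`runtimeServer`, `containerStatusWithRetry`); the retry simply re-syncs once and reads again:
```go
package main

import (
	"context"
	"fmt"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// runtimeServer is a hypothetical stand-in for the cri-o pieces involved.
type runtimeServer interface {
	ContainerStatus(ctx context.Context, id string) (*runtimeapi.ContainerStatus, error)
	UpdateContainerStatus(ctx context.Context, id string) error
}

// containerStatusWithRetry re-syncs once when the first read comes back UNKNOWN.
func containerStatusWithRetry(ctx context.Context, s runtimeServer, id string) (*runtimeapi.ContainerStatus, error) {
	status, err := s.ContainerStatus(ctx, id)
	if err != nil {
		return nil, err
	}
	if status.State != runtimeapi.ContainerState_CONTAINER_UNKNOWN {
		return status, nil
	}
	// The state looks zeroed out; ask the runtime for fresh information and
	// try once more before giving up.
	if err := s.UpdateContainerStatus(ctx, id); err != nil {
		return nil, fmt.Errorf("re-syncing container %s: %w", id, err)
	}
	return s.ContainerStatus(ctx, id)
}

// fakeServer simulates a runtime whose first answer is zeroed out.
type fakeServer struct{ synced bool }

func (f *fakeServer) ContainerStatus(_ context.Context, id string) (*runtimeapi.ContainerStatus, error) {
	if !f.synced {
		return &runtimeapi.ContainerStatus{Id: id, State: runtimeapi.ContainerState_CONTAINER_UNKNOWN}, nil
	}
	return &runtimeapi.ContainerStatus{
		Id:        id,
		State:     runtimeapi.ContainerState_CONTAINER_EXITED,
		CreatedAt: 1655086355942095467, // 2022-06-13T02:12:35.942095467Z, the runc created time above
	}, nil
}

func (f *fakeServer) UpdateContainerStatus(context.Context, string) error {
	f.synced = true
	return nil
}

func main() {
	status, err := containerStatusWithRetry(context.Background(), &fakeServer{}, "0c95550d...")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(status.State, status.CreatedAt)
}
```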