Setting up an OKD 3.11 cluster with the openshift-ansible package (or a clone of the source repo) often fails partway through. After bringing up the control plane on the master nodes, the installer sets up the registry and router, and then deploys the web console for accessing the cluster through the web UI.

For deploying applications and exposing them publicly, openshift-ansible uses the variable openshift_master_default_subdomain, which serves as the default subdomain for routes to application services. However, even when this variable is set to a proper FQDN, the installation fails, reporting that the web console did not come up (CrashLoopBackOff).
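For reference, the variable is normally set in the Ansible inventory under the cluster-wide vars group; a minimal sketch (the hostnames and subdomain here are placeholders, not values from this cluster):

```ini
# inventory file for openshift-ansible
[OSEv3:vars]
# default subdomain used to build route hostnames for application services
openshift_master_default_subdomain=apps.example.com
```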

Going through the logs shows that the web console's public URL is not a valid URL. We provided an FQDN as the value to be used as the public URL, but the value actually used in the pod is the variable name itself, not the value we set for the variable:

adminConsolePublicURL: https://console.{openshift_master_default_subdomain}/
consolePublicURL: https://master.example.com:8443/console/

As a result, the web-console pod fails, complaining that the public URL is not a valid FQDN/IP. Digging deeper for the value, I found that the web console pod reads its configuration from a ConfigMap mounted into the container.

$ oc describe deploy/webconsole -n openshift-web-console
...
      Mounts:
          /var/serving-cert from serving-cert (rw)
          /var/webconsole-config from webconsole-config (rw)
  Volumes:
   serving-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  webconsole-serving-cert
    Optional:    false
   webconsole-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      webconsole-config
    Optional:  false

Now we need to find the public URL value in the ConfigMap and replace it with the proper FQDN.

$ oc describe cm/webconsole-config -n openshift-web-console

...
apiVersion: webconsole.config.openshift.io/v1
clusterInfo:
  adminConsolePublicURL: https://console.master.example.com.nip.io/
  consolePublicURL: https://master.example.com:8443/console/
  loggingPublicURL: ''
  logoutPublicURL: ''
  masterPublicURL: https://master.example.com:8443
  metricsPublicURL: ''
extensions:
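The edit can be done interactively with `oc edit cm/webconsole-config -n openshift-web-console`; the fix itself is just replacing the literal variable name with the FQDN. A minimal sketch of that substitution against a local stand-in file (the filename and subdomain are assumptions; on a real cluster you would first export the ConfigMap with `oc get cm/webconsole-config -n openshift-web-console -o yaml`):

```shell
# Stand-in for the exported ConfigMap data, containing the broken line
# with the unresolved variable name.
cat > webconsole-config.yaml <<'EOF'
adminConsolePublicURL: https://console.{openshift_master_default_subdomain}/
EOF

# The FQDN that should have been substituted by the installer.
subdomain="master.example.com.nip.io"

# Replace the literal variable reference with the real subdomain.
sed -i "s/{openshift_master_default_subdomain}/${subdomain}/" webconsole-config.yaml

cat webconsole-config.yaml
```

On the cluster, the corrected file would then be applied back with `oc apply -f webconsole-config.yaml`.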

Once we update the ConfigMap, the ReplicaSet's next retry to run the pod picks up the proper value and the web console comes up. :)