Nowadays, logs are everything. Analysing logs helps us understand how applications behave at runtime. But wait, how do we manage such a huge volume of logs? For managing logs and generating meaningful insights from them, AWS CloudWatch is a good place to go. And for understanding the behaviour of a Kubernetes or OpenShift cluster, the logs it produces are a good instrument. To walk through this, we will take an OpenShift cluster as the example here.
So, what logs are we talking about?
- Application Logs: logs from the different applications running in the pods, mainly whatever they print to stdout or stderr.
- Infra Logs: logs from the nodes about resources like CPU, memory, disk, and network, along with their usage details.
- Audit Logs: logs from Kubernetes authentication and resource changes: RBAC updates and access to different resources by different users and roles.
Fluentd as log collector: Fluentd is used as the collector of logs from all the nodes in an OpenShift cluster. It picks up the stdout and stderr logs from the containers, the files under /var/log, the journald logs, and any other logs from the applications.
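Under the hood, Fluentd's tail input plugin is what reads those files. A minimal sketch of a ConfigMap carrying such a source is below; the ConfigMap name and paths are assumptions, the JSON parser assumes Docker-style log files (CRI-O writes a different format), and on OpenShift the cluster-logging operator normally generates this configuration for you.

```yaml
# Hypothetical ConfigMap with a minimal fluent.conf that tails container logs.
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config            # name is an assumption
  namespace: openshift-logging
data:
  fluent.conf: |
    # Tail the stdout/stderr log files of every container on the node.
    <source>
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/fluentd-containers.log.pos
      tag kubernetes.*
      read_from_head true
      <parse>
        @type json                # assumes Docker-style JSON log files
      </parse>
    </source>
```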
How does Fluentd get these logs?
To get access to all the logs across the whole system, Fluentd is deployed as a DaemonSet in the cluster. A DaemonSet schedules an identically configured pod on every node. With that configuration, these pods then read all the logs from the cluster nodes.
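A minimal sketch of such a DaemonSet is below. It mounts the node's /var/log and the ConfigMap from the previous section; the names, image tag, and mounts are assumptions, and on OpenShift the cluster-logging operator deploys the collector DaemonSet for you.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: openshift-logging
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      serviceAccountName: fluentd               # needs read access to pod metadata
      tolerations:
        - operator: Exists                      # so pods also land on master/infra nodes
      containers:
        - name: fluentd
          image: fluent/fluentd-kubernetes-daemonset:v1-debian-cloudwatch  # tag is illustrative
          volumeMounts:
            - name: varlog
              mountPath: /var/log
              readOnly: true
            - name: config
              mountPath: /fluentd/etc           # where the image expects fluent.conf
      volumes:
        - name: varlog
          hostPath:
            path: /var/log                      # node logs, including /var/log/containers
        - name: config
          configMap:
            name: fluentd-config                # the ConfigMap sketched above
```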
Forwarding the logs
Once all the logs are collected by the DaemonSet, they go into the logging operator's pipeline. Here we configure the pipelines to forward the logs (a sketch of the forwarder configuration follows the list below).
These pipelines have two sections:
- input
  - application
  - infra
  - audit
- output
  - AWS CloudWatch
  - Splunk
  - others, like ELK
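With the OpenShift cluster-logging operator, these pipelines are declared in a ClusterLogForwarder resource (logging.openshift.io/v1 API). Below is a minimal sketch that forwards all three input types to a CloudWatch output; the output name, pipeline name, secret name, and region are assumptions.

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
    - name: cw                      # output: AWS CloudWatch
      type: cloudwatch
      cloudwatch:
        groupBy: logType            # one log group per log type
        region: us-east-1           # pick your own region
      secret:
        name: cw-secret             # AWS credentials (see below)
  pipelines:
    - name: all-to-cloudwatch
      inputRefs:                    # the input section: application, infra, audit
        - application
        - infrastructure
        - audit
      outputRefs:                   # the output section
        - cw
```

With groupBy: logType, CloudWatch keeps separate log groups for application, infrastructure, and audit logs.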
The AWS guide "Set up Fluentd as a DaemonSet to send logs to CloudWatch Logs" is a nice read for the setup.
Now we set up the output with the proper endpoint for our AWS region, which can be looked up in the AWS service endpoints documentation.
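The cloudwatch output above also needs credentials: it references a secret in the openshift-logging namespace that holds the AWS access keys. A minimal sketch, with the secret name matching the one assumed earlier:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cw-secret
  namespace: openshift-logging
stringData:
  aws_access_key_id: <your-access-key-id>
  aws_secret_access_key: <your-secret-access-key>
```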
Once everything is set up properly, we will be able to see the logs coming up in the log groups in AWS CloudWatch. We can group them or search through them using CloudWatch Logs Insights.
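For example, a CloudWatch Logs Insights query like the one below pulls the most recent log lines for a single namespace; the kubernetes.namespace_name field and the namespace value are assumptions based on the metadata Fluentd typically attaches to each record.

```
fields @timestamp, @message
| filter kubernetes.namespace_name = "my-app"
| sort @timestamp desc
| limit 20
```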