I previously blogged about Starboard and How to Install and Use Starboard to Protect Your Kubernetes Cluster. Those articles focused on vulnerability and configuration management; this time I want to turn my attention to runtime security observability using Tetragon.

Getting Started With Tetragon

The first step is to install it. The Tetragon website recommends deploying with Helm 3, so that's what we'll do. I'm deploying with just the default values for now.
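
Since I'm going with defaults, it's worth at least skimming what those defaults actually are; Helm will print them for you:

% helm show values cilium/tetragon | less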

% helm repo add cilium https://helm.cilium.io
"cilium" has been added to your repositories

% helm repo update

Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "aquasecurity" chart repository
...Successfully got an update from the "cilium" chart repository
Update Complete. ⎈Happy Helming!⎈

% helm install tetragon cilium/tetragon -n kube-system

NAME: tetragon
LAST DEPLOYED: Sun Dec 17 13:48:00 2023
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None

%

After that, we’ll check on the install status:

% kubectl rollout status -n kube-system ds/tetragon -w
daemon set "tetragon" successfully rolled out

% kubectl -n kube-system get pod
NAME                                 READY   STATUS    RESTARTS      AGE
cilium-6zrtw                         1/1     Running   0             22d
cilium-7fzqg                         1/1     Running   0             22d
cilium-operator-684fc7cc47-nqgs5     1/1     Running   0             22d
cilium-v4lt7                         1/1     Running   0             22d
...
tetragon-2twpw                       2/2     Running   0             87s
tetragon-b2d66                       2/2     Running   0             87s
tetragon-operator-5f5489bfd9-xkrcn   1/1     Running   0             87s
tetragon-wsgmg                       2/2     Running   0             87s
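
One thing worth noting: each tetragon pod shows 2/2 because, with the default values, the chart runs an export-stdout sidecar alongside the agent. If you'd rather tail the raw JSON event log than use the tetra CLI, that sidecar's logs are where it lives:

% kubectl logs -n kube-system ds/tetragon -c export-stdout -f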

It looks like we’re up and running so now what should I do?

Checking Out the Execution Monitoring

I’ve been using my prod-mysql-0 pod in the ctesting namespace as my guinea pig for everything. Let’s try getting the execution events for this pod:

% kubectl exec -ti -n kube-system ds/tetragon -c tetragon -- tetra getevents -o compact --pods prod-mysql-0
🚀 process ctesting/prod-mysql-0 /bin/bash -ec "password_aux="${MYSQL_ROOT_PASSWORD:-}"
if [[ -f "${MYSQL_ROOT_PASSWORD_FILE:-}" ]]; then
    password_aux=$(cat "$MYSQL_ROOT_PASSWORD_FILE")
fi
mysqladmin status -uroot -p"${password_aux}"
" 
🚀 process ctesting/prod-mysql-0 /opt/bitnami/mysql/bin/mysqladmin status -uroot -<super_secret_password> 
💥 exit    ctesting/prod-mysql-0 /opt/bitnami/mysql/bin/mysqladmin status -uroot -<super_secret_password> 0 
🚀 process ctesting/prod-mysql-0 /bin/bash -ec "password_aux="${MYSQL_ROOT_PASSWORD:-}"
if [[ -f "${MYSQL_ROOT_PASSWORD_FILE:-}" ]]; then
    password_aux=$(cat "$MYSQL_ROOT_PASSWORD_FILE")
fi
mysqladmin status -uroot -p"${password_aux}"
" 
🚀 process ctesting/prod-mysql-0 /opt/bitnami/mysql/bin/mysqladmin status -uroot -<super_secret_password> 
💥 exit    ctesting/prod-mysql-0 /opt/bitnami/mysql/bin/mysqladmin status -uroot -<super_secret_password> 0 

That’s awesome! What does it mean? The short version is that I’m seeing the commands executed by the container itself; this doesn't include queries run by clients connecting to the database. By the looks of it, this is the container's periodic status check: it tries to get the root password from the MYSQL_ROOT_PASSWORD environment variable, falls back to a password file if one is set, and then runs mysqladmin status. This all seems normal.
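
For context, that script is the pod's health probe. The Bitnami chart wires up liveness/readiness probes that run exactly the script captured above, roughly like this (paraphrased; I reconstructed it from the events rather than copying it from the chart source):

livenessProbe:
  exec:
    command:
      - /bin/bash
      - -ec
      - |
        password_aux="${MYSQL_ROOT_PASSWORD:-}"
        if [[ -f "${MYSQL_ROOT_PASSWORD_FILE:-}" ]]; then
            password_aux=$(cat "$MYSQL_ROOT_PASSWORD_FILE")
        fi
        mysqladmin status -uroot -p"${password_aux}"

Also worth noticing: the root password appears in plain text in the process arguments (redacted here as <super_secret_password>), which is a good reminder of just how much execution monitoring captures.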

Let’s try something a little abnormal. I kept the monitoring going and then exec'd into the container like this:

% kubectl exec -it -n ctesting  prod-mysql-0 -- /bin/bash 
Defaulted container "mysql" out of: mysql, git-sync (init)
I have no name!@prod-mysql-0:/$ curl www.google.com
bash: curl: command not found
I have no name!@prod-mysql-0:/$ wget www.google.com
bash: wget: command not found
I have no name!@prod-mysql-0:/$ ls -al
total 88
drwxr-xr-x   1 root root 4096 Dec  9 12:54 .
drwxr-xr-x   1 root root 4096 Dec  9 12:54 ..
drwxrwxr-x   2 root root 4096 Oct 29 16:30 .mysqlsh
drwxr-xr-x   2 root root 4096 Oct 29 16:28 bin
drwxr-xr-x   3 root root 4096 Oct 29 16:30 bitnami
drwxr-xr-x   2 root root 4096 Apr 18  2023 boot
drwxr-xr-x   5 root root  360 Dec  9 12:54 dev
drwxrwsrwx   3 root 1001 4096 Dec  9 12:54 docker-entrypoint-initdb.d
drwxr-xr-x   1 root root 4096 Dec  9 12:54 etc
drwxr-xr-x   2 root root 4096 Apr 18  2023 home
drwxr-xr-x   7 root root 4096 Oct 29 16:28 lib
drwxr-xr-x   2 root root 4096 Apr 18  2023 lib64
drwxr-xr-x   2 root root 4096 Apr 18  2023 media
drwxr-xr-x   2 root root 4096 Apr 18  2023 mnt
drwxrwxr-x   1 root root 4096 Dec  9 12:54 opt
dr-xr-xr-x 239 root root    0 Dec  9 12:54 proc
drwx------   2 root root 4096 Apr 18  2023 root
drwxr-xr-x   1 root root 4096 Dec  9 12:54 run
drwxr-xr-x   2 root root 4096 Oct 29 16:28 sbin
drwxr-xr-x   2 root root 4096 Apr 18  2023 srv
dr-xr-xr-x  13 root root    0 Dec  9 12:54 sys
drwxrwxrwt   1 root root 4096 Dec  9 12:54 tmp
drwxrwxr-x  11 root root 4096 Oct 29 16:29 usr
drwxr-xr-x  11 root root 4096 Oct 29 16:29 var
I have no name!@prod-mysql-0:/$ 
exit

Now let’s look at the execution events:

🚀 process ctesting/prod-mysql-0 /opt/bitnami/mysql/bin/mysqladmin status -uroot -<super_secret_password> 
💥 exit    ctesting/prod-mysql-0 /opt/bitnami/mysql/bin/mysqladmin status -uroot -<super_secret_password> 0 
🚀 process ctesting/prod-mysql-0 /bin/bash -ec "password_aux="${MYSQL_ROOT_PASSWORD:-}"
if [[ -f "${MYSQL_ROOT_PASSWORD_FILE:-}" ]]; then
    password_aux=$(cat "$MYSQL_ROOT_PASSWORD_FILE")
fi
mysqladmin status -uroot -p"${password_aux}"
" 
🚀 process ctesting/prod-mysql-0 /opt/bitnami/mysql/bin/mysqladmin status -uroot -<super_secret_password> 
💥 exit    ctesting/prod-mysql-0 /opt/bitnami/mysql/bin/mysqladmin status -uroot -<super_secret_password> 0 
💥 exit    ctesting/prod-mysql-0 /bin/bash  127                  
💥 exit    ctesting/prod-mysql-0 /bin/bash  127                  
🚀 process ctesting/prod-mysql-0 /bin/ls -al                              
💥 exit    ctesting/prod-mysql-0 /bin/ls -al 0                   
💥 exit    ctesting/prod-mysql-0 /bin/bash  0                    
🚀 process ctesting/prod-mysql-0 /bin/bash -ec "password_aux="${MYSQL_ROOT_PASSWORD:-}"
if [[ -f "${MYSQL_ROOT_PASSWORD_FILE:-}" ]]; then
    password_aux=$(cat "$MYSQL_ROOT_PASSWORD_FILE")
fi
mysqladmin status -uroot -p"${password_aux}"
" 

Now THAT seems like a lot more fun! We can see additional commands being executed that we would not expect from this container. The two bash exits with code 127 are the failed curl and wget attempts (127 is bash's exit code for "command not found"), and the ls -al shows up right after. This is useful.
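
As an aside, the compact output is just a rendering. Dropping -o compact gives you the full JSON events, which you can slice with jq; for example, a sketch like this (field names per the Tetragon event schema, as I understand it) pulls out just the binary and arguments for each exec:

% kubectl exec -ti -n kube-system ds/tetragon -c tetragon -- tetra getevents --pods prod-mysql-0 | jq -r 'select(.process_exec) | .process_exec.process | "\(.binary) \(.arguments)"'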

Checking Out the File Access Monitoring

Continuing the exploration, I also wanted to check out file access monitoring. This is not enabled by default; you have to provide a TracingPolicy YAML that lists the files to be monitored. For this initial runtime security observability exercise, I'll use the quickstart policy provided by Tetragon:

% kubectl apply -f https://raw.githubusercontent.com/cilium/tetragon/main/examples/quickstart/file_monitoring.yaml
tracingpolicy.cilium.io/file-monitoring-filtered created
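
For the curious, the heart of that policy is a kprobe on the kernel's security_file_permission hook, with selectors that only post events for sensitive path prefixes (plus, if I'm reading it right, a suffix match for shell profile files like .bashrc). Abridged, it looks something like this; the real file hooks a few more functions and paths:

apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: "file-monitoring-filtered"
spec:
  kprobes:
  - call: "security_file_permission"
    syscall: false
    return: true
    args:
    - index: 0
      type: "file" # the struct file *, used to resolve the path
    - index: 1
      type: "int"  # the access mask (read vs. write)
    returnArg:
      index: 0
      type: "int"
    returnArgAction: "Post"
    selectors:
    - matchArgs:
      - index: 0
        operator: "Prefix"
        values:
        - "/etc/shadow"
        - "/etc/sudoers"
        - "/root/.ssh"
        - "/boot"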

With this in place, I’ll try accessing one of the monitored files, /etc/shadow:

% kubectl exec -it -n ctesting  prod-mysql-0 -- /bin/bash
Defaulted container "mysql" out of: mysql, git-sync (init)
I have no name!@prod-mysql-0:/$ cat /etc/shadow
cat: /etc/shadow: Permission denied
I have no name!@prod-mysql-0:/$ 
exit
command terminated with exit code 1

While I was running the above commands, I also ran the getevents command:

💥 exit    ctesting/prod-mysql-0 /bin/bash  1                    
🚀 process ctesting/prod-mysql-0 /bin/bash                                
📚 read    ctesting/prod-mysql-0 /bin/bash /etc/bash.bashrc               
📚 read    ctesting/prod-mysql-0 /bin/bash /etc/bash.bashrc               
🚀 process ctesting/prod-mysql-0 /bin/bash -ec "password_aux="${MYSQL_ROOT_PASSWORD:-}"

I only see bash running; there are no read events for /etc/shadow because the read never actually happened. Since I was denied access before anything could be read, there were no read-based events to log. I wonder what happens if we try to write something?

% kubectl exec -it -n ctesting  prod-mysql-0 -- /bin/bash
Defaulted container "mysql" out of: mysql, git-sync (init)
I have no name!@prod-mysql-0:/$ echo "blah" /etc/bash.bashrc 
blah /etc/bash.bashrc
I have no name!@prod-mysql-0:/$ echo "blah" >> /etc/bash.bashrc 
bash: /etc/bash.bashrc: Permission denied
I have no name!@prod-mysql-0:/$ ls -al /etc/passwd
-rw-r--r-- 1 root root 922 Apr 18  2023 /etc/passwd
I have no name!@prod-mysql-0:/$ cd /root
bash: cd: /root: Permission denied
I have no name!@prod-mysql-0:/$ cd 
I have no name!@prod-mysql-0:/$ ls -al
total 88
drwxr-xr-x   1 root root 4096 Dec  9 12:54 .
drwxr-xr-x   1 root root 4096 Dec  9 12:54 ..
drwxrwxr-x   2 root root 4096 Oct 29 16:30 .mysqlsh
drwxr-xr-x   2 root root 4096 Oct 29 16:28 bin
drwxr-xr-x   3 root root 4096 Oct 29 16:30 bitnami
drwxr-xr-x   2 root root 4096 Apr 18  2023 boot
drwxr-xr-x   5 root root  360 Dec  9 12:54 dev
drwxrwsrwx   3 root 1001 4096 Dec  9 12:54 docker-entrypoint-initdb.d
drwxr-xr-x   1 root root 4096 Dec  9 12:54 etc
drwxr-xr-x   2 root root 4096 Apr 18  2023 home
drwxr-xr-x   7 root root 4096 Oct 29 16:28 lib
drwxr-xr-x   2 root root 4096 Apr 18  2023 lib64
drwxr-xr-x   2 root root 4096 Apr 18  2023 media
drwxr-xr-x   2 root root 4096 Apr 18  2023 mnt
drwxrwxr-x   1 root root 4096 Dec  9 12:54 opt
dr-xr-xr-x 246 root root    0 Dec  9 12:54 proc
drwx------   2 root root 4096 Apr 18  2023 root
drwxr-xr-x   1 root root 4096 Dec  9 12:54 run
drwxr-xr-x   2 root root 4096 Oct 29 16:28 sbin
drwxr-xr-x   2 root root 4096 Apr 18  2023 srv
dr-xr-xr-x  13 root root    0 Dec  9 12:54 sys
drwxrwxrwt   1 root root 4096 Dec  9 12:54 tmp
drwxrwxr-x  11 root root 4096 Oct 29 16:29 usr
drwxr-xr-x  11 root root 4096 Oct 29 16:29 var
I have no name!@prod-mysql-0:/$ touch .bashrc
touch: cannot touch '.bashrc': Permission denied
I have no name!@prod-mysql-0:/$ cd tmp
I have no name!@prod-mysql-0:/tmp$ touch .bashrc
I have no name!@prod-mysql-0:/tmp$ echo "blah" >> .bashrc 
I have no name!@prod-mysql-0:/tmp$ cat .bashrc 
blah
I have no name!@prod-mysql-0:/tmp$ rm .bashrc 
I have no name!@prod-mysql-0:/tmp$ 

It looks like I get denied just about everything, which is a good thing. I was able to write a .bashrc file to /tmp, though, so let's see what that looks like:

🚀 process ctesting/prod-mysql-0 /bin/cat .bashrc                         
📚 read    ctesting/prod-mysql-0 /bin/cat /tmp/.bashrc                    
📚 read    ctesting/prod-mysql-0 /bin/cat /tmp/.bashrc                    
📚 read    ctesting/prod-mysql-0 /bin/cat /tmp/.bashrc                    
📚 read    ctesting/prod-mysql-0 /bin/cat /tmp/.bashrc                    
💥 exit    ctesting/prod-mysql-0 /bin/cat .bashrc 0              
🚀 process ctesting/prod-mysql-0 /bin/rm .bashrc                          
💥 exit    ctesting/prod-mysql-0 /bin/rm .bashrc 0               
💥 exit    ctesting/prod-mysql-0 /bin/bash  0                    

That’s cool: I can see read operations happening on that file. This is useful for finding out whether things are being read or written to.
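
As an aside, /tmp/.bashrc got picked up here because the quickstart policy appears to match shell profile files by suffix, not just by the sensitive path prefixes. If you want to watch paths specific to your own workload, the same pattern works; here's a minimal sketch for keeping an eye on the MySQL data directory (the path is an example based on Bitnami's default layout, so adjust it to wherever your chart actually mounts data):

apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: "file-monitoring-mysql-data"
spec:
  kprobes:
  - call: "security_file_permission"
    syscall: false
    return: true
    args:
    - index: 0
      type: "file"
    - index: 1
      type: "int"
    returnArg:
      index: 0
      type: "int"
    returnArgAction: "Post"
    selectors:
    - matchArgs:
      - index: 0
        operator: "Prefix"
        values:
        - "/bitnami/mysql/data" # example path, adjust for your deployment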

Checking Out Kubernetes Cluster Network Access Monitoring

Now that I have execution and file access monitoring in place, I want to see what's going on in my cluster's network. Before configuring this, we need to gather some details: the pod CIDR and service CIDR in use. As mentioned in many of my posts, I'm running on DigitalOcean, so I'll issue the command below to get my pod CIDRs (the Tetragon instructions assume a different provider and find this data elsewhere):

% kubectl get nodes -o json |jq '.items[].metadata.annotations."io.cilium.network.ipv4-pod-cidr"'                  
"10.0.0.0/25"
"10.0.0.128/25"
"10.0.1.0/25"

In order to put these addresses to work, I downloaded Tetragon's template for this policy and added my CIDRs to the list. The gcloud comments at the top are leftovers from the template, which assumes GKE:

# output of "gcloud container clusters describe tetragon-benchmarking-oss --zone=us-west2-a | grep -e clusterIpv4Cidr -e servicesIpv4Cidr"
# clusterIpv4Cidr: 10.44.0.0/14
#  clusterIpv4Cidr: 10.44.0.0/14
#  clusterIpv4CidrBlock: 10.44.0.0/14
#  servicesIpv4Cidr: 10.48.0.0/20
#  servicesIpv4CidrBlock: 10.48.0.0/20
# servicesIpv4Cidr: 10.48.0.0/20

# For more information see: https://docs.isovalent.com/user-guide/sec-ops-visibility/workload-identity/index.html#egress-flow-to-suspicious-external-ip
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: "monitor-network-activity-outside-cluster-cidr-range"
spec:
  kprobes:
  - call: "tcp_connect"
    syscall: false
    args:
    - index: 0
      type: "sock"
    selectors:
    - matchArgs:
      - index: 0
        operator: "NotDAddr"
        values:
        - 127.0.0.1
        - 10.0.0.0/25
        - 10.0.0.128/25
        - 10.0.1.0/25

I then applied this to my cluster:

% kubectl apply -f network_egress_cluster.yaml
tracingpolicy.cilium.io/monitor-network-activity-outside-cluster-cidr-range created
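
Since a TracingPolicy is just a custom resource, you can confirm it landed the usual way:

% kubectl get tracingpolicies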

I then checked the events again while running curl www.google.com from another pod (nginx-1 in the default namespace):

🚀 process default/nginx-1 /usr/bin/curl www.google.com                   
🔌 connect default/nginx-1 /usr/bin/curl tcp 10.244.0.223:52846 -> 142.250.189.196:80 
💥 exit    default/nginx-1 /usr/bin/curl www.google.com 0        

This is working so that’s great!
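
Compact output is handy for watching, but if I were shipping these events somewhere for alerting, I'd filter the JSON form instead. A sketch (again, field names as I understand the event schema) that pulls out just the connect destinations:

% kubectl exec -ti -n kube-system ds/tetragon -c tetragon -- tetra getevents | jq -r 'select(.process_kprobe.function_name == "tcp_connect") | .process_kprobe | "\(.process.pod.namespace)/\(.process.pod.name) -> \(.args[0].sock_arg.daddr):\(.args[0].sock_arg.dport)"'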

Filtering Out the Noise in Networking Events

I wasn’t feeling confident that I had the right filters in place. This is supposed to be an egress filter, so I want to ignore internal traffic. Let's get our events again, grepping for connect:

% kubectl exec -ti -n kube-system ds/tetragon -c tetragon -- tetra getevents -o compact |grep connect
🔌 connect wordpress/nginx-int-b8cccd77-npqkn /usr/bin/ssh tcp 10.0.0.139:42728 -> 192.30.255.113:22 
🔌 connect default/log-gen-59f94c9d86-h299t /usr/bin/ssh tcp 10.0.0.253:49110 -> 192.30.255.113:22 
🔌 connect wordpress/nginx-npp-588cc6b9cd-g2x5c /usr/bin/ssh tcp 10.0.0.221:33906 -> 192.30.255.113:22 
🔌 connect default/log-gen-59f94c9d86-h299t /usr/bin/ssh tcp 10.0.0.253:49126 -> 192.30.255.113:22 
🔌 connect default/nginx-1 /usr/bin/ssh tcp 10.0.0.223:35834 -> 192.30.255.113:22 
🔌 connect default/log-gen-59f94c9d86-h299t /usr/bin/ssh tcp 10.0.0.253:49140 -> 192.30.255.113:22 
🔌 connect wordpress/nginx-int-b8cccd77-npqkn /usr/bin/ssh tcp 10.0.0.139:42734 -> 192.30.255.113:22 
🔌 connect wordpress/nginx-npp-588cc6b9cd-g2x5c /usr/bin/ssh tcp 10.0.0.221:33918 -> 192.30.255.113:22 
🔌 connect default/log-gen-59f94c9d86-h299t /usr/bin/ssh tcp 10.0.0.253:49150 -> 192.30.255.113:22 
🔌 connect default/nginx-1 /usr/bin/ssh tcp 10.0.0.223:35846 -> 192.30.255.113:22 
🔌 connect default/log-gen-59f94c9d86-h299t /usr/bin/ssh tcp 10.0.0.253:49152 -> 192.30.255.113:22 
🔌 connect wordpress/nginx-int-b8cccd77-npqkn /usr/bin/ssh tcp 10.0.0.139:42748 -> 192.30.255.113:22 
🔌 connect wordpress/nginx-npp-588cc6b9cd-g2x5c /usr/bin/ssh tcp 10.0.0.221:33928 -> 192.30.255.113:22 
🔌 connect default/log-gen-59f94c9d86-h299t /usr/bin/ssh tcp 10.0.0.253:49160 -> 192.30.255.113:22 
🔌 connect default/nginx-1 /usr/bin/ssh tcp 10.0.0.223:35848 -> 192.30.255.113:22 
🔌 connect default/log-gen-59f94c9d86-h299t /usr/bin/ssh tcp 10.0.0.253:49174 -> 192.30.255.113:22 
🔌 connect wordpress/nginx-int-b8cccd77-npqkn /usr/bin/ssh tcp 10.0.0.139:42764 -> 192.30.255.113:22 
🔌 connect default/log-gen-59f94c9d86-h299t /usr/bin/ssh tcp 10.0.0.253:49188 -> 192.30.255.113:22 
🔌 connect wordpress/nginx-npp-588cc6b9cd-g2x5c /usr/bin/ssh tcp 10.0.0.221:33930 -> 192.30.255.113:22 
🔌 connect wordpress/wordpress-0 /opt/bitnami/php/sbin/php-fpm tcp 10.0.0.207:41246 -> 10.1.35.208:3306 
🔌 connect wordpress/wordpress-0 /opt/bitnami/php/sbin/php-fpm tcp 10.0.0.207:41250 -> 10.1.35.208:3306 
🔌 connect default/nginx-1 /usr/bin/ssh tcp 10.0.0.223:54866 -> 192.30.255.113:22 

It looks like I’ve got some traffic showing up that's internal to my cluster, going to 10.1.35.208, so I'll want to ignore those events. I know these destinations are my service IPs, so I'll do a quick hack to list them all:

% kubectl get svc -A |awk '{print $4}'
CLUSTER-IP
10.1.108.184
10.1.80.222
10.1.5.72
None
10.1.196.0
None
10.1.221.106
10.1.141.72
10.1.0.1
10.1.237.217
10.1.212.243
10.1.73.186
10.1.102.200
10.1.36.162
10.1.190.191
10.1.67.48
10.1.193.194
10.1.161.160
10.1.122.40
10.1.0.10
None
None
None
None
None
None
10.1.249.94
10.1.250.203
10.1.217.91
10.1.115.65
10.1.56.74
10.1.6.18
10.1.36.181
10.1.35.208
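
If you don't trust eyeballing it, there's a handy trick to get the exact service CIDR from the API server: ask it to create a service with a deliberately invalid ClusterIP, and the rejection message tells you the valid range (nothing actually gets created, since the request fails):

% kubectl create service clusterip check-range --tcp=80:80 --clusterip=255.255.255.255

The error should end with something along the lines of "The range of valid IPs is 10.1.0.0/16".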

From a quick look, I can see these all fall within 10.1.0.0/16, so I'll just add that to my YAML:

# For more information see: https://docs.isovalent.com/user-guide/sec-ops-visibility/workload-identity/index.html#egress-flow-to-suspicious-external-ip
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: "monitor-network-activity-outside-cluster-cidr-range"
spec:
  kprobes:
  - call: "tcp_connect"
    syscall: false
    args:
    - index: 0
      type: "sock"
    selectors:
    - matchArgs:
      - index: 0
        operator: "NotDAddr"
        values:
        - 127.0.0.1
        - 10.0.0.0/25
        - 10.0.0.128/25
        - 10.0.1.0/25
        - 10.1.0.0/16

After editing that, I do another apply:

% kubectl apply -f network_egress_cluster.yaml                                                                        
tracingpolicy.cilium.io/monitor-network-activity-outside-cluster-cidr-range configured

Let’s check the events again:

% kubectl exec -ti -n kube-system ds/tetragon -c tetragon -- tetra getevents -o compact |grep connect
🔌 connect wordpress/nginx-int-b8cccd77-npqkn /usr/bin/ssh tcp 10.244.0.139:53492 -> 192.30.255.112:22 
🔌 connect default/log-gen-59f94c9d86-h299t /usr/bin/ssh tcp 10.244.0.253:51674 -> 192.30.255.112:22 
🔌 connect default/nginx-1 /usr/bin/ssh tcp 10.244.0.223:46870 -> 192.30.255.112:22 
🔌 connect wordpress/nginx-npp-588cc6b9cd-g2x5c /usr/bin/ssh tcp 10.244.0.221:42074 -> 192.30.255.113:22 
🔌 connect default/log-gen-59f94c9d86-h299t /usr/bin/ssh tcp 10.244.0.253:51676 -> 192.30.255.112:22 
🔌 connect default/log-gen-59f94c9d86-h299t /usr/bin/ssh tcp 10.244.0.253:35328 -> 192.30.255.113:22 
🔌 connect wordpress/nginx-int-b8cccd77-npqkn /usr/bin/ssh tcp 10.244.0.139:53500 -> 192.30.255.112:22 
🔌 connect default/nginx-1 /usr/bin/ssh tcp 10.244.0.223:46884 -> 192.30.255.112:22 
🔌 connect default/log-gen-59f94c9d86-h299t /usr/bin/ssh tcp 10.244.0.253:35334 -> 192.30.255.113:22 
🔌 connect wordpress/nginx-npp-588cc6b9cd-g2x5c /usr/bin/ssh tcp 10.244.0.221:33802 -> 192.30.255.112:22 
🔌 connect default/log-gen-59f94c9d86-h299t /usr/bin/ssh tcp 10.244.0.253:51680 -> 192.30.255.112:22 
🔌 connect wordpress/nginx-int-b8cccd77-npqkn /usr/bin/ssh tcp 10.244.0.139:53506 -> 192.30.255.112:22 
🔌 connect default/log-gen-59f94c9d86-h299t /usr/bin/ssh tcp 10.244.0.253:51692 -> 192.30.255.112:22 
🔌 connect default/nginx-1 /usr/bin/ssh tcp 10.244.0.223:47142 -> 192.30.255.113:22 
🔌 connect default/log-gen-59f94c9d86-h299t /usr/bin/ssh tcp 10.244.0.253:49710 -> 192.30.255.113:22 
🔌 connect wordpress/nginx-npp-588cc6b9cd-g2x5c /usr/bin/ssh tcp 10.244.0.221:44334 -> 192.30.255.113:22 
🔌 connect default/log-gen-59f94c9d86-h299t /usr/bin/ssh tcp 10.244.0.253:35322 -> 192.30.255.112:22 
🔌 connect wordpress/nginx-int-b8cccd77-npqkn /usr/bin/ssh tcp 10.244.0.139:34810 -> 192.30.255.113:22 
🔌 connect default/nginx-1 /usr/bin/ssh tcp 10.244.0.223:38146 -> 192.30.255.113:22 
🔌 connect kube-system/do-node-agent-m2h2g /bin/do-agent tcp 206.189.208.210:50788 -> 169.254.169.254:80 
🔌 connect default/log-gen-59f94c9d86-h299t /usr/bin/ssh tcp 10.244.0.253:49722 -> 192.30.255.113:22 
🔌 connect wordpress/nginx-npp-588cc6b9cd-g2x5c /usr/bin/ssh tcp 10.244.0.221:44344 -> 192.30.255.113:22 
🔌 connect default/log-gen-59f94c9d86-h299t /usr/bin/ssh tcp 10.244.0.253:49732 -> 192.30.255.113:22 
🔌 connect wordpress/nginx-int-b8cccd77-npqkn /usr/bin/ssh tcp 10.244.0.139:34820 -> 192.30.255.113:22 
🔌 connect default/nginx-1 /usr/bin/ssh tcp 10.244.0.223:38158 -> 192.30.255.113:22 
🔌 connect default/log-gen-59f94c9d86-h299t /usr/bin/ssh tcp 10.244.0.253:49740 -> 192.30.255.113:22 
🔌 connect wordpress/nginx-npp-588cc6b9cd-g2x5c /usr/bin/ssh tcp 10.244.0.221:44350 -> 192.30.255.113:22 
🔌 connect default/log-gen-59f94c9d86-h299t /usr/bin/ssh tcp 10.244.0.253:49744 -> 192.30.255.113:22 

OK, so this looks pretty good. The remaining connections make sense for this cluster: the 192.30.255.x destinations on port 22 are GitHub addresses (presumably git operations over SSH), and 169.254.169.254 is the DigitalOcean metadata endpoint queried by do-node-agent.
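
One tuning idea to file away for the follow-up: since that ssh traffic is expected, the policy could be scoped by binary as well as by destination. Tetragon selectors support matchBinaries, so a sketch like this (the same NotDAddr match as before, but only reporting connects made by the listed binaries) would cut the noise considerably:

spec:
  kprobes:
  - call: "tcp_connect"
    syscall: false
    args:
    - index: 0
      type: "sock"
    selectors:
    - matchBinaries:
      - operator: "In"
        values:
        - "/usr/bin/curl"
        - "/bin/bash"
      matchArgs:
      - index: 0
        operator: "NotDAddr"
        values:
        - 127.0.0.1
        - 10.0.0.0/25
        - 10.0.0.128/25
        - 10.0.1.0/25
        - 10.1.0.0/16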

Wrapping Up

OK, so this blog post was just supposed to be a quick introduction to runtime security observability, and I realize that it got away from me 🙂

It looks like I’ve got events coming through the system, but I think I need to revisit tuning them. I'll write another blog post that covers some additional tuning for this beast.