The last three posts focused on calculating our TCAM requirements. Now that we understand how TCAM is allocated, it’s time to create class and policy maps for QoS classification and marking. I won’t display the whole set of ACLs and the QoS policy creation in this post – there will be a separate post detailing queueing on the 9k – but I will demonstrate how we may have to steal space from one or more TCAM regions in order to fulfill our requirements.
I’ve created my IPv4 and IPv6 ACLs and the required class and policy maps. When I applied the service-policy to a vPC interface, I was presented with an unfortunate message:
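As a rough sketch, the classification and marking configuration follows the usual NX-OS pattern below. The ACL contents, class-map, and policy-map names here are hypothetical placeholders – the real policy is larger and is omitted from this post:

```
! Hypothetical IPv4 and IPv6 classification ACLs
ip access-list ACL-V4-VOICE
  permit udp any any range 16384 32767
ipv6 access-list ACL-V6-VOICE
  permit udp any any range 16384 32767

! Match either address family in one class
class-map type qos match-any CM-VOICE
  match access-group name ACL-V4-VOICE
  match access-group name ACL-V6-VOICE

! Mark matched traffic on ingress
policy-map type qos PM-INGRESS-MARK
  class CM-VOICE
    set dscp 46

! Apply to the vPC member port-channel
interface port-channel10
  service-policy type qos input PM-INGRESS-MARK
```

Each ACL entry referenced by an applied service-policy consumes TCAM in the relevant QoS region, which is what triggers the error below.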
2017 Feb 13 18:18:52 switch-9k-1 %$ VDC-1 %$ %ACLQOS-SLOT1-2-ACLQOS_OOTR: Tcam resource exhausted: Ingress L2 QOS [ing-l2-qos]
What happened? Remember from the first post, our default TCAM allocation for the ing-l2-qos region was rather small:
switch-9k-1# show hardware access-list tcam region
  NAT ACL[nat] size = 0
  Ingress PACL [ing-ifacl] size = 0
  VACL [vacl] size = 0
  Ingress RACL [ing-racl] size = 1536
  Ingress RBACL [ing-rbacl] size = 0
  Ingress L2 QOS [ing-l2-qos] size = 512
  Ingress L3/VLAN QOS [ing-l3-vlan-qos] size = 512
  Ingress SUP [ing-sup] size = 512
  Ingress L2 SPAN filter [ing-l2-span-filter] size = 256
  Ingress L3 SPAN filter [ing-l3-span-filter] size = 256
  Ingress FSTAT [ing-fstat] size = 0
  span [span] size = 512
  Egress RACL [egr-racl] size = 512
  Egress SUP [egr-sup] size = 256
  Ingress Redirect [ing-redirect] size = 0
And now it becomes obvious why we need to invest the time to determine our QoS TCAM requirements up front. I calculated that I would need 511 IPv4 and 518 IPv6 entries, for a total of 1029 TCAM entries – more than double the 512-entry default allocation of the ing-l2-qos region.
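One way out is to steal space from a region we aren’t fully using and give it to ing-l2-qos. The sketch below assumes ing-racl can spare the entries and that regions on this platform are carved in 256-entry increments, so 1029 required entries round up to a 1280-entry region. Verify the granularity and your own region usage before committing to these numbers:

```
! Shrink ing-racl from 1536 to 768, freeing 768 entries
switch-9k-1(config)# hardware access-list tcam region ing-racl 768

! Grow ing-l2-qos from 512 to 1280 (>= our 1029 calculated entries)
switch-9k-1(config)# hardware access-list tcam region ing-l2-qos 1280

! TCAM region changes take effect only after saving and reloading
switch-9k-1(config)# copy running-config startup-config
switch-9k-1(config)# reload
```

After the reload, `show hardware access-list tcam region` should reflect the new sizes, and the service-policy can be applied without exhausting the region.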