Friday, January 26, 2024

CCIE Coffee Blogs: #3268 Kujtim Tali

This is the very first post in a series about Albanians who have achieved the much-coveted CCIE status. It's meant to provide some background for the CCIE Hall of Fame for Albania and Kosovo. I've decided to call the series CCIE Coffee Blogs, as most of the content here will be non-technical. The goal is to inspire young Albanians to pursue paths similar to those of our much respected guests. DALL-E helped me turn my idea into the following logo.

Fig.1 CCIE Coffee Blogs

Meet Our First Guest: Kujtim Tali

Our first guest is Kujtim Tali. Kujtim was the first Albanian to achieve CCIE status, back in 1997. You can tell from his number, 3268, that only a couple of thousand people had passed the exam at the time, while today over 70,000 have. I've had the honor of asking him some questions about his background as well as his current insights.


Fig.2 Kujtim Tali CCIE #3268

1. How did you start your career in networking? What is your educational background?

I began my career in networking after completing my undergraduate degree in Electrical Engineering at the University of Prishtina. Following this, I pursued a Master's degree in Computer Engineering at the Illinois Institute of Technology.

2. What inspired you to pursue the CCIE certification? What were the biggest challenges you faced?

My inspiration to pursue the CCIE certification came during the era when IP was becoming a dominant force in networking. I started with Novell NetWare and was among the first to earn the Certified Novell Engineer (CNE) and Master CNE certifications. Transitioning to IP networks, X.25, Frame Relay, ATM, and Cisco products was a natural step for me. The biggest challenge I faced was the scarcity of materials and equipment. I was fortunate to work with major telecom operators and equipment vendors like AT&T, Juniper, Tellabs, and Cisco.

3. What is your current job role? How does your CCIE certification contribute to your daily work?

Currently I am founder and chairman of the board of the telecom infrastructure services company 3CIS. I attribute a share of our success to the experience I gained after getting my CCIE certification. This certification extended my roles at major telecom companies, laying a solid foundation for my professional journey. These roles were important in building an extensive network of contacts and expertise, which later became crucial for 3CIS. In essence, the CCIE certification was not just a personal achievement; it was one of the cornerstones that enabled me to establish and grow 3CIS into what it is today. The relationships and experience I gained as a result of this certification played a central role in acquiring the clientele that continues to be fundamental to 3CIS.

4. What advice would you give to those aspiring to achieve the CCIE certification? 

My advice to aspiring CCIEs or really anyone would be to pursue your dreams and not to lose focus on what you want to build for yourself, no matter the obstacles. 

5. What are your professional goals for the future? How do you plan to advance in your career?

My goal for the future is to continue mentoring and supporting upcoming engineers in achieving their goals. It is a commitment of mine to contribute to growth and success in the field.

6.  Can you share an interesting fact about yourself or a hobby you enjoy outside of work?

Throughout my career, I've had commitments all over the world, leading to over four million miles of travel. This has made me miss a lot of valuable family time. So my current hobby is simply my family.

In talking with Kujtim, it's clear that combining the CCIE with dedication to building a professional network is the way forward. Through the 'CCIE Coffee Blogs', I look forward to bringing you more inspiring stories from the CCIE Hall of Fame, focusing on the achievements of Albanian professionals in this challenging field of networking.


Wednesday, December 13, 2023

Shared Responsibility Model: #cissp and pizza

This blog post covers the shared responsibility model, with a focus on the cloud. There is nothing better than explaining a complex topic through an analogy to something we all love: pizza.

What is the Shared Responsibility Model?

The shared responsibility model is used by service providers, primarily in the cloud, to define who is responsible for which services and resources. It's very important to understand, as there is some misconception among customers about who is responsible for information, data, the network, the operating system and the physical elements when their workload is shifted to the cloud. The following diagram illustrates the areas of responsibility between Microsoft and customers when deploying resources in the cloud or on-premises.


Fig.1 Shared Responsibility Model

The key thing to understand in this model is that moving a workload to the cloud doesn't move your responsibility fully to the cloud provider. In the diagram in Fig.1 you can see that the model with the least responsibility on your side is SaaS, while the one where you are fully responsible is when your resources are 100% on-premises. I meet customers almost daily who don't understand this model. Some of them think that moving a workload to the cloud automatically means that backup, security and redundancy are built in. Even though the cloud does facilitate these services, it is your responsibility to activate and maintain them. The worst scenario I've personally experienced was a relatively big company in Europe with a workload in a datacenter in the UK, without any backup or security, that ended up deleting the whole workload with one click. This was possible because that datacenter had a "red button" that would decommission everything at once. You can imagine how hard it was to restore the services afterward.

Despite the simplified diagram in the picture above, the reality is that a lot of companies still don't fully understand the model and have trouble translating it to their own services. To simplify this, we will use another well-known shared responsibility model, based on pizza.

An analogy we all love

Let's imagine that we have built Pizza as a Service. You can get it delivered (fully managed service), pick up a pizza that is ready to bake at the closest supermarket (partially managed), or just go all in and make one from scratch at home (unmanaged service). This is similar to the choices you have with IT services:

Fig.2 Pizza as a Service

Fully Managed: Here, the service provider handles everything. In the pizza world, this is like ordering a pizza by phone and getting it delivered, hot and ready to eat. In IT services, this corresponds to fully managed offerings where the provider takes care of the infrastructure, security, and maintenance. This is similar to the SaaS model in the diagram above.

Partially Managed: This is us going to the closest supermarket and buying a ready-to-bake pizza. We are responsible for baking it at home, but the pizza is ready; we don't need to make the dough, the topping and so on. In IT service terms, this means the service provider manages the infrastructure, but you're responsible for some aspects of the configuration and security. This would be Infrastructure as a Service (IaaS) and Platform as a Service (PaaS).

Unmanaged: This is us making a pizza from scratch at home. We need all the ingredients, an oven, and the skills. Similarly, in an unmanaged IT service, you're responsible for all aspects. This would be us buying a server for our private datacenter, installing the OS, preparing the networking, installing the applications and so on. We have full responsibility for maintaining and updating it, and we need the skills in-house.
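
The three tiers above boil down to a lookup: for each layer of the stack, either you or the provider manages it. Here is a minimal sketch in Python; the layer names and the exact split are an illustrative simplification for this post, not a reproduction of any vendor's official diagram:

```python
# Illustrative responsibility matrix: who manages each layer per service model.
# The layers and the you/provider split are simplified for the pizza analogy.
RESPONSIBILITY = {
    #                (on-prem,   IaaS,       PaaS,       SaaS)
    "data":          ("you",     "you",      "you",      "you"),
    "applications":  ("you",     "you",      "you",      "provider"),
    "os":            ("you",     "you",      "provider", "provider"),
    "network":       ("you",     "you",      "provider", "provider"),
    "physical":      ("you",     "provider", "provider", "provider"),
}
MODELS = ("on-prem", "iaas", "paas", "saas")

def who_manages(layer: str, model: str) -> str:
    """Return 'you' or 'provider' for a given layer and service model."""
    return RESPONSIBILITY[layer][MODELS.index(model)]

# Whatever model you pick, your data remains your responsibility:
assert all(who_manages("data", m) == "you" for m in MODELS)
print(who_manages("os", "saas"))  # the provider patches the OS in SaaS
```

Notice the first row: no matter how much you outsource, the data row never flips to "provider", which is exactly the misconception described above.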

How does all this tie to CISSP?

As you might already know, CISSP is built on eight domains. Even though the shared responsibility model is not directly tied to a specific domain, it translates easily to elements of several of them.

The Security and Risk Management domain focuses on understanding and managing risk. In our analogy, we need to know what risks we take on at each level of service, whether fully managed, partially managed or unmanaged. It's very important to evaluate the security measures taken by the provider and the ones that we as customers need to implement.

The Asset Security domain, on the other hand, focuses primarily on protecting assets, which translates to data, applications and infrastructure. We need to ensure security measures have been taken for each asset, either by us or by the provider.

Security Architecture and Engineering is mostly focused on the design and implementation of security architectures. We need to understand each model, from SaaS, PaaS and IaaS to on-prem, so that we can evaluate the impact each one has on security responsibilities. This will help us design secure architectures wherever our workload resides.

Communication and Network Security is also critical in cloud services. We must ensure secure networking and transmission within the cloud and from the cloud to on-prem. We need to make sure that the pizza we get delivered (our data in transit) doesn't get messed up on the way to our door, and to understand how much of the network security is the provider's responsibility and how much is ours.

We could go on and draw further parallels from the remaining domains, but as long as you understand the model you should be able to make informed decisions, ensuring that both you as a customer and your provider play your part in maintaining and securing the environment.

Conclusion

In conclusion, the Shared Responsibility Model in cloud computing, much like our pizza analogy, reveals the importance of understanding the various layers of responsibility, whether you're choosing a cloud service or deciding what kind of pizza you are eating for dinner. Just as you would decide between ordering a fully prepared pizza or baking one from scratch, in the cloud environment, you need to make sure your choices take into consideration the security aspects that you will manage and the ones that your provider will. 

References

Shared Responsibility Model - Amazon Web Services (AWS)

Shared responsibility in the cloud - Microsoft Azure | Microsoft Learn

(ISC)2 CISSP Certified Information Systems Security Professional Official Study Guide

Friday, November 10, 2023

The evolution of switches

The evolution of switches: #cissp insight

Did you know that switches were introduced as early as the 19th century? Voice switches, also known as circuit switches, were the backbone of communication networks. Invented in the late 19th century, these switches established a dedicated circuit between two points for the duration of a call. While effective, this method was resource-intensive, limiting the number of simultaneous calls that could be handled. Early switches also required significant human intervention to establish a line, posing security and reliability challenges. In the next sections, we will dive into the most important elements of switches and the security concerns we face today.




Fig.1 The transition from voice switching to packet switching


When did it all start?

The 1970s and 1980s marked the beginning of a new era with the introduction of packet switching, a concept which initially began with the invention of ARPANET, the predecessor of the modern internet. Unlike circuit switching, packet switching sends data in smaller packets through various routes before reassembling at the destination. This approach optimizes network efficiency and resource utilization. 
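
The core idea of packet switching, splitting data into independently routed packets and reassembling them by sequence number at the destination, can be sketched in a few lines. This is a toy model for illustration, not a real protocol implementation:

```python
import random

def packetize(message: str, size: int = 4):
    """Split a message into (sequence number, payload) packets."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    """Order packets by sequence number and join the payloads."""
    return "".join(payload for _, payload in sorted(packets))

msg = "packet switching"
packets = packetize(msg)
random.shuffle(packets)  # packets may arrive out of order via different routes
assert reassemble(packets) == msg  # the destination still recovers the message
```

Because each packet carries its own sequence number, the network is free to route them independently and share links between many conversations, which is precisely what the benefits below build on.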

Our office museum has a telephone switchboard, which you can see in the following picture:



Fig.2 Office museum telephone switchboard

Why packet switching?

  1. Improved efficiency: Packet switching allows multiple communications to share the same network bandwidth, significantly enhancing network efficiency compared to the dedicated paths of voice switching. A comparable transition happening today would be the switchover from dark fiber to MPLS and later to SD-WAN. 
  2. Scalability: As the demand for data services grew, packet-switched networks proved to be highly scalable, accommodating more users and diverse data types without the need for expansion of the underlying infrastructure. 
  3. Cost Optimization: The shared network approach of packet switching reduces operational costs. It eliminates the need for dedicated circuits for each communication, lowering both capital and maintenance expenses. This is also similar to the transition to SD-WAN. 
  4. Additional capabilities: The switchover meant that the traditional circuits would also open up for other kinds of traffic than voice, which is basically what happened over the following years.
  5. Resilience and Flexibility: Along with opening up to other kinds of traffic, new capabilities were also introduced, like the ability of packet-switched networks to reroute data through multiple pathways, which enhances network resilience against failures and congestion.

What has changed?

Since then a lot has changed. Today we have previously unthinkable transmission speeds, from a few bps to gigabit or even terabit-per-second rates. The initial packet switches didn't support modern protocols like MPLS or QoS; their main purpose was basic data forwarding. Today we see switches that also do dynamic routing, have firewall capabilities and even VPN or IDS features. With the recent developments in AI we expect more machine learning and predictive analytics functionality, as well as the possibility to automate troubleshooting and optimize networks. It will also help ensure that these systems are configured in alignment with the organization's security policies.

What are some of the most common attacks against switches?

As this is a CISSP-focused post, it's natural to discuss the security aspects. Historically the focus was on threats external to the company, but now we need to implement zero-trust principles, as internal threats are equally important.


Fig 3. Switch Security


 Some of the security concerns we face today are:

  1. MAC flooding, where the attacker fills the MAC address table of the switch, causing it to behave like a hub and broadcast frames out all ports. A hacker could then sit on any port and capture the traffic on their PC. 
  2. ARP poisoning. In this case, the hacker sends fake ARP responses over the local area network to pose as one of the legitimate hosts. Years ago I experimented with an old rooted Android phone. By using an app to poison ARP, I was able to hijack sessions and change all the pictures on every website my colleagues visited at the office. Fun times...
  3. VLAN hopping. The hacker adds an extra VLAN tag to the packet (double tagging), so that when the outer VLAN tag is removed, the switch sees the hacker's tag and sends the traffic to a different VLAN than the original one. 
  4. STP manipulation. Knowing how Spanning Tree elects the root bridge, the hacker installs a switch in the network and makes it the root bridge, which then draws all traffic through it. This can be a devastating attack and can also happen by mistake. Many years ago I experienced outages in multiple cities because a misconfigured switch was connected by an end customer to the ISP network. Traffic for all those cities would then tend to go through this single location and get black-holed. The same behavior would come in handy to a potential attacker: with all traffic passing through their switch, they would have the possibility to perform man-in-the-middle attacks. 
  5. DoS and vulnerabilities. The attacker could either overload the switch with traffic or exploit a vulnerability. Vulnerabilities receive more and more focus nowadays, considering the growing threat landscape.
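
To make point 1 concrete, here is a toy model of a switch CAM table under a MAC flooding attack. This is a simplified simulation for illustration, not real switch code (a real switch also ages entries out, which the attacker's flood rate defeats):

```python
class ToySwitch:
    """Toy model of a switch CAM table, to illustrate MAC flooding."""
    def __init__(self, table_size: int):
        self.table_size = table_size
        self.mac_table = {}  # MAC address -> port

    def learn(self, mac: str, port: int):
        # Simplification: once the table is full, no new MACs can be learned.
        if mac in self.mac_table or len(self.mac_table) < self.table_size:
            self.mac_table[mac] = port

    def forward(self, dst_mac: str):
        """Return the egress port, or 'flood' when the destination is unknown."""
        return self.mac_table.get(dst_mac, "flood")

sw = ToySwitch(table_size=3)
sw.learn("aa:aa", 1)
sw.learn("bb:bb", 2)
# Attacker on port 5 floods bogus source MACs until the table is full:
for i in range(100):
    sw.learn(f"de:ad:{i:02x}", 5)
sw.learn("cc:cc", 3)        # a legitimate host can no longer be learned
print(sw.forward("cc:cc"))  # -> 'flood': frames go out every port, hub-style
```

Once legitimate destinations become "unknown", every frame is flooded out all ports, and the attacker simply listens.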

How do we mitigate them?

There are several mechanisms we can use to protect our switches. Here come some of the most important ones:
  1. Port Security. This is one of the first things that should be implemented in a modern network. Unauthorized devices should be prevented from connecting, and the number of MAC addresses learned through each port should be limited, to avoid MAC flooding. 
  2. Dynamic ARP Inspection. To prevent ARP poisoning and the session hijacking explained in point 2 above, ARP packets should be inspected and invalid ones blocked.
  3. Secure VLANs. There are several things to consider, but default and native VLANs should not be used, and access lists should be implemented to isolate traffic between networks. 
  4. STP Root Guard and BPDU Guard. These mechanisms prevent an unknown switch from connecting to our network and making devastating changes, like the ones explained in point 4 above.
  5. Regular updates. The only way to protect against vulnerabilities is to keep the software updated and reduce our exposure. Back in the old days, it was normal to brag about how many years a Cisco device had been up. This is no longer an option today; at best it should be mentioned as something to avoid. We still find gaps in this area when we analyze customer networks. The best option is to implement switches that can be updated automatically. 
  6. Network Segmentation. The network should be divided into smaller segments, like VLANs and private VLANs. Furthermore, the traffic between these segments should be limited by access lists. Ideally, the network should filter by TCP/UDP port and IP. 
There are many more mechanisms that can be implemented to increase security, which we will not address in this post, but having most of what is mentioned here in place is a good first step towards securing the internal domain. 
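
Port security from point 1 is the direct countermeasure to the MAC flooding described earlier: cap the MACs learned per port and err-disable the port on violation. Again a toy sketch for illustration, not vendor code:

```python
class SecurePort:
    """Toy model of port security: limit learned MACs, shut the port on violation."""
    def __init__(self, max_macs: int = 2):
        self.max_macs = max_macs
        self.learned = set()
        self.enabled = True  # a violation err-disables the port

    def learn(self, mac: str) -> bool:
        """Try to learn a source MAC; return False if the frame is rejected."""
        if not self.enabled:
            return False
        if mac in self.learned:
            return True
        if len(self.learned) >= self.max_macs:
            self.enabled = False  # violation: shut the port down
            return False
        self.learned.add(mac)
        return True

port = SecurePort(max_macs=2)
assert port.learn("aa:aa") and port.learn("bb:bb")  # legitimate hosts
port.learn("de:ad:01")   # an attacker's flood immediately trips the limit
assert not port.enabled  # port err-disabled; the CAM table stays intact
```

Instead of the whole switch degrading into a hub, only the offending port is taken out of service.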

Conclusion

Modern-day switches descend from an invention that dates back to the 19th century. A lot has happened since then, which testifies to the human effort and desire to improve. It also reflects an adaptation to the growing demands of a digitally connected world, offering improved efficiency, scalability, and expanded communication capabilities. The introduction of AI will push further towards simplification and increased capabilities in a more digitized world, and will also simplify their secure implementation in accordance with company policies. It will pose new security risks as well, which we need to address. It's important to implement security with zero-trust principles and take extra steps to secure the internal domain.

References

(ISC)2 CISSP Certified Information Systems Security Professional Official Study Guide: CISSP Domain 4: Communication and Network Security

General MS Best Practices - Cisco Meraki

Image courtesy: DALL-E

Thursday, October 19, 2023

Meraki Source NAT and IP aliases

Meraki Source NAT and IP Aliases Features: An Overview

This post provides a look into Meraki's NAT for inter-VLAN traffic. There have been some rumors in the forums discussing this feature, but since it's still a hidden/beta feature, there is no documentation available, which is why I decided to write about it.

What is Source NAT? (Beta)

In simple terms, Source NAT is a mechanism that modifies the source IP address of traffic as it moves between domains, whether it's between LAN and WAN, VLANs or VPN sites. This is particularly beneficial in environments where there's a need to mask the original source or destination IP address of a device in one domain when communicating with devices in a different domain.

What is IP Alias? (Beta)

I like the fact that Meraki has decided to call this IP Aliases, as it's basically destination NAT combined with proxy ARP. Our traffic is destined to an unused IP within our current VLAN, and the Meraki firewall translates the destination to an IP in another VLAN. The MX responds to ARP requests as if the IP were assigned to it (proxy ARP).

Activating the Features

Since they're beta features, both inter-VLAN Source NAT and IP Alias aren't immediately accessible. To enable them, you need to:
  1. Contact Meraki Support.
  2. Express interest in the beta feature and explain the specific scenario you are trying to solve. 
  3. Wait for it to be activated on your dashboard.
Remember, beta features are still under testing, so ensure you have proper backups and testing procedures before implementing them in production environments.

Configuring Source NAT and IP Alias on Meraki

Once the features are activated, the configuration is quite straightforward:

  1. Navigate to the Meraki Dashboard: Log in and choose the network you want to configure.
  2. Go to Security & SD-WAN -> Addressing & VLANs and choose the specific VLAN; in the use case below this would be the red VLAN. 
  3. Hit Next on the first two screens, which cover generic VLAN config and IPv4 config, and you will reach the Source NAT section shown in the configuration example below. 
  4. Enable Source NAT traffic and pick the Source NAT VLAN. 
  5. You have the possibility to enable IP aliases, which requires a source and a destination IP. Even though it's called the source, it's actually the destination as seen from the host that initiated the traffic. The destination IP is the IP in the target VLAN we aim to direct traffic towards.
  6. Save and Test: After configuring, save the changes and start testing to ensure that the NAT is functioning as expected.

Here comes an example configuration:

Fig.1 Source NAT and IP aliases configuration

Use cases

The most traditional use of Source NAT is translating our private IPs to a single public IP so that we can reach the internet. Another scenario is translating our traffic as it goes through a VPN, to avoid duplicate IPs between the two parties or simply to comply with addressing schemes. A less traditional use is translating source traffic between two networks in our own environment. It's this last scenario that we are interested in.

Let's consider two VLANs in our domain, where one is using the MX as its gateway, while the other one is using a third-party firewall, according to the following diagram. The Meraki MX has the blue and the red VLANs directly attached. Traffic going from the blue VLAN towards a device in the red VLAN would reach the destination without issues; however, return traffic would be sent to the third-party firewall instead of the MX, since the gateway for the red VLAN points to that one. The third-party firewall would then have the job of returning the traffic to us. For this to work, we would need a static route for the blue VLAN with the MX as next hop. Let's suppose that we don't control this firewall, so we can't configure the static route. 



 
Fig 1. Traffic with and without Source NAT on the MX

In this case we could use Source NAT. We would translate the source of the traffic to the MX's IP on the red VLAN. When the traffic reaches a device on the red VLAN, it thinks the sender resides within the same VLAN, so it sends the traffic back to the MX instead of its gateway, the third-party firewall. The configuration steps above show how to set this up on the MX. 

IP aliases, on the other hand, are used to hide the destination of the traffic. Instead of sending traffic directly to the destination, we send it to a local IP within our own VLAN. The MX uses proxy ARP to respond to the initiator of the traffic as if this IP were assigned to a local interface, and then translates it further to the destination VLAN/IP. You can see a simplified illustration in Fig.2 below. 

Fig 2. Traffic sent to the IP alias on the Blue VLAN
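
Both translations amount to simple rewrites of the IP header as the packet crosses the MX. Here is a minimal sketch of the two rewrites; all addresses and the alias mapping are made up for the example, not taken from a real MX configuration:

```python
# Toy illustration of the two translations described above (made-up addresses).
MX_RED_VLAN_IP = "10.2.0.1"              # MX interface IP in the red VLAN
ALIAS_MAP = {"10.1.0.250": "10.2.0.50"}  # unused blue-VLAN IP -> real red-VLAN host

def source_nat(packet: dict) -> dict:
    """Rewrite the source so red-VLAN hosts see the MX as the sender."""
    return {**packet, "src": MX_RED_VLAN_IP}

def ip_alias(packet: dict) -> dict:
    """Rewrite the destination from the local alias to the real target IP."""
    return {**packet, "dst": ALIAS_MAP.get(packet["dst"], packet["dst"])}

pkt = {"src": "10.1.0.20", "dst": "10.1.0.250"}  # blue host sends to the alias
pkt = ip_alias(source_nat(pkt))
assert pkt == {"src": "10.2.0.1", "dst": "10.2.0.50"}
```

The red-VLAN host only ever sees MX addresses on both ends of the flow, which is exactly why its replies come back to the MX rather than to the third-party firewall.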


What are the benefits?

In most cases, you would want to avoid using Source NAT or IP aliases in the internal domain, since they hide some of the traffic as seen from the source or the destination. This can pose security concerns, as logs might not record the original IPs, and we might not be able to track the actual source and destination, which could itself be an internal threat. They can still be useful in some scenarios though:

Privacy and Security: In environments where device anonymity is crucial, such as labs or testing facilities, masking the original IP can add another layer of security, providing privacy as well as, to some extent, security by obscurity. 

Network Migrations: When migrating from a firewall configured with source NAT between VLANs, we might want to do a one-to-one replacement and introduce changes at a later point. In this case, it is handy to have all the features supported. 

Conclusion

MX firewalls are great for edge use, but they are not widely used in the internal domain for segmentation purposes. By introducing more features similar to the ones supported on the ASA, Secure Firewall and other third-party firewalls, we may eventually see the MX become an internal segmentation firewall. These features are still in beta, but the results seem promising. While Source NAT and IP aliases might not be a great idea for most scenarios, they can still be used to provide security by obscurity or to ease migrations from firewalls that already support them.


Friday, July 14, 2023

Meraki NAT Exceptions and Inbound Firewall

This blog post focuses on two Meraki MX beta features, NAT Exceptions and Inbound Firewall. We will explain how to enable and configure them in your environment. The purpose is to give you, as a network or security technician, extra means to achieve a smooth transition from other Cisco or third-party firewalls to a full-stack Meraki deployment.

NAT Exceptions (Beta)

NAT Exception, or "No-NAT", is a feature designed for situations where you want certain traffic to bypass Network Address Translation on an MX security appliance. This is particularly useful when traffic needs to keep its original source IP address for proper routing or functionality, such as in a site-to-site VPN or MPLS topology. The NAT Exception allows specific internal IPs to send traffic without undergoing NAT, preserving the original source IP address. NAT is applied by default for traffic from LAN to WAN on an MX, but is disabled for traffic routed on the LAN side of the firewall (e.g. MPLS) or for Auto-VPN traffic.
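
Conceptually, the per-flow decision is: translate to the uplink IP by default, unless the destination matches an exempted route. A toy sketch of that decision; the subnet and uplink addresses below are illustrative, not from a real deployment:

```python
from ipaddress import ip_address, ip_network

# Illustrative: destinations reached via VPN/MPLS keep the original source IP.
NO_NAT_DESTINATIONS = [ip_network("10.20.0.0/16"), ip_network("10.30.0.0/16")]
WAN_IP = "203.0.113.10"  # made-up uplink address

def source_ip_for(flow_src: str, flow_dst: str) -> str:
    """Return the source IP after the (toy) NAT decision."""
    if any(ip_address(flow_dst) in net for net in NO_NAT_DESTINATIONS):
        return flow_src  # NAT exception: original source preserved
    return WAN_IP        # default: translated to the uplink IP

assert source_ip_for("10.10.1.5", "10.20.3.7") == "10.10.1.5"  # over MPLS/VPN
assert source_ip_for("10.10.1.5", "8.8.8.8") == WAN_IP         # to the internet
```

The caveat in the note further down follows directly from this: flip the default branch for an uplink and you change the behavior of every internet-bound flow at once.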

Prerequisite

Meraki support needs to be contacted via phone to enable this feature. 

Configuration

After Meraki support has enabled the feature on your network, you can configure it by following these steps. Check Figure 1 for more details. 

  • Navigate to Security & SD-WAN > Addressing and VLANs.
  • At the bottom of the page you will find NAT Exceptions
  • Choose the uplink where you want to disable NAT
  • Choose if you want to Override NAT per VLAN, in case you have several VLANs
  • Override the NAT Config per VLAN if it has to be different based on the uplink




Fig 1. Example of NAT Exception configuration

Note

Remember that exempting NAT will affect all traffic going out through the specific uplink, which could potentially disrupt internet traffic for all your internal networks. 

Inbound Firewall (Beta)

The Inbound Firewall feature provides extra flexibility for your Meraki MX security appliance, allowing you to specify which inbound connections from the WAN to the LAN are permitted. The default behavior on an MX WAN interface is to allow returning LAN traffic (established sessions) and any traffic forwarded via NAT rules; the rest is blocked by default. Inbound firewall rules are particularly useful when you want to limit incoming traffic to specific IP addresses and/or ports, providing a higher level of granularity for your specific needs.

Prerequisite

Same as with NAT Exceptions, Meraki support needs to be contacted via phone to enable the feature. 

Configuration

Navigate to Security & SD-WAN > Firewall. 

Two new sections for Layer 3 Inbound Rules and Inbound cellular failover rules have been added to the configuration. 

The configuration is pretty straightforward. 

  • Policy: Allow or Deny
  • Description: Provide a description of the rule.
  • Protocol: TCP/UDP/ICMPv4/v6 or Any
  • Source IP/Network
  • Source Port
  • Destination IP/Network
  • Destination Port


Fig 2. Example of Inbound Firewall configuration

A few notes

The rules are limited to inbound traffic from the WAN/cellular uplinks in their respective sections. They cannot be used for traffic between VLANs or traffic routed on the inner side of the firewall; in that case, outbound rules must be used. 

After enabling Inbound Firewall, all inbound traffic through the WAN/Cellular is allowed by default, so it's crucial to implement layer 3 firewall rules immediately to prevent potential security breaches. Warnings appear on both the Inbound Firewall and Security Appliance sections. 

Warning: Cisco Meraki Support has enabled the use of custom layer 3 inbound firewall rules which defaults to "allow all" behavior unless configured otherwise. Settings previously designated under "Security appliance services" should be configured as explicit firewall rules (e.g. adding a rule to block TCP over port 80 to restrict access to the local status page). We recommend that you configure layer 3 inbound rules to whitelist authorized network routes and append a "deny all" rule to avoid exposing Meraki host services.

Security Appliance Services: Normally, these fields control what services are available from remote IPs, e.g., ICMP ping, web (local status & configuration), and SNMP. However, because Cisco Meraki Support has enabled the use of custom layer 3 inbound firewall rules on this network, remote access to security appliance services will be restricted according to the layer 3 inbound rules configured above.
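
Putting the pieces together: rules are evaluated top-down, the first match wins, and, per the warning above, anything that matches no rule is allowed until you append a deny-all. A toy sketch of that evaluation, with made-up addresses and rules:

```python
from ipaddress import ip_address, ip_network

# Illustrative rule set, evaluated top-down; first match wins. Per the beta
# warning, traffic matching no rule is ALLOWED, hence the explicit deny-all.
RULES = [
    {"policy": "allow", "dst": ip_network("192.168.1.0/24"), "dport": 443},
    {"policy": "deny",  "dst": ip_network("0.0.0.0/0"),      "dport": None},  # deny all
]

def evaluate(dst: str, dport: int) -> str:
    """Return the action for an inbound flow (toy first-match evaluation)."""
    for rule in RULES:
        if ip_address(dst) in rule["dst"] and rule["dport"] in (None, dport):
            return rule["policy"]
    return "allow"  # the beta default when no rule matches

assert evaluate("192.168.1.10", 443) == "allow"  # explicitly permitted
assert evaluate("192.168.1.10", 22) == "deny"    # caught by the deny-all
```

Remove the final deny-all rule from the list and the SSH flow above would fall through to the implicit allow, which is exactly the exposure the dashboard warning is about.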

Use Cases

Depending on your scenario and the problem you are trying to solve, it can be handy to have extra enterprise features like NAT Exceptions and Inbound Firewall on a Meraki MX. While the MX appliance running in Routed (NAT) mode is often associated with edge or WAN-side functionality (acting as an internet gateway, providing VPN functionality, and serving as a firewall), it can also be used effectively within the LAN for purposes like network segmentation. It can enforce security policies and control access between different VLANs or network segments, effectively acting as an internal firewall. Combined with IDS/IPS, Advanced Malware Protection and URL filtering, this blocks attacks very close to the source. 

Is this supported?

Since these are beta features, there is some risk that they are unstable. NAT Exceptions appeared back in beta version 15.4, so the feature has been around for a while. Several scenarios have been tested in my lab environment and no unexpected results were encountered. Meraki by default supports the most recent beta releases of their software, as well as older versions on a best-effort basis.

Conclusions

While the NAT Exceptions and Inbound Firewall features offer greater flexibility and control, they can potentially expose your internal resources or disrupt your services if not configured properly. Always plan and assess your network needs carefully, considering both functionality and security, when you decide. 

References

General MX Best Practices

MX and MS Basic Recommended Layer 3 Topology - Cisco Meraki