Category: Networking

  • SSH. Part Duo.

    Securing Linux SSH with Duo two-factor authentication.

    A few days ago, I was chatting with someone about authentication. They mentioned Cisco Duo. I’ve used Duo before and knew it as an enterprise-grade identity/MFA provider. But I did not realize that Duo offered an “Enterprise. At home” tier of service with its “Duo Free” edition. In this post we’ll revisit the securing SSH post and use Duo for our two-factor authentication.

    Duo offers multiple editions based on the number of users the installation will support and the features available. For Duo Free, the limit is 10 users, which is fine for our use case.

    Duo provides a ton of excellent, easy-to-follow documentation. They integrate with many products and make configuration easy. The integration we’re interested in for this post is Duo Unix; its documentation is found here.

    Set up

    Before we begin, we need to do some prerequisite groundwork. Unlike my other SSH post, we’ll be using a RHEL derivative, AlmaLinux, instead of Debian. I won’t bore you with the installation steps for AlmaLinux; just assume it is a version 10 minimal install.

    And, of course, you need to create an account with Duo. Again, it’s pretty simple, so I trust you can do it.

    Let’s configure the Duo side first, as we’ll need some of its configuration parameters for our integration.

    Duo

    If you don’t need the 30-day trial Duo offers you, you can switch to Duo Free. To do so, go to Billing -> Billing on the left, scroll down and select “Duo Free” for “Edition” under the “Manage Subscription” section. Accept Terms and Conditions and click the blue “Update Subscription” button.

    Next, in Applications -> Application Catalog, we’ll search for “Unix Application” and add it to our environment.

    In the new screen, we’ll name our application, select “Enable for all users”, and make any changes to the phone greeting. This screen can later be found in Applications -> Applications. In the “UNIX Application” screen you’ll see the “Integration key”, “Secret key”, and “API hostname”. We’ll use these values when we set up the integration on our Linux server.

    Linux SSH

    Now, we need to configure our Linux server. First, because we have a minimal installation, we need to install the required packages. Because we’re using a RHEL derivative (AlmaLinux), we’ll use dnf. We need gcc because we’ll be compiling the pam_duo module from source.

    sudo dnf install wget openssl-devel pam-devel selinux-policy-devel bzip2 tar gcc nano -y

    Now, download and extract the source code. Make sure you use the actual name of the .tar.gz file, as it will change when a new version is released.

    wget --content-disposition https://dl.duosecurity.com/duo_unix-latest.tar.gz
    tar zxf duo_unix-2.2.1.tar.gz
    cd duo_unix-2.2.1

    Now, compile and install the Duo PAM module.

    ./configure --with-pam --prefix=/usr && make && sudo make install

    After the installation, let’s take care of SELinux, if you have it in Enforcing mode. Make sure to run these commands in the duo_unix-2.2.1 folder, where we extracted the source code.

    sudo make -C pam_duo semodule
    sudo make -C pam_duo semodule-install
    sudo semodule -l | grep duo 

    The last command should return a line containing authlogin_duo, indicating the SELinux module is installed.

    It’s time to configure Duo to allow for 2-factor authentication. Edit the pam_duo.conf file by running sudo nano /etc/duo/pam_duo.conf. Use the screen in Figure 4 to get the ikey (Integration key), skey (Secret key), and host (API hostname) values. Uncomment pushinfo = yes by removing the ;, and add autopush = yes and prompts = 1. This enables push notification to your phone, so all you need to do is tap the green “Approve” check mark button (prompts = 1 means you need to do so on the first try).
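    The resulting file should look roughly like this sketch (replace the placeholder values with the keys from your Duo “UNIX Application” screen; your file may carry additional commented-out options):

    ```ini
    [duo]
    ; Values from the Duo Admin Panel "UNIX Application" screen
    ikey = <INTEGRATION_KEY_HERE>
    skey = <SECRET_KEY_HERE>
    host = <API_HOSTNAME_HERE>
    ; Send command info with the push, push automatically, one attempt only
    pushinfo = yes
    autopush = yes
    prompts = 1
    ```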

    Now, we’ll configure SSH and PAM. First, we’ll configure SSH to use password authentication + Duo 2FA for SSH. sudo will NOT require 2FA. Run sudo nano /etc/ssh/sshd_config and make the following changes to the SSH config (add the line AuthenticationMethods as it doesn’t exist by default):

    PubkeyAuthentication no
    PasswordAuthentication no
    UsePAM yes
    KbdInteractiveAuthentication yes
    UseDNS no
    AuthenticationMethods keyboard-interactive

    Next, run sudo nano /etc/ssh/sshd_config.d/50-redhat.conf and comment out the line ChallengeResponseAuthentication no with a #.

    Edit the SSH PAM configuration with sudo nano /etc/pam.d/sshd and make sure the auth section is as follows:

    auth       required     pam_sepermit.so
    auth       required     pam_env.so
    auth       requisite    pam_unix.so nullok try_first_pass
    auth       sufficient   /lib64/security/pam_duo.so
    auth       required     pam_deny.so

    Now we can reboot, cross our fingers, and try to log back in.

    We should be greeted with a normal SSH login. But after putting in the correct password, we’ll get a Please enroll at https://<looong string>. Copy the URL into your browser and follow the directions. Then try logging in again and you should receive a push notification on your phone. (I had a small issue after enrolling into Duo with sshd complaining about exceeded LoginGraceTime. Restarting sshd with sudo systemctl restart sshd solved it.)

    Public Key Authentication

    To use public keys together with Duo, we need to make some modifications to our configuration. First, in /etc/ssh/sshd_config change PubkeyAuthentication no to PubkeyAuthentication yes and AuthenticationMethods keyboard-interactive to AuthenticationMethods publickey,keyboard-interactive.
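    After these edits, the relevant lines in /etc/ssh/sshd_config read:

    ```
    PubkeyAuthentication yes
    AuthenticationMethods publickey,keyboard-interactive
    ```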

    In /etc/pam.d/sshd comment out the line auth requisite pam_unix.so nullok try_first_pass. This way when you log into your SSH server using the public key, you’ll need to click the “Approve” button in the Duo app. Otherwise, you’ll still need to provide the account password.

    The Duo documentation has an excellent flow diagram for the authentication process using Duo, reproduced below.

    The steps are1:

    1. SSH connection initiated.
    2. Primary authentication: username/password or private key.
    3. Duo Unix connection established to Duo Security over TCP port 443.
    4. Secondary authentication via Duo Security’s service.
    5. Duo Unix receives authentication response.
    6. SSH session logged in.

    Comparison of Duo and TOTP

    So how does Duo compare to the Time-based one-time password (TOTP) solution we configured before?

    • TOTP uses a secret to generate time-based passwords. The “server” and the “client” run the same algorithm and generate the same password based on the common secret and the time.
    • The “server” and the “client” processes run independently and do not require internet.
    • The Duo PAM module sends an authentication request to a Duo server and, therefore, requires an internet connection. You can choose whether to allow or deny authentication if the Duo server is unreachable. You will also have to allow each server that uses Duo an outbound connection to the internet.
    • Because Duo is a centrally managed platform, unlike TOTP, it gives you central auditing of login attempts. Unless you set up a syslog server, there’s no central way to monitor TOTP login attempts.
    • Unless you’re using a centralized authentication server, such as RADIUS or AD, there’s no easy way to lock a user out when using TOTP. You’d have to touch every server where the user is configured. With Duo, you can disable the user and they’ll fail the second authentication factor (Duo) and you’ll be able to monitor those failures.
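    To make the comparison concrete, here is a minimal sketch of the TOTP algorithm (RFC 6238) using only the Python standard library. Both the “server” and the “client” derive the same code from the shared secret and the current time, with no network involved:

    ```python
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret, timestamp=None, step=30, digits=6):
        """Time-based one-time password per RFC 6238 (truncation per RFC 4226)."""
        if timestamp is None:
            timestamp = int(time.time())
        counter = timestamp // step                    # time steps since the epoch
        msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
        digest = hmac.new(secret, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                     # dynamic truncation offset
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # RFC 6238 test secret at T=59 seconds (counter 1) -> "287082"
    print(totp(b"12345678901234567890", timestamp=59))
    ```

    Run independently on the server and in an authenticator app, the two sides agree as long as their clocks do, which is exactly why no internet connection is required.
    
    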

    Overall, I think Duo is an excellent solution for centralized MFA. I prefer it for the ease of setup, large number of integrations, and simplicity of centralized audit and management. But for servers with no internet access, I’d fall back to TOTP if I need the second factor.

    N.B.

    Our setup enables 2FA for remote SSH login ONLY. It does not provide any additional factors of authentication for local login. Duo PAM can be configured to provide 2FA for local access and even require a push notification approval for sudo.

    So why did I choose not to protect local logins with a second factor? Security always needs to be weighed against convenience of use. In my use case, there’s no expectation of shared physical access to the servers. And if an adversary has physical access to the machine, they can always reboot it, get into the recovery menu, change the root password, and do other nasty things. We could put additional measures in place to stop that, but they would put additional barriers in the way of normal use. Again, we’re balancing security with convenience. In the case where there’s no physical access to the machines, I think securing just the remote SSH access with 2FA is perfectly reasonable. Your mileage may vary…

    1. https://duo.com/docs/duounix ↩︎
  • IPsec between Sophos and pfSense.

    One of the vendors that I’ve seen in small and medium businesses is Sophos. So, we can conclude that Sophos is Enterprise. Lucky for us, we can run the Sophos Firewall at home, thanks to their Sophos Home Edition Firewall. “Enterprise. At home”, remember? It is the same firewall you’d get from Sophos but with a few limitations1:

    • CPU is limited to 4 cores.
    • RAM is limited to 6 GB.

    The machine can have more resources than that but Sophos will use up to that limit. I think it’s fair enough. I’ll be virtualizing our Sophos Home Edition Firewall and will provide the resources it needs. Note, the installer requires 4 GB of RAM minimum and 32 GB of disk. It complains otherwise. Also, by default, Sophos’s Port 1 is LAN and Port 2 is WAN. Your first vNIC will be LAN and your second vNIC will be WAN. Keep this in mind when you connect your topology. By default, Port 1 is running a DHCP server with 172.16.16.0/24 subnet, and the default address for the firewall configuration is 172.16.16.16:4444.

    Here’s the topology I’m using for this demonstration.

    • All subnets are /24 for simplicity, unless noted otherwise.
    • R0 is a pfSense instance that simulates WAN. It is connected to the internet. It runs DHCP servers for the downstream routers. The subnets are 10.0.111.0/24 on the left and 10.0.222.0/24 on the right.
    • PFSENSE and SOPHOS are the firewalls/routers that we’ll be connecting via site-to-site IPsec tunnel.
    • PC1 and PC2 are our end devices. We’ll use PC1 to manage PFSENSE and PC2 to manage SOPHOS.

    Sophos

    I’m not going to bore you with the initial Sophos setup. I am confident that you can run through the wizard.

    Sophos is a zone-based firewall. It is pretty intuitive and user-friendly, but I did encounter a couple of “gotchas” setting up IPsec. The first one is that we need to enable IPsec access on the WAN zone. On the right-hand side pane, click “Administration” under the “System” heading. Then go to “Device access”, check “IPsec” for the WAN zone, and click “Apply”.

    Another gotcha is that there seems to be some “miscommunication” between pfSense and Sophos when they negotiate IPsec Phase 1. If pfSense has pretty restrictive settings (I use the same settings as in my Meraki example), Sophos has multiple settings (even if there is a matching combination), and Sophos is the initiator, the two cannot agree on the communication parameters and can’t establish a tunnel. If pfSense is the initiator, the tunnel is established. To work around this, I’d recommend creating an IPsec profile that matches the pfSense parameters exactly. That way there’s no “miscommunication”.

    Go to “Profiles” under “System” on the right. Then to “IPsec profiles” and click “Add”.

    There create a new IPsec profile. I used the following settings:

    • “Name”: “pfsense_test”.
    • “Key exchange”: “IKEv2”.
    • “Authentication mode”: “Main mode”.
    • Phase 1 “Key life”: “28800”.
    • Phase 1 “DH group (key group)”: “21 (ecp521)”.
    • Phase 1 “Encryption”: “AES256”.
    • Phase 1 “Authentication”: “SHA2 256”.
    • Phase 2 “DH group (key group)”: “21 (ecp521)”.
    • Phase 2 “Key life”: “3600”.
    • Phase 2 “Encryption”: “AES256”.
    • Phase 2 “Authentication”: “SHA2 256”.

    Now let’s go to “Site-to-site VPN” under the “Configure” section on the left and create a new IPsec tunnel. I used the following settings:

    • “Name”: “pfsense_test”.
    • “IP version”: “IPv4”.
    • “Connection type”: “Policy-based”.
    • “Profile” under “Encryption”: “pfsense_test”. This is the profile we created in the steps above.
    • “Authentication type”: “Preshared key”.
    • “Preshared key”: your super secret password.
    • “Local gateway” is the section where we configure the local side of the VPN tunnel. “Listening interface”: “Port 2 – 10.0.222.251”. This is your WAN interface on Sophos with its “public” WAN IP. Since we’re in a lab, the IP is in the 10.0.222.0/24 range, off R0’s R0R2 interface on the right side of the topology.
    • “Local ID type”: “IP address” and “Local ID”: “10.0.222.251”. This can be arbitrary, but we’ll use these values to identify the IPsec peers, so make it make sense to you. I just use the WAN IP address.
    • “Local subnet”: create a new subnet as in Figure 8. We’ll be advertising the local 172.16.16.0/24 subnet to the remote peer. But you can create as many entries there as needed.
    • “Remote gateway” is the pfSense side of the tunnel. In the “Gateway address” we put the public IP address of the pfSense (FQDN works too), in our case it’s 10.0.111.111. Again, refer to the topology above.
    • “Remote ID type”: “IP address”.
    • “Remote ID”: “10.0.111.111”.
    • Create a “Remote subnet” as in Figure 9. The subnet we use is 10.111.111.0/24, the LAN subnet off pfSense’s R0R1 interface.

    After saving, we can enable and activate our tunnel. Did you think we were done? No way! We need to create the firewall rules to allow the traffic between the subnets across the tunnel. Head to “Rules and policies” under the “Protect” section. There we’ll create 2 firewall rules: allowing LAN to VPN and allowing VPN to LAN. Essentially, you’ll select “VPN” as the source zone and “LAN” as destination to allow traffic from the VPN tunnel to the LAN. Make sure to select “Accept” as the “Action”. Switch “LAN” and “VPN” around to allow LAN traffic through the tunnel.

    OK. We’re done with Sophos. I promise. But we’re only half way done with the tunnel. Time to configure the pfSense side.

    pfSense

    The pfSense configuration will be very similar to what we did when we set up a tunnel with Meraki, so I may omit some details.

    As always we start with creating Phase 1 and Phase 2 settings for the IPsec tunnel. Head to VPN -> IPsec. Here are the settings I used:

    • “Description”: “sophos”.
    • “Key Exchange version”: “IKEv2”.
    • “Internet Protocol”: “IPv4”.
    • “Interface”: “WAN”. This is the interface that is connected to “internet”.
    • “Remote Gateway”: “10.0.222.251”, the “public” IP address of Sophos. FQDN also works.
    • For “Phase 1 Proposal (Authentication)”, “Authentication Method”: “Mutual PSK”.
    • “My Identifier”: “IP address” and “10.0.111.111”. Make sure this matches the “Remote ID” set up in the Sophos IPsec configuration.
    • “Peer Identifier”: “IP address” and “10.0.222.251”. Make sure this matches the “Local ID” set up in the Sophos IPsec configuration.
    • “Pre-Shared Key”: use your super secret password that you put into the Sophos configuration. Do NOT use “test123”, it’s bad!
    • For “Phase 1 Proposal (Encryption Algorithm)”, “Encryption Algorithm”: “AES” for “Algorithm”, “256 bits” for “Key length”, “SHA256” for “Hash”, and “21 (nist ecp521)” for “DH Group”.
    • Put “Life Time”: “28800”.

    For Phase 2, I used the following settings:

    • “Description”: “sophos”.
    • “Mode”: “Tunnel IPv4”.
    • “Local Network”: “R0R1 subnet”. Make sure that the subnet on the pfSense is the same as configured in the Sophos configuration for “Remote subnet”.
    • “Remote network”: “172.16.16.0/24”, the same network as “Local subnet” in the Sophos IPsec configuration.
    • For the “Phase 2 Proposal (SA/Key Exchange)”: “Protocol”: “ESP”. This will ensure that the traffic is encrypted.
    • “Encryption Algorithms”: “AES”, “256 bits”.
    • “Hash Algorithms”: “SHA256”.
    • “PFS key group”: “21 (nist ecp521)”.
    • “Life Time”: “3600”.

    Now, we need to configure the inbound firewall rules. Head to Firewall -> Rules -> WAN. Create two rules: one allowing UDP from “10.0.222.251” (Sophos “public” IP) to “WAN address”, port “500 (ISAKMP)”, and one for ESP from “10.0.222.251” to “WAN address”. If you’re behind a NAT, then you’ll need to create a rule allowing UDP port 4500 (NAT-T). Make sure you create appropriate rules to allow traffic between the subnets behind the VPN. The interface group is called “IPsec” in the Firewall -> Rules section.

    After all of that we can see that our tunnel is established. Head to Status -> IPsec to confirm.

    We can also confirm that the tunnel allows traffic by pinging from PC1 to PC2 and vice-versa.

    And this is it. Just like with the Meraki security appliance, the setup is simple, just tedious. There were a couple of gotchas: allowing IPsec access on the WAN zone and creating an IPsec profile. But that’s the price of running enterprise at home…

    1. https://www.sophos.com/en-us/free-tools/sophos-xg-firewall-home-edition ↩︎
  • Meraki Dashboard APIs, Part 2.

    Using Meraki Dashboard APIs for Local DNS

    Problem

    The goal in IPsec between Meraki and pfSense was to enable access from a LAN behind the Meraki Z4 to an internal https server in a LAN behind the pfSense. IPsec allows that. But. What does an https server use to provide the “s” in “https”? That’s right, a TLS certificate. If your client validates certificates, then when you access an https resource that uses a self-signed certificate, or a certificate issued by an authority not in the client’s chain of trust, you’ll get a warning. That warning is annoying and, depending on the client, can even prevent further access to the server.

    Solution

    So we have 2 solutions:

    • Add the signing certificate authority to the client’s chain of trust. This solves the warning. But not all clients support that (I am looking at you, IoT). And you’d need to do that for every device that will be accessing the server.
    • Use a certificate signed by an authority that the devices already trust. Perfect! I use Let’s Encrypt and it is trusted by most devices.

    Let’s Encrypt validates domain ownership to issue a certificate. It needs to be a publicly resolvable domain (so no .local or anything like that). And, lucky for us, we do have a public domain set up for our DynDNS access. Great! (Maybe some time in the future I’ll write about Let’s Encrypt, but it’s really not complicated. We’ll see.)

    Problem. Part 2

    During the TLS handshake, the client sends a Server Name Indication (SNI) to the server. The SNI carries the domain name the client is trying to access. It is important because the https server selects its response and certificate based on the SNI. Even though the underlying communication happens at the IP layer, the client must access the https server by its domain name. If the client accesses the https server by the server’s IP address, the client will get a warning and we’ll be back where we started.

    Solution. Part 2

    So the solution is obvious. Create an A (or AAAA, if you’re fancy) DNS record for your domain that points to the https server. The DNS record can point to a private IP address, no problem. But how do you do it with a Meraki Security Appliance (or a Teleworker Gateway)? We have a few options:

    • Run our own DNS server, like Pi-hole. I love Pi-hole and run it on the pfSense LAN. But I’d need an additional device on the Z4 LAN just to serve a single DNS record. I’m not a fan of this option.
    • Create a DNS record in a public DNS resolver. Since I do have a domain with Cloudflare, I can easily create a DNS record with them. This way a Cloudflare server will be queried to resolve the domain. I’m not a huge fan of this option either, as we’d be advertising our internal server’s IP address to the entire internet. Yes, the IP is internal and not globally routable, but still…

    Luckily, Meraki provides just the solution for this problem in the form of Local DNS records. You create a DNS record, and the MX (or Z) will respond to queries with the IP address configured. As of September 2025, this function is not available through the Dashboard and must be configured through the Dashboard APIs. There are additional requirements for this to work1:

    • MX (Z) running firmware 19.1+
    • The DNS nameservers setting under DHCP Settings must be configured as “Proxy to Upstream DNS”.
    • The Network must not be a part of a Template.

    So, let’s get to it.

    Local DNS with Meraki Dashboard APIs script

    Now we’ve finally got to the core of the post. Let’s set up the script and the functions we’ll use to execute the API calls.

    On a high level, to set up Local DNS records, we need to:

    1. Create an organization’s Local DNS Profile
    2. Create a Local DNS record in the profile we created
    3. Assign the profile to a network

    We’ll expand the code we started in Part 1. We’ll need a name for the profile we’ll be creating; for simplicity, I use the same name as the network we’ll assign the profile to. We’ll also need a fully qualified domain name and the IP address that we’ll be using for the record.

    import requests
    import json
    
    API_KEY = "<API_KEY_HERE>"
    Organization = "<ORGANIZATION_NAME_HERE>"
    Network = "<NETWORK_NAME_HERE>"
    
    URL = "https://api.meraki.com/api/v1"
    
    Profile = Network
    Domain = "<fullyqualified.domain[.]tld>"
    IP_addr = "<192.168.12.34>"

    Now, let’s create our functions.

    First, we need to get our Organization ID. We could get it from the Dashboard, but it’s easier to just make an API call. The function takes the Organization’s name, queries the API using the API key, and returns the matching Organization ID, or None if an organization with that name is not found. So, pay attention to the organization’s name.

    #Function to get Organization ID
    def getOrganizations(target_name):
        payload = None
        headers = {
            "Authorization": f"Bearer {API_KEY}",
            "Accept": "application/json"
        }
        response = requests.request('GET', f"{URL}/organizations/", headers=headers, data = payload)
        json_str = response.content.decode('utf-8')
        data = json.loads(json_str)
        for organization in data:
            if organization.get("name") == target_name:
                return organization.get("id")
        return None

    Next, we need a function to get the Network ID. Similar to the function above:

    #Function to get Network ID (uses the global Org_ID)
    def getOrganizationNetworks(target_name):
        payload = None
        headers = {
            "Authorization": f"Bearer {API_KEY}",
            "Accept": "application/json"
        }
        response = requests.request('GET', f"{URL}/organizations/{Org_ID}/networks", headers=headers, data = payload)
        json_str = response.content.decode('utf-8')
        data = json.loads(json_str)
        for network in data:
            if network.get("name") == target_name:
                return network.get("id")
        return None 

    Next, we create an organization Local DNS profile. It needs the Organization ID and the name for the profile (we use the same name as the network).

    #Function to Create an Organization Local DNS Profile
    def createOrganizationApplianceDnsLocalProfile(Org_ID, prof_name):
        payload = f'''{{ "name": "{prof_name}" }}'''
        headers = {
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
            "Accept": "application/json"
        }
        response = requests.request('POST', f"{URL}/organizations/{Org_ID}/appliance/dns/local/profiles", headers=headers, data = payload)
        json_str = response.content.decode('utf-8')
        data = json.loads(json_str)
        return data

    After that, we need a function to get the Profile ID for the profile we just created. Technically, you can get the ID at the time of the profile creation, but you’d need this function if you ever add records to an already existing profile.

    #Function to get Profile ID for a particular Profile
    def getOrganizationApplianceDnsLocalProfilesSpecific(ProfName):
        payload = None
        headers = {
            "Authorization": f"Bearer {API_KEY}",
            "Accept": "application/json"
        }
        response = requests.request('GET', f"{URL}/organizations/{Org_ID}/appliance/dns/local/profiles", headers=headers, data = payload)
        json_str = response.content.decode('utf-8')
        data = json.loads(json_str)
        for item in data.get('items', []):
            if item.get('name') == ProfName:
                return item.get('profileId')
        return None

    Now, we need a function to create the Local DNS record. It needs the Organization ID, Profile ID, FQDN, and the IP address.

    #Function to Create a DNS Local Record
    def createOrganizationApplianceDnsLocalRecord(Org_ID,ProfID,Dom,IP):
        payload = f'''{{
        "hostname": "{Dom}",
        "address": "{IP}",
        "profile": {{ "id": "{ProfID}" }}
        }}'''
        headers = {
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
            "Accept": "application/json"
        }
        response = requests.request('POST', f"{URL}/organizations/{Org_ID}/appliance/dns/local/records", headers=headers, data = payload)
        json_str = response.content.decode('utf-8')
        data = json.loads(json_str)
        return data

    Finally, we need a function to assign the profile to the network. This is what enables the security appliance (or the teleworker gateway) to actually resolve the domain name to the IP address for its clients. It requires the Organization ID, Network ID, and the Profile ID.

    #Function to assign the DNS Records Profile to a Network
    def bulkOrganizationApplianceDnsLocalProfilesAssignmentsCreate(Org_ID,Net,Prof):
        payload = f'''{{
        "items": [
            {{
                "network": {{ "id": "{Net}" }},
                "profile": {{ "id": "{Prof}" }}
            }}
        ]
        }}'''
        headers = {
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
            "Accept": "application/json"
        }
        response = requests.request('POST', f"{URL}/organizations/{Org_ID}/appliance/dns/local/profiles/assignments/bulkCreate", headers=headers, data = payload)
        json_str = response.content.decode('utf-8')
        data = json.loads(json_str)
        return data

    After defining the functions, we need to call them in the following order to create the Local DNS record in our network.

    Org_ID = getOrganizations(Organization)
    Netw = getOrganizationNetworks(Network)
    createOrganizationApplianceDnsLocalProfile(Org_ID,Profile)
    ProfNumb = getOrganizationApplianceDnsLocalProfilesSpecific(Profile)
    createOrganizationApplianceDnsLocalRecord(Org_ID,ProfNumb,Domain,IP_addr)
    bulkOrganizationApplianceDnsLocalProfilesAssignmentsCreate(Org_ID,Netw,ProfNumb)

    This should be it. To validate that the record was created, you’ll need one more function.

    #Function to get Local DNS Records
    def getOrganizationApplianceDnsLocalRecords(Org_ID):
        payload = None
        headers = {
            "Authorization": f"Bearer {API_KEY}",
            "Accept": "application/json"
        }
        response = requests.request('GET', f"{URL}/organizations/{Org_ID}/appliance/dns/local/records", headers=headers, data = payload)
        json_str = response.content.decode('utf-8')
        data = json.loads(json_str)
        return data

    After all of this, you should have this beautiful banner at the top of your Appliance Status page indicating that this appliance has the local DNS record. And the devices on my Z4’s LAN successfully resolve the domain name into the IP address across the IPsec tunnel. And no annoying certificate warnings!

    Providing additional functionality through the APIs only, instead of the Dashboard, is, in my opinion, a bit antithetical to Meraki’s idea of simplicity. But perhaps Local DNS is too niche a requirement. Anyway, beggars can’t be choosers, and I’ll take having the option, even through the APIs, over not having it at all. And APIs are totally Enterprise. So, I am not complaining…

    1. https://documentation.meraki.com/MX/Local_DNS_Service_on_MX ↩︎
  • Meraki Dashboard APIs.

    Expanding Meraki capabilities with the help of APIs.

    Some of you may have noticed a banner at the top of the Appliance Status page in the IPsec between Meraki and pfSense post. The banner read “Local DNS has been enabled via API on this network. For more info see documentation”. This is a post about Meraki Dashboard APIs and how they expand what the platform can do.

    Meraki Dashboard APIs

    Meraki utilizes RESTful APIs to expand device capabilities and allow for automation of networking tasks. For example, you can create provisioning scripts, or scripts to read the logs. This is done, in part, to get around some of the limitations of the GUI Meraki Dashboard. Meraki has pretty good documentation in general, and for the APIs in particular. You can find it here.

    However, while the documentation describes the operation of each API call, the request and response schemas, and even provides code snippets, it does not go into detail on the order of operations needed to accomplish a task. For example, to enable VLANs on a security appliance in the Dashboard, one goes to Security Appliance & SD-WAN -> Addressing & VLANs, clicks “VLANs”, and then creates a VLAN. What seems like one operation in the Dashboard is actually accomplished by two API calls: the first enables VLANs with the “updateNetworkApplianceVlansSettings” operation ID, and the second creates the new VLAN. It makes sense, but can be confusing the first time.
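    In the same requests-based style used in this post, the pair of calls could be sketched like this (the endpoint paths follow from the operation IDs; the placeholder key, Network ID, and VLAN values are yours to fill in):

    ```python
    import requests

    API_KEY = "<API_KEY_HERE>"
    URL = "https://api.meraki.com/api/v1"

    HEADERS = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
        "Accept": "application/json"
    }

    #Call 1: enable VLANs on the network (operation ID updateNetworkApplianceVlansSettings)
    def updateNetworkApplianceVlansSettings(Net_ID):
        response = requests.put(f"{URL}/networks/{Net_ID}/appliance/vlans/settings",
                                headers=HEADERS, json={"vlansEnabled": True})
        return response.json()

    #Call 2: create the VLAN itself (operation ID createNetworkApplianceVlan)
    def createNetworkApplianceVlan(Net_ID, vlan_id, name, subnet, appliance_ip):
        payload = {"id": vlan_id, "name": name,
                   "subnet": subnet, "applianceIp": appliance_ip}
        response = requests.post(f"{URL}/networks/{Net_ID}/appliance/vlans",
                                 headers=HEADERS, json=payload)
        return response.json()
    ```

    The two functions have to run in this order: the VLAN creation call fails if VLANs have not been enabled on the network first.
    
    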

    For most API calls you need either an Organization ID or a Network ID. The Organization ID can be found at the bottom of every page of the Meraki Dashboard. Finding the Network ID is less trivial; the easiest way for me is to run an API call with the “getOrganizationNetworks” operation ID.

    Setting up API calls

    But before you can even run an API call, you’ll need an API key. To obtain one, in the Meraki Dashboard, go to “My Profile” (top right-hand corner), scroll about halfway down, and click “Generate new API key” under the “API access” section. Save this key in a safe place. Protect this key: it allows unrestricted (up to the permission level of the account it was created for) modification access to your organization. Do NOT share this key. It is bad!

    There are many ways to make API calls. You can go as low-level as cURL, or use Python, the Meraki Python Library, Postman, Ansible, or Terraform. I use Python with the requests and json modules. “Why not use the Meraki Python Library?”, you ask. That’s a fair question. I, to a certain degree, subscribe to the idea of “living off the land”. If I don’t have to install an additional package, I prefer not to. Also, I prefer to understand what’s going on, and I feel the Meraki Python Library abstracts too much away. There’s nothing wrong with using it, I just prefer not to.

    “So, why not cURL, then?”. Again, fair point. I use cURL for API calls in my DynDNS script and use awk to parse the response. However, that gets messy pretty quickly, and we need a more robust solution. Thus we’ll be using the json module for parsing API responses and requests for making the calls. So, let’s start building our API Python script.

    Below is the beginning of the script. We import the requests and json modules. We add our API key. Again, make sure you protect it. Do NOT share your script with the API key in it. It is bad! You can store the key in a different file and read it from there, or have it entered interactively. We input our organization name. This matters if you manage multiple organizations and need to select the one you’ll be working on. We specify the name of the network we’ll be operating on; we’ll use this name to pull the Network ID used by most of the calls. And we enter the base URL for the API calls. This base will be appended to in order to perform various functions.

    import requests
    import json
    
    API_KEY = "<API_KEY_HERE>"
    Organization = "<ORGANIZATION_NAME_HERE>"
    Network = "<NETWORK_NAME_HERE>"
    
    URL = "https://api.meraki.com/api/v1"
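    From here, a sketch of the next step, resolving the two names into IDs (the variable definitions are repeated so the snippet stands alone; the “id” and “name” response fields follow the v1 schema for the “getOrganizations” and “getOrganizationNetworks” calls):

```python
import requests

API_KEY = "<API_KEY_HERE>"
Organization = "<ORGANIZATION_NAME_HERE>"
Network = "<NETWORK_NAME_HERE>"
URL = "https://api.meraki.com/api/v1"

def find_id(items, name):
    # Return the "id" of the first item whose "name" matches, else None.
    return next((item["id"] for item in items if item.get("name") == name), None)

def resolve_ids():
    headers = {"Authorization": f"Bearer {API_KEY}"}
    orgs = requests.get(f"{URL}/organizations", headers=headers).json()
    org_id = find_id(orgs, Organization)
    networks = requests.get(f"{URL}/organizations/{org_id}/networks",
                            headers=headers).json()
    return org_id, find_id(networks, Network)

# org_id, network_id = resolve_ids()
```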

    I think I’ll stop here for now. Next time we’ll explore the Local DNS feature. Why I use it and how to set it up. APIs are very enterprise. So we’re getting even closer to living “Enterprise. At home”…

  • DHCP Option 121. Friend or Foe?

    Back in May 2024, there were reports of a VPN “vulnerability” named “TunnelVision”. The “vulnerability” uses DHCP Option 121, RFC 3442, to advertise a static route to DHCP clients. The clients then use this route, as it is more specific than the 0.0.0.0/0 typically defined for a non-split VPN tunnel, and send the traffic elsewhere. The authors of the RFC acknowledged this behavior in December 2002, when the RFC was published. It is surprising it took almost 22 years for this to make the news. And while DHCP Option 121 is a pretty obscure and not widely used option, it can act as a very convenient routing helper.

    Scenario

    Suppose I have VLAN 12, 192.168.12.0/24, and VLAN 34, 192.168.34.0/24. Utilizing a router-on-a-stick topology, the default gateway for each subnet is X.X.X.1 (the router’s interface IP in each VLAN). The router is connected to a core switch, which also has the access switches connected to it. The majority of the client traffic goes out to the internet using the router as the default gateway.

    Challenge

    But what do we do for the inter-VLAN routing? Usually, we have 2 options:

    • Have the router handle it. But this adds an extra forward from the switch to the router and back, and thus latency and additional load on the router.
    • If the switch supports Layer 3 routing, do it on the switch, with the switch acting as the default gateway for the VLANs. But this adds an extra hop for the internet traffic.

    Solution

    A better solution is DHCP option 121. This option allows for keeping the router as the default gateway but adds a static route between VLAN 12 and VLAN 34 with the switch acting as the next hop.

    Configure your Layer 3 switch as you normally would. For example, for Cisco:
    conf t
    ip routing
    vlan 12
    vlan 34
    interface Vlan12
    ip address 192.168.12.111 255.255.255.0
    no shutdown
    interface Vlan34
    ip address 192.168.34.111 255.255.255.0
    no shutdown
    end

    In your DHCP server, add Option 121 (usually it’s a custom option). This option requires a specific encoding for the routers. You can use a calculator like this one to generate the string to add to the DHCP server.
    The string for VLAN 12 in our example would be 0x00C0A80C0118C0A822C0A80C6F, and 0x00C0A8220118C0A80CC0A8226F for VLAN 34.
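    If you’d rather not trust an online calculator, the RFC 3442 encoding is simple enough to reproduce yourself. A sketch in Python: one byte of prefix length, then only the significant octets of the destination, then the four octets of the next hop, repeated per route (the leading default-route entry mirrors what the calculator emits):

```python
def encode_option_121(routes):
    # routes: list of ("destination/prefix", "next hop") tuples.
    out = bytearray()
    for destination, next_hop in routes:
        network, prefix_len = destination.split("/")
        prefix_len = int(prefix_len)
        out.append(prefix_len)  # one byte: the prefix length
        # only the significant octets of the destination are encoded
        out += bytes(int(o) for o in network.split("."))[: (prefix_len + 7) // 8]
        out += bytes(int(o) for o in next_hop.split("."))  # the router
    return "0x" + out.hex().upper()

# VLAN 12 clients: default route via the router, VLAN 34 via the switch SVI.
print(encode_option_121([("0.0.0.0/0", "192.168.12.1"),
                         ("192.168.34.0/24", "192.168.12.111")]))
# 0x00C0A80C0118C0A822C0A80C6F
```

    Reading the VLAN 12 string back confirms the intent: prefix 0 (the default route) via 192.168.12.1, then a /24 to 192.168.34.0 via 192.168.12.111, so the switch handles only the inter-VLAN leg.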

    Simple as that. This way:

    • All internet-bound traffic travels the usual path, with no extra hops.
    • All inter-VLAN traffic between VLAN 12 and VLAN 34 is handled by the Layer 3 switch.
    • Routing and switching efficiency improves, and (depending on the load) router resources are freed up.

    Discussion

    One of the questions left unanswered is why bother with Option 121 when you can set the route directly on the machine? The answer is iOS (Apple’s, not Cisco’s) and other mobile devices.

    In my environment, I have a bunch of iPhones and iPads in VLAN 12 that need to access a WebDAV file server (running over HTTPS and accessed by its FQDN) residing in VLAN 34. This is the traffic I want to offload from the router. I could create another vNIC on the WebDAV server and place it into VLAN 12, making the server effectively local for the iPhones and iPads. But then I’d need to set up split-brain DNS to return different IP addresses based on the VLAN.

    DHCP Option 121 provides a perfect solution to my challenge. It is more flexible than the deprecated(?) Option 33. I use a /32 subnet mask for the route towards VLAN 34 as I need access to only one server. But for the return traffic I need a /24, so Option 33 wouldn’t be an option (pun intended).

    This is it for now. We’re now using enterprise solutions such as Layer 3 routing and custom DHCP options in our homelab. Enterprise at home, indeed…

  • IPsec between Meraki and pfSense.

    Over the weekend, I visited my other location and set up a Meraki Z4. This also gave me an opportunity to configure a site-to-site IPsec VPN tunnel between this location and my main location, which runs a pfSense appliance. This tunnel will be used for off-site backup transfers.

    The process of setting up the site-to-site IPsec tunnel is fairly straightforward. I think it took me five times as long to write this post as to actually get it running. We’re going to start with the Meraki side and then shift gears to pfSense. There is information we’ll need to share between the two devices during the setup. So, let’s get going.

    Meraki

    First, go to your Meraki Dashboard network and click Security Appliance (or, in my case, Teleworker gateway) -> Configure -> Site-to-site VPN.

    There, scroll about half way, and, under the “IPsec VPN peers” section, click “+ Add a peer”.

    In the new opened pane:

    • Type in a “Name”. This name will be used to identify the peer, so choose the name that would make sense to you.
    • Select “IKE version” IKEv2 (it allows a bit more options compared to IKEv1).
    • In the “Public IP or Hostname” use the IP or the hostname of your remote site. In my case, I am using the fully qualified domain name of my pfSense site. Note that FQDN peering requires MX (or Z) appliance firmware 18.1 or higher.
    • Type in a “Local ID”. I prefer to use a private IP that is not used in my environment.
    • Type in a “Remote ID”. I use the same approach as with the Local ID.
    • Next, type in a “Shared secret”. This secret will also be used on our pfSense appliance, so make sure you copy it somewhere. Use a strong, randomly generated key. You won’t ever type it by hand, so the more complex the better.

    Scrolling down, select:

    • “Static” under “Routing”.
    • In the “Private subnets”, include the subnets the remote peer (pfSense, in my case) will be advertising to this location.
    • “Availability” – select the network this IPsec peer should be peering with.
    • Select options for “Tunnel monitoring”. I am not using anything at the moment, but it does require a Health check option to be set up.
    • Keep “Failover directly to internet” disabled. We’re only tunneling local subnets, so this option is irrelevant to us.

    Now, in the “IPsec policy”, select the “Secure” “Preset”. It has the highest settings the Meraki security appliance currently supports, and it makes setup easier. The settings for Phase 1 and Phase 2 are below. Take note, as we’ll need to use the same settings on our remote pfSense peer. When done, click “Save” and then the “Save Changes” button at the bottom of the page.

    At the bottom of the Site-to-site VPN page you can set the Site-to-site outbound firewall rules. These rules apply to traffic going across the IPsec tunnel. By default, Meraki uses Allow Any. I prefer to allow only what I need and also Deny Any right above the Allow Any “Default rule”.

    We’re pretty much done with the Meraki side. But, since this tunnel is between two residential locations, we have dynamic public IPs. Previously, I described how I manage dynamic public IPs with DynDNS. Meraki makes it super easy, as it manages the DynDNS records by default. Go to Security appliance (Teleworker gateway) -> Appliance status. There, copy the “Hostname” (ending with .dynamic-m[.]com). We’ll use it when we set up pfSense.

    pfSense

    By default, Meraki denies any incoming traffic. When you create an IPsec tunnel, it allows it through the firewall automatically. pfSense does the same. I’m not a fan of this, as I’d like more control over my firewall rules. So the first thing we’ll do is go to System -> Advanced -> Firewall & NAT. About half way down you’ll see the “Advanced Options” section. In this section, check “Disable all auto-added VPN rules” and click “Save” at the very bottom of the page.

    Now go to VPN -> IPsec. There click the “+ Add P1” green button. There:

    • Enter the “Description”. This is only for administrative purposes so choose something that would make sense to you.
    • Select “Key Exchange version” “IKEv2”.
    • In “Interface”, select the interface that is connected to the internet. It is “GATEWAY” in my example.
    • In the “Remote Gateway” type in the fully qualified domain name for your Meraki appliance. We copied it from the Appliance status page, it ends with .dynamic-m[.]com.
    • For the “Phase 1 Proposal (Authentication)” select “Mutual PSK” for the “Authentication Method”.
    • For “My identifier” and “Peer identifier” use the same IP addresses you used on the Meraki side (the Meraki identifier is the “Peer identifier” in this case).
    • Paste the super secret and complex key you used on the Meraki appliance into the “Pre-Shared Key” field.
    • For the “Phase 1 Proposal (Encryption Algorithm)”, we need to use the same settings as on our Meraki appliance.
    • Use “AES” for “Algorithm”.
    • Use “256 bits” for “Key length”.
    • Use “SHA256” for “Hash”.
    • Use “21 (nist ecp521)” for “DH group”.
    • Select “Life Time” of “28800”.

    In the “Advanced Options” select “None (Responder Only)” for “Child SA Start Action”. This forces the Meraki device to initiate the tunnel. You can leave everything else at default. Click the blue “Save” button.

    Now, in the “VPN / IPsec / Tunnels” expand the item we’ve just created by clicking on the “Show Phase 2 Entries” button and then click the green “+ Add P2” button. There:

    • Enter a description.
    • Select “Tunnel IPv4” for the “Mode”. This is the mode we’ll use for the static routing through our IPsec tunnel.
    • Under the “Networks” subsection, for “Local Network” select “Network” for the “Type” and enter the IP subnet that the pfSense appliance will be sharing through the tunnel.
    • Do the same thing for the “Remote Network” but enter the IP subnet that the Meraki device will be sharing.

    In the “Phase 2 Proposal (SA/Key Exchange)”, enter:

    • “ESP” for “Protocol”. This is what enables encryption of the IPsec tunnel.
    • “AES” and “256 bits” for “Encryption Algorithms”.
    • “SHA256” for “Hash Algorithms”.
    • “21 (nist ecp521)” for “PFS key group”.
    • Under the “Expiration and Replacement” select “14400” for “Life Time”. And then click the blue “Save” button.

    OK. We’re done with the tunnel setup. It should look something like this:

    But our VPN tunnel will not be established, because we disabled all auto-added VPN rules. So let’s change that. Go to Firewall -> Aliases -> IP and click the green “+ Add” button. Here we’ll create an alias for our Meraki device. Aliases in pfSense act sort of like groups. We can put multiple items into an alias and then create a firewall rule using it. If we need to make any changes, we can edit the alias and those changes will propagate into the firewall rules. For this alias:

    • Create a name following the naming requirements. We will use this name when we create our firewall rules.
    • For “Type” select “Host(s)”.
    • For “IP or FQDN” enter the fully qualified domain name for the Meraki appliance that we copied from the Appliance status page in the Meraki Dashboard.
    • Type in a description that will make sense to you if you need to figure out what you did six months down the road.
    • Click the blue “Save” button.

    Now, go to Firewall -> Rules -> GATEWAY (or whatever your internet connected interface is called). There create 2 rules: one for ISAKMP and another for NAT-T. The screenshots are below, but:

    • Select “Action” “Pass”.
    • For “Interface” select “GATEWAY” (your internet connected interface).
    • “Address Family” “IPv4”.
    • “Protocol” “UDP” (as IPsec uses UDP for ISAKMP and NAT-T).
    • Select “Address or Alias” under “Source”. Start typing the alias name we created and it should auto-populate with the name.
    • In the “Destination” select “GATEWAY address”, where “GATEWAY” is your internet connected interface.
    • Click the blue “Display Advanced” button in the “Destination” section.
    • Select “ISAKMP (500)” for both “From” and “To” port ranges under “Destination Port Range”.
    • Enter a description and click “Save”.
    • Repeat, but select “IPsec NAT-T (4500)” for the “Destination Port Range” to create a rule allowing NAT-T.
    • Save and click “Apply Changes”.

    Alright. Our tunnel should form now. But no traffic will flow through it. This is because, unlike Meraki, pfSense denies IPsec tunnel traffic by default. Let’s create a rule allowing it in. Go to Firewall -> Rules -> IPsec and create a new rule as follows:

    • “Action” – “Pass”.
    • “Interface” – “IPsec”.
    • “Address Family” – “IPv4”.
    • “Protocol” – “Any”.
    • “Source” – Meraki IP subnet, in our example it is “10.16.23.0/24”.
    • “Destination” – the subnet on pfSense that we’re setting up tunnel for, in this example “172.16.23.0/24”.
    • Enter the description, save, and apply changes.

    Now we should be good. On pfSense go to Status -> IPsec to confirm that the tunnel is up and running. To do the same in the Meraki Dashboard, go to Security Appliance (Teleworker gateway) -> VPN Status -> 1 IPsec peer. It should have a green circle next to the tunnel name.

    This is it. Not too complicated. Just make sure that the parameters match between the peers, that the proper IP subnets are advertised, and that the firewall rules allow communication. If the tunnel is not being established, take packet captures on the WAN interfaces of the Meraki and pfSense appliances. Look for unidirectional traffic and troubleshoot from there. But that is a different post…

  • Firewalls and DDNS, Part 2.

    As discussed in the Firewalls and DDNS post, DuckDNS can sometimes be slow. Additionally, some corporate DNS resolvers do not resolve DuckDNS domains, making them inaccessible. To solve this, I decided to use Cloudflare’s name servers to handle my domains. Contrary to the spirit of free open source software, domain registration is not free. You may be able to find deals on domain name registration, but they’re usually promotional and will eventually make you pay. I went with Cloudflare and their domain registration services for simplicity and a single control plane, since I had decided I’d be using their DNS.

    Creating a DDNS automation script

    Create a folder in your home folder named something like .ddns, like so: mkdir -p ~/.ddns. Then create and edit a file called variables in this .ddns folder with your favorite text editor. I prefer nano: nano ~/.ddns/variables.

    In the variables file, add your (you guessed it) variables that you got from Cloudflare.

    SUBDOMAIN="<subdomain.fdqn.tld>"
    KEY="<key>"
    ZONE_ID="<zone_id>"

    You need to protect this file. This file contains information that will allow changing your DNS records in your Cloudflare account.

    Next, create the update script ~/.ddns/ddns.sh with the contents shown below, then tighten the permissions:

    chmod 500 ~/.ddns
    cd ~/.ddns
    chmod 400 ~/.ddns/variables
    chmod 500 ~/.ddns/ddns.sh
    #!/bin/bash
    
    # This script updates the Cloudflare's A DNS records for your domain.
    # It requires the DNS edit token key and the DNS Zone ID.
    # The script uses curl for the API calls and awk to parse the responses.
    # Be sure to create the subdomain A DNS record in the Cloudflare portal with a dummy routable IP address first.
    # Create a file in the same folder where you place this script and name the file variables
    # Put these lines in to the file variables
    #SUBDOMAIN="<subdomain.fdqn.tld>"
    #KEY="<key>"
    #ZONE_ID="<zone_id>"
    # Make sure the user who will be executing this script can read the file
    
    # Function to validate IP address
    validate_ip() {
        local ip=$1
    
        # Regular expression to match valid IPv4 addresses
        local valid_ip_regex="^((25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.){3}(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])$"
    
        # Regular expressions to exclude non-routable IP addresses
        local non_routable_regexes=(
            "^0\.([0-9]{1,3}\.){2}[0-9]{1,3}$"
            "^10\.([0-9]{1,3}\.){2}[0-9]{1,3}$"
            "^100\.(6[4-9]|7[0-9]|1[0-1][0-9]|12[0-7])\.([0-9]{1,3}\.)[0-9]{1,3}$"
            "^127\.([0-9]{1,3}\.){2}[0-9]{1,3}$"
            "^169\.254\.([0-9]{1,3}\.)[0-9]{1,3}$"
            "^172\.(1[6-9]|2[0-9]|3[0-1])\.([0-9]{1,3}\.)[0-9]{1,3}$"
            "^192\.0\.0\.([0-9]{1,3})$"
            "^192\.0\.2\.([0-9]{1,3})$"
            "^192\.88\.99\.([0-9]{1,3})$"
            "^192\.168\.([0-9]{1,3}\.)[0-9]{1,3}$"
            "^198\.(1[8-9])\.([0-9]{1,3}\.)[0-9]{1,3}$"
            "^198\.51\.100\.([0-9]{1,3})$"
            "^203\.0\.113\.([0-9]{1,3})$"
            "^224\.([0-9]{1,3}\.){2}[0-9]{1,3}$"
            "^(24[0-9]|25[0-5])\.([0-9]{1,3}\.){2}[0-9]{1,3}$"
        )
    
        # Check if the IP address matches the valid IP regex
        if [[ $ip =~ $valid_ip_regex ]]; then
            # Check if the IP address matches any of the non-routable IP regexes
            for regex in "${non_routable_regexes[@]}"; do
                if [[ $ip =~ $regex ]]; then
                    echo "Invalid IP address: Non-routable IP address"
                    return 1
                fi
            done
            echo "$ip is a valid IP address"
            return 0
        else
            echo "Invalid IP address: Does not match IPv4 format"
            return 1
        fi
    }
    
    # Make sure that the ddns.sh and variables files are in the same directory
    
    source ./variables
    
    # Get old IP address
    OLD_IP=$(host $SUBDOMAIN | awk '/has address/ { print $4 }')
    if validate_ip "$OLD_IP"; then
      echo "Old IP for $SUBDOMAIN is $OLD_IP"
    else
      echo "Failed getting old IP"
      exit 1
    fi
    
    # Get new IP address
    NEW_IP=$(curl -s --connect-timeout 20 https://checkip.amazonaws.com)
    if validate_ip "$NEW_IP"; then
      echo "New IP for $SUBDOMAIN is $NEW_IP"
    else
      echo "Failed getting new IP"
      exit 1
    fi
    
    # Compare the IP addresses
    if [ "$NEW_IP" == "$OLD_IP" ]; then
        echo "IP is already correct"
    else
    
    # Get Subdomain DNS Record from Cloudflare
    DNS_RECORD_ID=$(curl -s --connect-timeout 20 --request GET \
        --url https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records \
        --header 'Content-Type: application/json' \
        --header "Authorization: Bearer $KEY" \
            | awk -v RS='{"' -F: '/^id/ && /'"$SUBDOMAIN"'/{print $2}' | tr -d '"' | sed 's/,.*//')
    echo "Got Record ID"
    
    # Update IP Address
    curl -s -o /dev/null --connect-timeout 20 --request PUT \
      --url https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$DNS_RECORD_ID \
      --header 'Content-Type: application/json' \
      --header "Authorization: Bearer $KEY" \
      --data "{\"type\":\"A\",\"name\":\"$SUBDOMAIN\",\"content\":\"$NEW_IP\",\"ttl\":1,\"proxied\":false}" 
    echo "IP address for $SUBDOMAIN is updated to $NEW_IP"
    
    fi
    
    exit 0

    This script queries the DNS record for your domain and compares the IP address with the one obtained from https://checkip.amazonaws.com. If they match, it does nothing. If they don’t match, the script updates the Cloudflare DNS record. The script uses curl for the API calls and awk to parse the responses.
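    The awk pipeline that extracts the record ID is admittedly terse. If Python happens to be on the box, the json module makes the same extraction much more readable — a sketch only, with the response trimmed down to the fields I assume Cloudflare returns (id, type, name):

```python
import json

def record_id_for(subdomain, response_text):
    # Pull the ID of the A record matching our subdomain out of the
    # /dns_records API response.
    data = json.loads(response_text)
    for record in data.get("result", []):
        if record.get("name") == subdomain and record.get("type") == "A":
            return record["id"]
    return None

# A trimmed-down response in the shape the API returns (fields assumed):
sample = '''{"result": [
  {"id": "abc123", "type": "A",   "name": "home.example.com"},
  {"id": "def456", "type": "TXT", "name": "home.example.com"}
]}'''
print(record_id_for("home.example.com", sample))  # abc123
```

    Filtering on the record type also protects you from grabbing a TXT or CNAME record that shares the name, which the awk one-liner would happily match.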

    Now create a cron job (with crontab -e command) to run this script every <X> minutes (adjust to taste). Change <user> to the username where you stored the script.

    */<X> * * * * /home/<user>/.ddns/ddns.sh >/dev/null 2>&1

    Obtaining the DNS token

    To get the Zone ID and the API key, go to your Cloudflare portal, select the domain you need. In the overview scroll down and select “Click to copy” underneath the Zone ID field and paste it into the variables file for the ZONE_ID variable. Then click “Get your API token” to create your API token.

    Now, click on the blue “Create Token” button.

    Next, click the “Use template” button next to the “Edit zone DNS”.

    There, edit the Token Name. A descriptive name makes it easier to keep track of your tokens if you create many. In “Zone Resources”, select “Specific zone” and select the domain you’re creating the token for. Then click the “Continue to summary” button.

    Not much to do here but to click the “Create Token” button.

    This is where you finally get your token. Make sure you copy it into the variables file for the KEY variable as you won’t be able to view it again and will need to recreate it.

    This is it. This is the method I use to track the IP addresses of my devices with dynamic public IPs. I then query the DNS record to get the IP and update the allow lists on my firewall (as described in Firewalls and DDNS). Of course, there’s a delay from DNS record propagation, plus the frequency at which you run the script, so it may not be fully suitable for mission critical applications. But if you’re running a mission critical application, you’re probably not relying on dynamic IPs and are using a static IP anyway…

  • The Magic of Meraki.

    Recently, I was fortunate to spend a few months working for Cisco, doing tech support for their Meraki products. As an avid self-hoster, I was a bit apprehensive about the idea of a cloud managed networking platform. But boy was I wrong.

    The whole experience was a lot of fun. I helped foreign ministries with their Meraki onboarding. I had the pleasure of assisting, with their WiFi troubles, an animation studio that played a great role in my life growing up. I troubleshot client VPN issues for my favorite watch brand, discovering an unexpected behavior, and worked with the product team to improve the customer experience.

    I don’t think anyone has doubts that Meraki’s hardware is solid. Meraki is a part of Cisco, they know how to build networking equipment. In many ways, they are Networking. But other vendors make solid networking equipment as well. And I would argue that Meraki’s secret sauce is not the hardware, it’s the Meraki Dashboard.

    The Meraki Dashboard is a management plane for Meraki equipment (duh). It allows for full management control of the equipment. In fact, there’s very little you can do on the Meraki devices locally: just set an IP address and get an SDB (support data bundle). And I experienced its power.

    As an “enterprise. at home” enthusiast, I wanted to lay my hands on a Meraki device, their MX security appliance, in particular. After hours of deliberations on which MX to get, I got a Z4 teleworker gateway. The Z-series appliances are also known as “baby MX”, so I think it was fitting.

    On the way home, sitting in the airport with nothing better to do, I decided to configure my yet-to-be-received teleworker gateway. I created a Meraki Dashboard organization, a network, and went ahead configuring it. I created VLANs, decided on the IP subnetting scheme, created SSIDs and firewall rules.

    A few days later, when I got my Z4 (every time I say Z4, I think of BMW, it’s really a shame that they’re discontinuing their Z4), I claimed it to my organization and assigned it to the network I created while at the airport. And that’s it. All I needed to do was to plug a network cable into the port labeled “Internet” and to provide power. The Z4 pulled the configuration from the dashboard and was up and running. I guess I still needed to plug it in, so not “true” zero-touch provisioning. Maybe 0.2-touch provisioning? Rounding down it would be zero-touch provisioning, good enough for me.

    This experience inspired me to explore the Meraki platform for homelab applications further. I’ll do a few posts on how it fits into my workflow, what works, what doesn’t, and the overall experience with the platform. We’ll probably start with device onboarding, then go into IPsec VPN and its magical sibling, Meraki Auto-VPN.

    OK, this post’s been long enough, I better wrap it up. Till next time…

  • Firewalls and DDNS.

    In the SSH post, I showed a command

    sudo ufw allow from AAA.BBB.CCC.DDD to any port ssh proto tcp

    to open up access to your Debian-based SSH server through ufw. In this tutorial we’ll be using 2 machines: the server, which we’re accessing using SSH, and our workstation. It’s pretty simple in the home LAN environment: replace AAA.BBB.CCC.DDD with your workstation’s IP address, like 192.168.1.201, or whatever it is, and you’re done. But what if the server is remote and your workstation’s home ISP only provides a dynamic IP address? That’s where DDNS comes in.

    DDNS, or Dynamic DNS, or DynDNS, is a method of updating the DNS record that points to your network. When your ISP changes your external IP address (when your router reboots, for example), a DDNS client can send the new address to a DDNS provider, which updates the DNS record. Some home routers have this functionality built in: they monitor your IP address and update DDNS as needed. But they usually work with a limited number of providers. There are also software clients that you can run on your system, ddclient, for example. And you can run one on (almost) any machine in your network, as your router has only one external IP address regardless of the number of internal devices it’s routing for. Let’s not think about situations with dual WAN, or where you restrict a machine to route all traffic through a VPN. If you’re doing that, you can probably figure out how to set DDNS up.

    But I don’t want to install additional packages and would rather achieve the same result with the help of Bash scripts. We’ll be setting up two scripts: one on the workstation that will keep the DNS record up to date, and another on the server that will look at the DNS record and update its firewall to allow SSH access only from that IP address. We need the second script because ufw does not work with domain names, only with IP addresses. So we need a way to resolve the domain name into an IP and tell ufw to allow that IP.

    If you’re going with the implementation on your router, use the services programmed in it. But I will be of no help to you as I don’t know what router you have. Instead, I’ll focus on how to update your DNS record from a Linux machine.

    So how do we set DDNS up? First, we need an FQDN, or a Fully Qualified Domain Name, like itsfreeatleast.blog (or itsfreeatleast.com). Since we want it for free, I suggest we use DuckDNS. It’s a free DDNS service that allows you to have up to 5 domains (such as example.duckdns.org). DuckDNS even gives you specific instructions on how to create a script that automatically updates your DNS record here.

    Important fields to note on the DuckDNS page (after you login) are the domain and the token, example and abcdef0-1234-5678-9101-112131fedcba, respectively.

    Now let’s create a script to update the DuckDNS record. We’ll save the script somewhere your user has access to and name it something like duckdnsupdate.sh. You don’t need to be root to run this script, so the home directory is fine. Adjust the values for the domain <example> and the token <abcdef0-1234-5678-9101-112131fedcba> to the ones shown for you. Use your favorite text editor, I prefer nano.

    nano $HOME/duckdnsupdate.sh

    Copy and paste the following script:

    #!/bin/bash
    
    DOMAIN="<example>"
    TOKEN="<abcdef0-1234-5678-9101-112131fedcba>"
    
    OLD_IP=$(/usr/bin/host $DOMAIN.duckdns.org | /usr/bin/awk '/has address/ { print $4 }')
    if [[ "$OLD_IP" =~ ^(([1-9]?[0-9]|1[0-9][0-9]|2([0-4][0-9]|5[0-5]))\.){3}([1-9]?[0-9]|1[0-9][0-9]|2([0-4][0-9]|5[0-5]))$ ]]; then
        :
    else
      /usr/bin/echo "Failed getting Old IP. Using a dummy address."
      OLD_IP="10.111.111.111"
    fi
    /usr/bin/echo "Old IP is $OLD_IP"
    
    NEW_IP=$(/usr/bin/curl -s --connect-timeout 20 https://checkip.amazonaws.com)
    if [[ "$NEW_IP" =~ ^(([1-9]?[0-9]|1[0-9][0-9]|2([0-4][0-9]|5[0-5]))\.){3}([1-9]?[0-9]|1[0-9][0-9]|2([0-4][0-9]|5[0-5]))$ ]]; then
      /usr/bin/echo "New IP is $NEW_IP"
    else
      /usr/bin/echo "Failed getting new IP"
      exit 1
    fi
    
    if [ "$NEW_IP" = "$OLD_IP" ]; then
      /usr/bin/echo "IP is already correct"
    else
    /usr/bin/mkdir -p /tmp/.duckdns
    /usr/bin/echo url="https://www.duckdns.org/update?domains=$DOMAIN&token=$TOKEN&ip=" | /usr/bin/curl -s --connect-timeout 20 -k -o /tmp/.duckdns/duck.log -K - && /usr/bin/echo "New IP updated!" || /usr/bin/echo "New IP Update Failed!"
    fi
    
    exit 0

    Exit with CTRL + X, Y to save changes, and Enter.

    Now, let’s break it down.

    First, we query a DNS server for the IP address of our domain and assign it to a variable OLD_IP with

    OLD_IP=$(/usr/bin/host $DOMAIN.duckdns.org | /usr/bin/awk '/has address/ { print $4 }')

    Then we check whether the value looks like a legit IP address:

    if [[ "$OLD_IP" =~ ^(([1-9]?[0-9]|1[0-9][0-9]|2([0-4][0-9]|5[0-5]))\.){3}([1-9]?[0-9]|1[0-9][0-9]|2([0-4][0-9]|5[0-5]))$ ]];

    If it is fine, we continue; if not (it timed out, or the connection is down, for example), we use a “dummy” IP address. It can be anything.
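    The regex above only checks that the value is shaped like a dotted quad. If Python is available on the workstation, the stdlib ipaddress module can check both the format and public routability — a sketch of an alternative check, not part of the Bash script (note that the dummy 10.111.111.111 above is deliberately non-routable, so it would fail this stricter test):

```python
import ipaddress

def looks_routable(text):
    # True only for a syntactically valid, publicly routable IPv4 address.
    try:
        ip = ipaddress.IPv4Address(text.strip())
    except ValueError:       # not a dotted quad at all
        return False
    return ip.is_global      # False for RFC 1918, loopback, link-local, etc.

print(looks_routable("8.8.8.8"))          # True
print(looks_routable("192.168.1.201"))    # False
print(looks_routable("999.1.1.1"))        # False
```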

    Next, we get our external IP address. For that we query https://checkip.amazonaws.com. We use the curl command and we also want to include a connection timeout (20 seconds) and the same validity check as with the OLD_IP.

    NEW_IP=$(/usr/bin/curl -s --connect-timeout 20 https://checkip.amazonaws.com)
    if [[ "$NEW_IP" =~ ^(([1-9]?[0-9]|1[0-9][0-9]|2([0-4][0-9]|5[0-5]))\.){3}([1-9]?[0-9]|1[0-9][0-9]|2([0-4][0-9]|5[0-5]))$ ]];

    If everything goes fine, we assign the IP to a variable NEW_IP, if not, we exit the script.

    Then, we compare the OLD_IP (the one the DNS knows) with the NEW_IP (our current external IP). If they match we print "IP is already correct" on the screen and exit. If they don’t match, we send our new IP to DuckDNS and update the record for our domain like so:

    /usr/bin/echo url="https://www.duckdns.org/update?domains=$DOMAIN&token=$TOKEN&ip=" | /usr/bin/curl -s --connect-timeout 20 -k -o /tmp/.duckdns/duck.log -K - && /usr/bin/echo "New IP updated!" || /usr/bin/echo "New IP Update Failed!"

    This command includes a 20 second timeout and prints out whether the IP update was successful or not. It also places the status update in the file /tmp/.duckdns/duck.log.

    Next, we’ll make sure only our user owns the file (the chown command) and we’ll make the file executable (the chmod command). I also like to tighten down the permissions just a bit.

    chown $(id -un):$(id -gn) $HOME/duckdnsupdate.sh
    chmod 500 $HOME/duckdnsupdate.sh

    Now let’s automate it. We’ll periodically check what the DNS server thinks our IP is, compare it with ours, and update as needed. I think a 15-minute interval is fine, but you can use whatever value you find acceptable. Updating too frequently places additional burden on the DuckDNS servers, which is not a nice thing to do. Updating too infrequently will make you wait longer when your external IP changes. We’ll use the good old crontab for this. You could use systemd timers, but crontab is fine here.

    Use the following command to edit your crontab. It may ask you which text editor to use, again, I prefer nano.

    crontab -e

    Add the following lines to the bottom (make sure you change <username> to your user; in other words, make sure you use the absolute path to the script duckdnsupdate.sh).

    @reboot /usr/bin/sleep 5 && /home/<username>/duckdnsupdate.sh > /dev/null 2>&1
    */15 * * * * /home/<username>/duckdnsupdate.sh > /dev/null 2>&1

    Save and exit with CTRL + X, Y, and Enter.

    Now, let me explain. The first line tells cron to wait 5 seconds after a reboot and then execute the script. The wait just gives the workstation time to get an internet connection; you may adjust this value. The second line calls the script every 15 minutes. The crontab syntax takes some getting used to, so you can use a generator. The > /dev/null 2>&1 part suppresses any text output, as we don’t need it.
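    As a quick reference, the five time fields in a crontab entry are, in order:

```shell
# ┌───────────── minute (0-59)
# │ ┌─────────── hour (0-23)
# │ │ ┌───────── day of month (1-31)
# │ │ │ ┌─────── month (1-12)
# │ │ │ │ ┌───── day of week (0-6, Sunday = 0)
# │ │ │ │ │
# */15 * * * *    runs at every 15th minute of every hour, every day
```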

    I prefer to use absolute paths in crontab. It is better for security and here’s why.

    If you run the command

    echo $PATH

    it will show you everything that is contained in the environment variable PATH, like so:

    /usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games

    When you call a command, let’s say ls, Linux looks through the paths in the environment variable PATH for a binary called ls. It does so folder by folder, in order from left to right. On my system, and probably on yours too, ls is located at /usr/bin/ls. You can verify this by running which ls. So, if a malicious actor is able to write to the folder /usr/local/bin and place a malicious file named ls in there, the next time you call the ls command, the system will execute /usr/local/bin/ls instead of /usr/bin/ls. The malicious actor can even add folders to the beginning of the PATH variable and store their malicious binaries there. This can lead to privilege escalation and a whole bunch of heartburn. So if you don’t use absolute paths and a malicious actor performs this type of attack, the next time your crontab runs, it will execute the malicious binary. Scary stuff…
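    Here is a harmless way to demonstrate the attack to yourself, using a throwaway directory and an "impostor" script instead of anything actually malicious:

```shell
#!/bin/bash
# Harmless demonstration of a PATH-order hijack
DEMO=$(mktemp -d)
printf '#!/bin/bash\necho "I am the impostor"\n' > "$DEMO/ls"
chmod +x "$DEMO/ls"

PATH="$DEMO:$PATH"   # attacker-style: prepend a directory to PATH
command -v ls        # now resolves to the impostor, not /usr/bin/ls
ls                   # prints "I am the impostor"

rm -rf "$DEMO"       # clean up the throwaway directory
```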

    OK, we’re done with our workstation. It now updates the DNS record for example.duckdns.org with our IP address and periodically checks for changes.

    Now, let’s head to our server and allow SSH only from our IP address. The steps are similar to the workstation setup: we’ll create a script file and add it to crontab. We will need to run this script as root, though, because only root can update the firewall rules. OK, let’s go.

    sudo nano /root/ufwallow.sh

    Adjust the hostname and the SSH port (default 22) to match your setup. The log file location can stay as is.

    #!/bin/bash
    HOSTNAME=<example>.duckdns.org
    LOGFILE=/root/ufwallowlog
    SSH_Port=<22>
    
    Current_IP=$(/usr/bin/host $HOSTNAME | /usr/bin/awk '/has address/ { print $4 }')
    if [[ "$Current_IP" =~ ^(([1-9]?[0-9]|1[0-9][0-9]|2([0-4][0-9]|5[0-5]))\.){3}([1-9]?[0-9]|1[0-9][0-9]|2([0-4][0-9]|5[0-5]))$ ]]; then
      /usr/bin/echo "IP for $HOSTNAME is $Current_IP"
    else
      /usr/bin/echo "Failed getting IP"
      exit 1
    fi
    
    if [ ! -f $LOGFILE ]; then
        /usr/sbin/ufw allow from $Current_IP to any port $SSH_Port proto tcp
        /usr/bin/echo $Current_IP > $LOGFILE
    else
    
        Old_IP=$(/usr/bin/cat $LOGFILE)
        if [ "$Current_IP" = "$Old_IP" ] ; then
            /usr/bin/echo "IP address has not changed"
        else
            /usr/sbin/ufw allow from $Current_IP to any port $SSH_Port proto tcp
            /usr/sbin/ufw delete allow from $Old_IP to any port $SSH_Port proto tcp
            /usr/bin/echo $Current_IP > $LOGFILE
            /usr/bin/echo "ufw has been updated"
        fi
    fi

    Exit with CTRL + X, Y to save changes, and Enter.

    Here’s the breakdown:

    Current_IP=$(/usr/bin/host $HOSTNAME | /usr/bin/awk '/has address/ { print $4 }')
    if [[ "$Current_IP" =~ ^(([1-9]?[0-9]|1[0-9][0-9]|2([0-4][0-9]|5[0-5]))\.){3}([1-9]?[0-9]|1[0-9][0-9]|2([0-4][0-9]|5[0-5]))$ ]];

    This section gets the current IP address of our workstation using DNS and checks that the result looks like a valid IPv4 address. The value is assigned to Current_IP; if the check fails, the script exits.

    Then it checks whether the file /root/ufwallowlog exists. Actually, it checks the inverse of that: the if statement is true if the file does not exist. This file keeps track of which IP ufw allows SSH access from. If the file does not exist, we just add the rule to ufw.

    if [ ! -f $LOGFILE ]; then
        /usr/sbin/ufw allow from $Current_IP to any port $SSH_Port proto tcp
        /usr/bin/echo $Current_IP > $LOGFILE

    If it does exist, we read the value from it and compare it to the Current_IP address.

        Old_IP=$(/usr/bin/cat $LOGFILE)
        if [ "$Current_IP" = "$Old_IP" ] ;

    If they’re the same, the script just tells us so: "IP address has not changed". If it has changed, the script allows access from Current_IP, deletes the rule allowing Old_IP, and then overwrites the log file with Current_IP for use next time.

    /usr/sbin/ufw allow from $Current_IP to any port $SSH_Port proto tcp
    /usr/sbin/ufw delete allow from $Old_IP to any port $SSH_Port proto tcp
    /usr/bin/echo $Current_IP > $LOGFILE
    /usr/bin/echo "ufw has been updated"
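    The two test expressions doing the heavy lifting in this script are -f (does the file exist as a regular file?) and = (string equality). A quick standalone illustration:

```shell
#!/bin/bash
# [ -f FILE ] is true when FILE exists as a regular file
TMP=$(mktemp)
[ -f "$TMP" ] && echo "log file exists"
rm -f "$TMP"
[ ! -f "$TMP" ] && echo "log file missing: first run"

# [ "$a" = "$b" ] is a plain string comparison, as used for the two IPs
Old_IP="203.0.113.7"
Current_IP="203.0.113.7"
[ "$Current_IP" = "$Old_IP" ] && echo "IP address has not changed"
```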

    After creating the file we’ll make sure it’s owned by root and only root can read and execute it:

    sudo chown root:root /root/ufwallow.sh
    sudo chmod 500 /root/ufwallow.sh

    It’s probably a good idea to make sure only root can write to the log file:

    sudo chown root:root /root/ufwallowlog
    sudo chmod 600 /root/ufwallowlog

    Now, let’s add instructions to root’s crontab to run this script periodically:

    sudo crontab -e

    add to the end:

    @reboot /usr/bin/sleep 5 && /root/ufwallow.sh > /dev/null 2>&1
    */15 * * * * /root/ufwallow.sh > /dev/null 2>&1

    Again, you can adjust the parameters as you see fit.

    And this is it…

    OK. It’s a lot of words. But now we have a fully qualified domain name that our workstation keeps updated with our external IP address. Our remote SSH server periodically queries a DNS server to resolve the IP of our workstation and makes sure that IP can access the SSH server, updating the firewall rule as necessary. What I noticed, though, is that DuckDNS is sometimes slow to resolve domains into IP addresses; this happens on some corporate networks in particular. Maybe next time we’ll look at the alternatives. But hey, at least it’s free…