
Automatically Mounting NFS Volumes to a Linux Instance


A few weeks ago I ran into what I thought at the time was a very unique use case while migrating a lot of VMs in my home lab to Kubernetes: automatically mounting an NFS volume to a Linux machine or Docker container.

This was so I could migrate the backing stores of InfluxDB, MySQL, and Grafana to my NAS instead of the local filesystem of the device each was running on. I use NFS quite a bit on my network for sharing files across multiple devices due to its simplicity and broad device support.

After getting this set up in my home lab, I started to realize that this was not such a unique use case after all, given how widely NFS is used. It could be used in a cloud environment to mount an on-premises NFS share to pull data, to give processes access to an NFS share for reading or writing data, to mount users’ home directories from NFS, and countless other use cases.

The Problem with and the Key to Making Auto-Mounting Work

I first went the route I knew best, fstab, which I have used on and off for the past few decades, but discovered very quickly that it wasn’t mounting the NFS share early enough for the database daemons to pick it up, and this was causing some major issues.

This is when I came across a Linux package called autofs. Getting it running took a lot of experimentation, as many guides online were out of date or missing very important details. This guide gives you the working setup so you don’t have to spend half a day figuring it out, because time is something those of us in the industry never have enough of.

At its core, autofs is just a daemon that automatically mounts and unmounts shares in the background as they are needed. Unlike fstab, it mounts on demand, so it works during boot without having to worry about the order in which daemons start up.


Prerequisites

For the sake of simplicity, in this exercise I am going to assume you already have an NFS server set up on your NAS or a Linux machine, or that you are using something like Google Cloud’s Filestore to act as a NAS.

Make sure you have the full NFS share path for each share you wish to add. These are sometimes not what you expect, so double-check them; on Synology NASes, for instance, the volume is prefixed onto the path, such as /volume1/share_path.

I will be using Debian-derivative commands, so this should work with Debian, Ubuntu, Kali, etc., and I will add the appropriate Red Hat-derivative command in each section so those of you running RHEL or CentOS won’t have to go translating.

Please note that I will not be covering NFS security practices in this guide, as that is a very large topic that would add a level of complexity that could very well double the length of this guide. It IS very important, don’t get me wrong, and I recommend you read up on it and integrate it into this setup in the manner that best suits your organization once you have it running. If you wish to know more, I would recommend starting with this great article by Red Hat covering the basics of it here.


The Process

  1. Install the autofs package by running the following commands for your distribution:
    Ubuntu: sudo apt -y install nfs-common autofs
    Red Hat: sudo yum -y install nfs-utils autofs
  2. Open up the file /etc/auto.master in your favorite editor.
  3. Scroll to the end of the file and add a line such as this for each mount you would like to add changing the word share in auto.share to your preferred name for the share:
    /- /etc/auto.share -nosuid,noowners
  4. Next, create each map file you referenced in the previous step. These all live in /etc and follow the format auto.[share_name], so for each auto.share line you added above, open that file in your favorite text editor to create it. Add the following line, substituting in your share name (my personal preference is to keep all mounts under /mnt, but you can modify that to your liking), server name, and share path:
    /mnt/[share_name] -fstype=nfs,user,nolock,nosuid,rw [server_name]:[share_path]
    (If you wish the share to be read-only rather than read-write, change the rw above to ro.)
  5. Once this has been completed, restart the autofs service by running the following command for your distribution:
    Ubuntu: sudo service autofs restart
    Red Hat: sudo systemctl restart autofs
  6. Next, make sure the service started properly and that there were no syntax, network, or other errors during startup. Run the following command to see the status:
    Ubuntu: sudo service autofs status
    Red Hat: sudo systemctl status autofs
  7. The output from the previous step should tell you the status is running and all is well. If not, the output will show the last few lines of the logs to assist you in debugging the issue. I have also added a section at the end with some basic debugging steps.
  8. At this point, you can cd into the directories you specified in the files above to access the shares. Note that you won’t have to create those directories; the autofs daemon creates them for you automatically.
  9. One last verification you can do is run the mount command, which lists all mounts on the instance. An easy way to find the ones created by autofs is to run mount | grep autofs.
  10. From here, the mounts will load automatically on the system and reattach when needed.
  11. At this point, I would highly recommend looking into NFS security and implementing the method that best fits your usage to secure these mounts.
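As a consolidated sketch of steps 2–4 above, the following writes out an example auto.master entry and its matching map file. The share name backups, server nas.local, and path /volume1/backups are placeholders for illustration, and the files are written to a temporary directory rather than /etc so the sketch can be run without root:

```shell
# Illustrative only: writes example autofs config files to a temp
# directory. In a real setup these contents go in /etc.
ETC=$(mktemp -d)

# One line per share at the end of auto.master. The /- key means the
# map file itself specifies absolute mount points.
cat > "$ETC/auto.master" <<'EOF'
/- /etc/auto.backups -nosuid,noowners
EOF

# The map file: local mount point, mount options, then server:path.
cat > "$ETC/auto.backups" <<'EOF'
/mnt/backups -fstype=nfs,user,nolock,nosuid,rw nas.local:/volume1/backups
EOF

cat "$ETC/auto.master" "$ETC/auto.backups"
```

After copying the equivalent files into /etc and restarting autofs, accessing /mnt/backups would trigger the mount on demand.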

Debugging Common Issues

During the course of learning this, I hit a few snags that I want to list out, along with how I debugged them.

The first one I hit was the daemon throwing all sorts of errors about not being able to connect to NFS. It turned out that Synology prepends the volume number to the share name, which I was not aware of at the time. I diagnosed this by manually mounting the NFS share into a folder in my home directory with the command mkdir tmp_mnt && sudo mount -v -r -o user,nolock,nosuid [server_name]:[share_path] tmp_mnt, which mounts the share to a local directory and gives you either a success or an error stating what is going on. To unmount it, run sudo umount tmp_mnt.

Another error I hit was with the server not being accessible by its hostname, which turned out to be an internal DNS issue. I diagnosed this by confirming I could reach the share by its IP address and then running nslookup hostname, which showed the hostname could not be resolved to an IP address. Note that for this you may need to install the bind-utils (for Red Hat, sudo yum -y install bind-utils) or dnsutils (for Debian, sudo apt install -y dnsutils) package.

The last issue I had is Kubernetes-specific: from any pod in the cluster, I was unable to access any external service, including NFS shares, that lived outside the cluster. It turned out the pods could not resolve the hostname because the cluster had no knowledge of my internal DNS server living outside the cluster. The theory behind all of this is well outside the bounds of this guide, so I will simply give a solution and a link for reading more on it.

  1. Edit the CoreDNS configmap using the following command kubectl edit configmap coredns -n kube-system.
  2. This will load a text editor with the YAML for CoreDNS in it; inside is a section called Corefile containing a block that starts with .:53.
  3. Determine your internal domain. This depends on what your internal DNS server is set up for; if you do not have an internal DNS server, just use local.
  4. Determine your DNS server’s IP address. If you do not have one then this will more than likely be the IP address of your router.
  5. Add the following block below that on a new line, substituting in your information (make sure you indent with spaces, not tabs!), then save and exit the editor:
[domain_name]:53 {
    errors
    cache 30
    forward . [dns_server]
}
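Put together, and assuming a hypothetical internal domain home.local with a DNS server at 192.168.1.1 (both placeholders), the Corefile ends up looking roughly like this; the directives inside the default .:53 block vary by cluster and should be left exactly as you found them:

```
.:53 {
    # existing cluster directives, left unchanged
}
home.local:53 {
    errors
    cache 30
    forward . 192.168.1.1
}
```

CoreDNS will then forward any query ending in home.local to 192.168.1.1 while continuing to serve cluster DNS from the default block.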

More information on why this works can be found here.
