Category Archives: Homelab

Updates about my homelab

One inch patch cables

Continuing on my slow and steady quest to have an organized networking closet, I recently finished wiring up the patch panel on my 6U “wall-mount” networking rack that’s sitting on top of a pair of IKEA LACK side tables in my office closet. (I used the Cable Matters 1U Cat6 keystone patch panel, which can be found here on Amazon). Previously, all of my apartment’s ethernet cables were somehow routed into the closet and terminated at the front of my network switch. It didn’t take long – probably only a few minutes of browsing r/homelab – for me to realize that this was not nearly the most beautiful way to organize my networking closet. So I set forth on the journey to use a patch panel…

To change from direct ethernet cables to a patch panel, I needed to get:

  • Several keystone connectors. The patch panel I got is a “keystone patch panel” which requires ethernet cables to be terminated with a keystone jack before being inserted into the patch panel. (I got these “slim” Cable Matters ones)
  • Several short ethernet cables which I could use to connect the patch panel to my switch. Sure, I could manually create 16 short cables out of a longer cable, but that sounds like a lot of work. (I ended up buying 4 5-packs of Cable Matters CAT6 cable which can be found on Amazon here)
  • Some way to lengthen my existing ethernet cables without replacing the entire cable. For example, the cable which goes to my desk is the perfect length to reach the front of the switch – but too short to reach around to the back of the patch panel. Instead of replacing the entire cable, I’d like an easy way to join two cables together. (I ended up getting some Cable Matters “keystone couplers” which can be found here on Amazon)

In all, re-routing the cables didn’t end up being that much work. Using the keystone couplers to lengthen cables was easy. But I’m still not very good at the process of keystone-terminating ethernet cables: each new termination, with its stripping, untwisting, and punching down, took 30 minutes. The whole process took the better part of two days. Despite not testing my cables along the way, I somehow managed a 100% success rate.

The mess behind my patch panel. The patch panel has “ziptie connectors” that I should probably make use of to clean this up.

Unfortunately, two of my ethernet cables were special “slim” cables which I did not want to strip and attempt to wire into a keystone. Luckily, the keystone couplers I purchased also functioned as a poor-man’s keystone jack, so I just used those for two cables.

After trying to plan in advance and pre-label my patch panel, I was able to connect the 16 keystone-terminated cables into the rear of the patch panel. I’m not sure if I did it entirely correctly because it still looks a little messy back there – but I did it. I was finally ready to make my patch panel look like all the beautiful r/homelab posts. I used 1 foot ethernet cables to connect the patch panel to my switch. This took some re-working of my already labeled patch panel to get the aesthetics just right. The PoE ports on my switch are on the left, so I needed to make sure that all PoE devices were terminated on the left side of the patch panel to avoid crossing patch cables.

The homelab rack after making use of the patch panel – much cleaner!

In all, the rack now looks pretty neat… but there are still things I’d like to improve. For example, the 1 foot patch cables seem a little too long for the switch ports which are directly below the patch panel. Additionally, this doesn’t actually seem to offer much of a maintainability benefit. For example, if I needed to introduce a new device to my home network, I would need to introduce two new cables instead of just one – and if I wanted to maintain grouping on the patch panel (living room ports, PoE ports, etc), then I might have to shift over all the patch panel ports to make room. Indeed, the patch panel seems optimized for frequently changing network equipment while the actual ethernet runs remain unchanged – maybe useful in datacenters, but not so much in homelabs.

But overall, I’m now proud to show off my network rack and I had fun tweaking. Now all I need is a rackmount UPS, an air conditioning unit dedicated to my networking closet, and to switch my 3 tower servers to rackmount servers. Oh, I also need to upgrade my network to 10 Gig. The fun never ends…

2023 Homelab Layering

Homelab Overview

My homelab architecture as of October 2022

As we go into 2023, I thought that I would share the layering of my homelab – which I have spent admittedly far too much time on. In general, I spend a lot of time dealing with complex systems during my dayjob. When I’m at home, I try to keep my systems as simple as possible so that I don’t have to spend hours re-learning tooling and debugging when things go wrong. Even if this costs some extra money, it’s worth it because it saves time.

My “homelab” is split into four layers: workstations (actual computers on desks), compute (servers which run applications), storage (a server which hosts disks), and network (routers and switches).

If I try to get more rigorous about the definitions, I arrive at:

  • Workstations are the client-side interfaces to storage/compute which also have local storage and compute to host games and other local processing (eg photo editing, video editing, word processing, etc). These workstations are backed up to the Storage layer regularly.
  • Compute is the server-side hosting infrastructure. All applications run on “compute”. The compute should probably be using the storage layer as its primary storage, but due to speed considerations (game servers), the compute layer uses its own local storage and backs up regularly to the storage layer.
  • Storage is the server-side storage infrastructure. All storage applications (NFS, SMB) are hosted on storage as well as some other applications which should probably be moved to compute, but are currently on storage due to a lack of research (eg Synology Photos, Synology Drive).
  • Network is the network backbone which connects the workstations/compute/storage and routes to the Internet. My network currently supports 1 Gbps.

In my homelab, I’ve decided to split each layer into separate machines just to make my life easier. Workstations are physical computers on desks (eg Minibox or brandon-meshify), Compute is a single machine in my closet, Storage is a combination of a Synology 920+ (for usage) and a Synology 220+ (for onsite backups of the 920+), and Network is a combination of Unifi hardware (Dream Machine Pro, Switch 16, 2 APs).

It’s definitely possible to combine this layering – I could imagine a server which performs both compute and storage or maybe a mega-server which performs compute and storage as well as networking. But for my own sanity and for some extra reliability, I’m choosing to host each layer separately.

Dependencies Between Layers

Furthermore, each layer can only depend on the layers below it.

  • The network is the most critical layer and every other layer depends on the network. In cases where the network must depend on other layers, workarounds must be present/documented.
  • Storage depends on the network, but otherwise has no dependencies. If storage is down, we can expect compute and workstations to not operate as expected.
  • Compute depends on storage and network; however, since the compute layer is by definition where applications should be hosted, there are a few instances of applications which lower layers might depend on.
  • Workstations are the highest layer and depend on all other layers in the stack.

Again, I have these rules to keep the architecture simple for my own sanity. If my NAS goes down, for example, I can expect basically everything to break (except the network!). I’m okay with that because creating workarounds is just too complicated (but definitely something I would do in my dayjob).

As I mentioned in the bullet points, there are some exceptions.

  • The compute layer runs pi-hole to perform DNS-level adblocking for the entire network. For this to work, the IP address of pi-hole is advertised as the primary DNS server when connecting to my home network. Unfortunately, this means that if the compute layer is down, clients won’t be able to resolve IP addresses; I need to manually adjust my router settings and change the advertised DNS server (a quick check for this failure mode is sketched after this list).
  • The compute layer runs Wireguard to allow me to connect to my network from the outside world. However, if the compute layer is down then Wireguard goes down which means that I won’t be able to repair the compute layer from a remote connection. As a backup, I also have a L2TP VPN enabled on my router. Ideally, my router would be running a Wireguard server since Wireguard is awesome – but the Unifi Dream Machine Pro doesn’t support that natively.
  • My brain depends on the compute layer because that’s where I host wikijs. If the compute layer goes down, my wiki becomes inaccessible (that’s also where I have wiki pages telling me how to fix things). To mitigate this, I use wikijs’s built in functionality to sync articles to disk and I sync them directly to the storage layer. If compute goes down, I should be able to access the articles directly from storage (although probably with broken hyperlinks).
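For what it’s worth, a quick check along these lines (the pi-hole address and fallback resolver below are placeholders, not my actual config) tells a workstation whether pi-hole is the culprit when name resolution breaks:

# is pi-hole (placeholder address 192.168.1.10) still answering DNS queries?
if ! dig @192.168.1.10 example.com +short +time=2 +tries=1 > /dev/null; then
  echo "pi-hole is unreachable: switch the router's advertised DNS to 1.1.1.1 until compute is back up"
fi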

My Fears

Overall, I’m happy with this homelab setup and it’s been serving me well for the past year. I’m a pretty basic homelabber and my homelab isn’t even close to the complex setups I see on YouTube and Reddit. My homelab is running a few game servers (Valheim, Ark, Minecraft), media hosting (Jellyfin), and some utilities (pi-hole, wikijs), all under Proxmox. This is not nearly as crazy as the folks who have tons of security cameras, the *arr stack to automagically and totally legally grab media on demand, or folks who use their homelab to perform their actual compute-heavy dayjobs.

That said, I’m still worried about the lack of general high-availability in my homelab. For example: my single UPS for all layers could fail during a power outage, my single compute host could die, or any piece of my networking equipment could die. Any of these issues would knock out my homelab for several days while I wait for replacement hardware to arrive.

But then again, that’s probably okay – having multiple compute nodes or multiple network nodes is a huge added expense with perhaps minimal upsides. After all, I’m not running 24/7 services for customers – it’s just me and my family.

Definitely a work in progress…

Using a Synology NAS: 1 year later

My Synology 920+ and custom-built “compute”

My Long Relationship with Dropbox

For basically as long as I’ve been doing my own software development, I’ve used “the cloud” to store a lot of my data. Specifically, I used Dropbox with the “packrat” option which backed up all of my data to the cloud and even kept track of per-file history. This worked very well – I stored all of my important information in my dropbox folder and when I got a new PC I would just sync from dropbox. When I was away from the PC, I could always access my important documents through the Dropbox App. Dropbox even gave me a reason to avoid learning how to use proper version control because I could just write code directly in Dropbox and let Packrat keep track of the rest.

The list of Dropbox pros is long – but after a decade of using it, I started to run into some problems:

  • Removal of the “public” folder: For the first 5 years of using Dropbox, I made heavy use of the “Public” folder. This folder would allow you to share files with anyone on the internet. I personally used it as a sort of FTP server to host all of my software and updates. Once the public folder was removed, I migrated to a proper FTP server which was much harder to use and diminished the value of Dropbox’s offering.
  • Cost: With Packrat, I was paying $15 per month. In total, I’ve paid ~$1,800 to Dropbox to store my files for the past decade. This is a lot of money – and it’s always been worth it for me to not have to manage things myself, but it’s something to consider.
  • Proliferation of Cloud Services: There are many cloud services available – and all seem to specialize in different things. Google Drive is great for photos and google docs, OneDrive is great for word documents, and Dropbox is great for everything else. Using all of these services meant that my files were never in one place. This isn’t a Dropbox problem – this is more of a me problem – but it was a problem nonetheless.
  • Security: Finally, Dropbox is nice for things that aren’t very private, but I found myself storing passwords and other important information in my Dropbox for lack of a more secure solution. Not good.
The cost of dropbox: I don’t have complete billing history, but I’ve definitely paid more than $1,800 in total

The Synology 920+

The issues with Dropbox nagged at me for a while, but I didn’t do anything about it until Black Friday of 2021 when I purchased a Synology 920+ with 4 Seagate Ironwolf 4TB drives on a whim. I didn’t have a plan – but I had the vague idea that I’d get rid of my Dropbox subscription entirely.

And that’s exactly what I did. The software that comes with the NAS (Synology DSM) is pretty nice, so I was able to quickly set up RAID on my drives with single-drive fault tolerance and begin migrating: Google Photos moved to Synology Photos, my media moved off of my desktop drive and onto the Synology (to be consumed with Jellyfin), and my Google Drive and OneDrive files came over as well.

The migration went smoothly, but was basically an archaeological dig through all of my old stuff. I found old screenshots (embarrassing), game saves (impressive), and plenty of old schoolwork that I didn’t care about, scattered across the 12 hard drives I had lying around.

I haven’t written a proper full review of the 920+, but I’ve been pretty happy with it. The stock fans are a little noisy (fixable, but “voids the warranty”), the stock RAM is not quite enough (fixable, but “voids the warranty” if you buy RAM that’s not Synology branded), and the plastic case leaves a bit to be desired. But functionality-wise, I’m happy with it, and I’m also happy that I got an “out of the box” software experience and have no need to go digging for other software to manage my photos and files.

Once the migration was complete, I was able to cancel my Dropbox subscription. In all, I paid $1,115 for the hardware which is quite the lump sum (that would have paid for 6 years of Dropbox!) – but I now have 5 times the raw storage and more flexibility to store whatever I want with the peace of mind that it’s not being stored on someone else’s server.

It’s the homelab bug

But my adventure did not stop with buying a Synology. As soon as I got my files switched over to the Synology, I decided I wanted to use the built-in Docker features to manage Minecraft, Valheim, and Jellyfin servers – but I soon found that running all 3 of these at once wasn’t a challenge that the 920+ was particularly equipped for…

So I ended up building another system to serve as my “compute” and relegated the Synology to be just storage (I’m hoping to have another post on that “other” system later).

And now that I had these two servers, I needed to buy a fancy network switch to hook them together and then buy a fancy network router to hook to the switch and then fancy APs to enable my wifi…

And even now, I have the aching feeling that running services (eg Photos) from the Synology is not a great idea – instead, I should move the Photos app to the compute layer. And once I do that, I’ll probably need to get a few more servers to ensure high availability for the apps I run.

Oh, what have I done?

The elephant in the room: backups

One of the major selling points of getting a Synology is eliminating the costly monthly bills that you have to pay to cloud providers. The elephant in the room with my new “homelab” is offsite backups, which I currently use Backblaze B2 to achieve. Backblaze B2 requires a monthly subscription, however, so I’m back to where I started: paying a monthly subscription for cloud services.

That said, the monthly subscription is cheaper and I only use it for automated backups (haven’t had to recover yet!).

So what’s the takeaway?

The pessimistic takeaway is that I’ve spent a lot of time and money building Cloud services myself. Besides security, I don’t think there are any actual tangible benefits to doing this – and I wouldn’t recommend it to folks who are looking for storage of a few terabytes.

I’d only recommend this if you have huge amounts of data or if you’re doing it for fun. I guess in my case, I’m having “fun”.

Migrating my VPS to Linode and Sentora

Background

If you’ve been following my recent ventures at all, you know that not too long ago I decided to move my virtual private server to DigitalOcean. I also decided to run CentOS 7 and use CentOS Web Panel (CWP) to give anyone using my VPS a nice graphical interface to manage their services. I have not taken time to formally review either of these services, but I did experience some issues with CentOS Web Panel. Some of the key issues I had with the product are:

  • Obfuscated Code. I know that some people like to obfuscate their code in order to make the program uneditable by other parties; however, if I am unable to look at the code of my web panel to see exactly what it’s doing, I worry that the web panel is doing something it doesn’t want me to see. On top of this, obfuscated code makes it very, very difficult to quickly edit files if I want to make changes.
  • Support is not good. The CWP forums, which should be a central source of information for everyone who chooses to adopt CWP on their own servers, are riddled with posts describing common issues. Almost all of the threads contain an administrator response prompting the original poster to file a ticket. This creates a forum that is essentially useless, since no actual problem solving takes place there. On top of this, when an administrator does take the time to answer a question, the answer usually doesn’t fully address it. To be fair, support doesn’t need to be offered for a free product; however, a community-driven forum is important, and CWP is not running theirs properly.
  • Things break. More often than not, users will make changes that unintentionally break things, and these breakages are reported as “random occurrences” by the very users who caused them. Maybe I am just a really poor user; however, during my time with CWP, things would definitely seem to randomly break. People on the forums seemed to agree, too, as all issues that I had were also reported by other users. For instance, after a CWP update, all hosts could no longer navigate to their associated “/roundcube” domains to access webmail. The problem was reported on the forums; however, it was left unanswered.
  • Untemplatable. I don’t only use my VPS to run my web services and applications; I also allow other people to host their websites, services, and applications on my VPS. When they log in to their web panel, however, I want them to be reminded that they are using my VPS. Thus, branding the web panel is essential. CWP offered no friendly way to do this. Most web panels have a template feature of sorts, where one is able to define their own template for the web panel to use. CWP has no such thing, and even when looking for source images to replace and source files to edit, nothing can be easily found. If something is found, it is obfuscated. Most people would consider this to be a minor issue; however, the default CWP template is slow and unresponsive. The unresponsiveness was probably the straw that broke the camel’s back, so to speak.

With all of these small issues combined, and with a growing favoritism toward Linode’s VPS services, I decided that it was, once again, time to make the transition to a new stack of services. This time, I decided to use Linode’s VPS plan, run CentOS 7 with the LAMP stack, and finish it off with the free and open source Sentora web panel.

The Process

Preparing the VPS

Setting up the actual VPS with Linode is quite simple. I went with the Linode 4096 plan (which is the same price as a DigitalOcean plan with fewer features) and deployed a CentOS 7 image onto it. From there, after the machine is booted, the “Remote Access” tab provides all the necessary information to SSH into the machine and install the necessary software.

Installing Sentora

The Sentora installation process is fairly straightforward and fairly well documented. Starting from a fresh Linode VPS, all of the requirements for a minimally installed machine were already met. However, installing Sentora requires a valid subdomain pointing at the server machine. This subdomain will later be used to access the Sentora web panel.

At the time of installing Sentora, I did not have an extra domain lying around to point to the new machine, and, knowing very little about DNS and DNS records, I eventually discovered that what I needed to do was create a new “A” record that pointed a subdomain of an already existing domain to the new machine. I edited my DNS zone file to contain a record that looked like this:

www.panel.    1800    IN    A    123.456.789.246

This mapped the “panel” subdomain to the IP address of the new machine (123.456.789.246). After waiting about 15 minutes for the new DNS information to propagate, I was able to ping panel.domain.com and receive responses from the new machine’s IP address.
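If you want to confirm propagation from the command line, something like the following works (domain.com stands in for the real domain, just as above):

dig panel.domain.com A +short    # should print the new machine's IP once the record has propagated
ping -c 3 panel.domain.com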

Note that at the time, this process was two-fold: since I was transferring from DigitalOcean, I changed the DNS zone file on my DigitalOcean host to contain an NS entry that told the DigitalOcean host to “look for the server that is serving the panel subdomain here”. That entry looked like this:

panel    1800    IN    NS    123.456.789.246

Then, now that the DigitalOcean host knew where to look, the “A” record was added to the Linode host, which basically said “The IP Address for the panel subdomain is this”.

And with that, the pre-installation steps for Sentora were complete and the real installation could begin. As per the documentation, this involves downloading and running a Sentora installation script. The script is extremely straightforward and asks only for the timezone, the hostname (the subdomain that was just declared), and the IP address.

Configuring Sentora

I decided that the best user-architecture for my server would be one that treats every domain as an individual user, even the domains that belonged to myself. Although I could have chosen to attach my domains to the admin account, I wanted to create a separation layer that would allow each website to have its own configuration. On top of this, configuring my websites as “clients” of my website hosting would allow me to better support those who are actually clients of my hosting.

Thus, I added two new reseller packages that would take care of this. The BasicUser package was limited to 10 domains, subdomains, MySQL databases, email accounts, etc, but was given unlimited storage and bandwidth. I figured that 10 was more than enough for the clients of my web hosting. I also created the ElevatedUser package, which had unlimited everything. This, of course, would be for my own domains.

Before moving on to actually creating the users, however, I wanted to customize the Sentora panel so that my brand would be reflected across the panel for my clients. I am a fan of the base Sentora theme, so I decided to basically use that theme, but replace all Sentora images with my own logos. This was done in the following steps:

  1. Copy the “Sentora_Default” folder (The base template) to another folder (Your new template) in /etc/sentora/panel/etc/styles
  2. Replace all images in the img/ folder with your logo
  3. Replace information in /etc/sentora/panel/etc/static/errorpages with your information
  4. Continue searching for things to replace in /etc/sentora/panel/etc

Then, once everything is replaced, the new theme is visible in the Sentora theme manager. Select it and voila!
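Concretely, the first couple of steps boil down to something like this (the theme folder name and logo path are just placeholders):

cd /etc/sentora/panel/etc/styles
cp -r Sentora_Default MyTheme                  # step 1: clone the base template
cp ~/branding/logo.png MyTheme/img/            # step 2: drop your own images over the Sentora ones
grep -rli sentora /etc/sentora/panel/etc/static/errorpages   # steps 3-4: find remaining branding to replace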

User Creation and Migration

I created each user’s account using the Sentora admin panel. I would set up a generic username and password for them to use, and before alerting them of their new account information, I would do the following:

  1. Login to the account and create a new domain, which was their domain.
  2. Edit the DNS Records for that domain (The default DNS Records are perfect)
  3. Add a new FTP account (with a generic username and password), set to the root directory for their account.

These three things would allow the user’s information to be transferred from the old server to the new server relatively easily. Transferring users is also a relatively easy process. First, I logged in as the root SFTP account on my old DigitalOcean/CWP configuration and downloaded everyone’s files onto my local machine. This, of course, took some time and allowed me to discover that some of my users were hosting a lot more than I previously thought they were (one user was hosting a 10GB MongoDB instance). I was also able to go through and delete some of the files that I didn’t think would be used anymore, such as log files, backups from my previous server migration (from HostGator to DigitalOcean), and old files that were no longer in use.

Once all of the files were downloaded to my local machine, I went ahead and uploaded all of them using the newly created user FTP accounts, being sure to put all of the public_html files into the new public_html directory.
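If you would rather not click through an FTP client for every account, a tool like lftp can mirror a whole directory in one shot; a rough sketch, where the hosts, usernames, and paths are placeholders:

# pull everything down from the old host
lftp -u olduser,oldpass sftp://old.example.com -e "mirror /home/olduser ./olduser-backup; quit"
# push a user's site into their new public_html via their new FTP account
lftp -u newuser,newpass ftp://new.example.com -e "mirror -R ./olduser-backup/public_html public_html; quit"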

I also logged into phpMyAdmin using the root login credentials. This allowed me to find all of the MySQL databases and accounts attached to each user. For each user, I logged into the Sentora web panel and created MySQL databases with names matching the old database names. I could have accomplished the data transfer by simply exporting the tables, but then the databases wouldn’t appear in each user’s Sentora panel; creating them through Sentora makes sure users can see their MySQL databases in the panel. I also created MySQL users for each existing user on the old MySQL servers. Unfortunately, Sentora doesn’t allow users to be named in the same format as my old setup, so I had to change the naming format a bit.

Now that all the databases were created, I could go into the old phpMyAdmin and export all tables using phpMyAdmin’s EXPORT functionality, taking care to make sure that the “CREATE TABLE” statements were included. I saved the .sql files, and then used phpMyAdmin on the new Linode host to import these tables.
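The same export/import can also be done from the shell if you prefer that over phpMyAdmin; a sketch, with a placeholder database name:

mysqldump -u root -p someuser_db > someuser_db.sql   # dump from the old host (includes CREATE TABLE statements)
# copy the .sql file over, then on the new Linode host:
mysql -u root -p someuser_db < someuser_db.sql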

With all database information transferred successfully, the final change I made was establishing users’ subdomains. I was able to look through the DNS zone file on the old DigitalOcean/CWP host and find the subdomains for each user. I then logged into each user’s Sentora panel and created the subdomain using Sentora’s Subdomain functionality. Unfortunately, Sentora’s subdomain functionality only creates a directory for that subdomain, so I also had to edit the DNS information, adding a CNAME entry for each subdomain that was to be added. The CNAME entry looked like:

subdomain     IN    CNAME    123.456.789.246


More Database Migration

The most glaring roadblock with transferring MySQL databases is that there is no way to actually view the password of a user. Although this is a great security feature, it causes problems when migrating servers because one is unable to view the passwords of old users in order to set them on the new users.

The solution I chose utilizes mysqldump. When run on the old VPS host, this tool dumps the internal “mysql” database, which contains the users, their password hashes, and their permissions. The full command looked like this:

mysqldump -u root -p mysql > mysql.sql

After typing in the MySQL root user’s password, this creates a mysql.sql file that can be downloaded and then uploaded to the new host, where it can be restored using

mysql -u root -p mysql < mysql.sql      # import the dump into the "mysql" system database
mysql -u root -p mysql                  # then open a client session on that same database and run:
UPDATE user SET password=PASSWORD('<rootPassword>') WHERE user='root';
FLUSH PRIVILEGES;

These commands restore all of the users, their passwords, and their permissions to the new server. The only problem is that, for me at least, the import also changed the credentials of important users such as the MySQL root user, the Postfix user, the ProFTPd user, and the Roundcube user. This was because those database users happened to have the same names as they did on my previous server, so they were simply overwritten. That is why the block above also includes a line to put the root MySQL user’s password back in order.

Luckily, Sentora stores a text file that keeps all of the default MySQL user passwords in /root/passwords.txt. Using this file as a reference, I went ahead and updated all of the users’ passwords accordingly. For example:

mysql -u root -p mysql
UPDATE user SET password=PASSWORD('<MySQL Postfix Password>') WHERE user='postfix';
UPDATE user SET password=PASSWORD('<MySQL ProFTPd Password>') WHERE user='proftpd';
UPDATE user SET password=PASSWORD('<MySQL Roundcube Password>') WHERE user='roundcube';
FLUSH PRIVILEGES;

After these updates, everything on the system was working properly. However, I will admit that it took me a long time to figure out that importing my mysqldump file actually changed these passwords.

And with that, all MySQL user accounts were restored and working!

Small Adjustments

Now that all the major elements were transferred from the old server to the new server, I sat back and casually navigated to as many parts of the websites as I could. Almost everything worked. I also followed certain steps to transfer MyBB instances. Those steps can be found here.

Another issue I was having involved a PHP script I had written to get the name of the song that I was currently listening to (via Last.FM) for my blog. For whatever reason, the script was not executing and was simply being rendered as plain text. After several hours of troubleshooting, I realized that the default Sentora installation disallows PHP’s short tags (so <?php is required instead of <?). This issue was fixed by editing the php.ini file in /etc/php.ini and setting the short_open_tag field to “On”.
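For reference, the change amounts to something like this (assuming the stock /etc/php.ini and Apache with mod_php on CentOS 7; adjust if your setup differs):

sed -i 's/^short_open_tag = Off/short_open_tag = On/' /etc/php.ini   # enable short tags
systemctl restart httpd                                              # restart Apache so PHP picks up the change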

Finally, since I thought that some of my users would dislike navigating to panel.mydomain.com when they wanted to edit information about theirdomain.com, I created a simple HTML file with a Javascript redirect, called it index.html, and put it in the public_html/panel directory for each of the primary domains being hosted on my server. In this way, theirdomain.com/panel would take them to the right place and not force them to type my domain into the web browser.
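The redirect file itself is tiny; a sketch of creating it from the shell (panel.mydomain.com is the placeholder panel address used above):

cat > public_html/panel/index.html <<'EOF'
<!-- send theirdomain.com/panel to the Sentora panel -->
<script>window.location.href = "http://panel.mydomain.com";</script>
EOF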

The final step for users’ websites was pointing their domains at the new host. I directed users to use their domain registrar to register nameservers for their domains (ns1.domain.com and ns2.domain.com) pointing at the new host’s IP address, and then to point their domains at those newly registered nameservers. With this, the domains pointed to the new host, and any DNS configuration changes made on the new host would be picked up and reflected for the domain.

The Final Moment

With that, the migration was complete. In order to ensure that everything worked properly, I shut off my DigitalOcean host and went to bed. Thankfully, when I woke up in the morning, all websites still worked and pinging them returned a fancy IPv6 IP Address.

I ended up encountering some issues with SSL, Antivirus, and Malware later on, but I will cover those in a later post.

Reverse DNS (rDNS) and DigitalOcean

As you may know from my previous post, I recently made the switch to DigitalOcean for my VPS needs. So far, everything has been great. However, I encountered a problem the other day regarding Reverse DNS (rDNS) for my VPS. The error occurred when I attempted to send an email to someone with a @cox.net email address. I got a reply containing a failure message with details about an invalid rDNS name.

After searching the DigitalOcean forums, I learned that, by default, DigitalOcean configures rDNS for all hosting accounts. I couldn’t, for the life of me, figure out why mine was not working.

Then I read a small post that stated something along the lines of

In order for rDNS to be configured properly, the name of your droplet must be the same name as the URL that is being used to point to the droplet’s IP.

After changing the name of the droplet to reflect the primary domain name, everything worked out and emails started sending.
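An easy way to check that rDNS is set up as expected (substitute your droplet’s actual IP):

dig -x <your droplet IP> +short    # should print the droplet/domain name you expect mail servers to see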

So now you know: Name your droplets with their corresponding URLs.