Mar 14 2020
 

Want to see Duo Two-Factor Authentication in action? I’ve created a number of demo videos showing Duo 2FA being used on different platforms. You can see how Duo works with each of these platforms, and the experience a user can expect when using two-factor authentication from Duo Security.

Duo 2FA is a great way to secure your environment whether it’s servers, workstations, VDI, firewalls, or even WordPress!

And remember, I sell Duo licensing and provide consulting services to help set it up!

The demos are collected in the video playlist below:

[Embedded video playlist: Duo Security Two-Factor Authentication Product Demo]

Feel free to reach out to me if you need a quote, want to buy, or need help implementing Duo Security Two-Factor Authentication!

Don’t forget to check out my corporate blog post, “Secure your business and enterprise IT systems with Multi Factor Authentication (MFA)”!

For more content on my blog on Duo Security, visit: https://www.stephenwagner.com/category/duo-mfa/

For more content on my corporate blog on Duo Security, visit: https://www.digitallyaccurate.com/blog/category/vendors/duo-security/

To learn more about Duo Security, visit: https://duo.com/

May 06 2018
 

I’m a big fan of MFA, specifically Duo Security’s product (I did a corporate blog post here). I’ve been using this product for some time and use it for an extra level of protection on my workstations, servers, and customer sites. I liked it so much that my company (Digitally Accurate Inc.) became a partner and now resells the services.

Here’s a demo of DUO MFA being used with CentOS Linux:

Today I want to write about a couple of issues I had when deploying the pam_duo module on CentOS Linux 7. The original Duo guide can be found at https://duo.com/docs/duounix. While it worked for the most part, I noticed some issues with the PAM configuration files, especially if you want to use Duo MFA with usernames and passwords rather than keys for authentication.

A symptom of the issue: when following the deployment instructions on the website, after entering the username, the login would skip the password prompt and go right to Duo authentication, bypassing the password altogether. I’m assuming this is because the guide was written for key authentication, but I figured I’d do a quick crash-course post on the topic and create a simple guide. I also noticed that sometimes, even if an incorrect password was typed in, authentication would be allowed as long as Duo reported success.

Ultimately I decided to learn about PAM, understand what it was doing, and finally configure it properly. Using the guide below, I can confirm that password and MFA authentication both operate correctly.

To configure Duo MFA on CentOS 7 for use with usernames and passwords

First and foremost, you must log in to your Duo account, go to Applications, click “Protect an Application”, and select “Unix Application”. Configure the application and record your integration key (ikey), secret key, and API hostname.

Now we want to create a yum repo so we can install the pam_duo module and keep it up to date. Create the file /etc/yum.repos.d/duosec.repo and populate it with the following:

[duosecurity]
name=Duo Security Repository
baseurl=http://pkg.duosecurity.com/CentOS/$releasever/$basearch
enabled=1
gpgcheck=1

We’ll need to install the signing key that the repo uses, and then install the duo_unix package. By using yum, we’ll be able to keep this package up to date whenever we update the server. Run the following commands:

rpm --import https://duo.com/RPM-GPG-KEY-DUO
yum install duo_unix
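
If you’d like to confirm the package installed correctly before moving on, a quick check is below (the module path assumes a 64-bit CentOS 7 install; adjust if yours differs):

rpm -q duo_unix
ls -l /lib64/security/pam_duo.so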

Configure the pam_duo module by editing the /etc/duo/pam_duo.conf file. You’ll need to populate the lines with the ikey, secret key, and API hostname that you documented above. We use “failmode = safe” so that in the event of an internet outage, we can still log in to the server without Duo. This is a reasonable trade-off here, since the purpose is to protect against logins coming in from the internet (use “failmode = secure” if you’d rather have logins denied whenever Duo can’t be reached). Please see below:

[duo]
; Duo integration key
ikey = XXXXXXXXXXXXXXXXXXXX
; Duo secret key
skey = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
; Duo API host
host = XXXXXXXXX.duosecurity.com
; Send command for Duo Push authentication
pushinfo = yes
; failmode = safe allows login if Duo is unreachable (secure denies login instead)
failmode = safe
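
Since pam_duo.conf contains your secret key, it’s also a good idea to make sure it’s readable only by root. For SSH and console logins the module is loaded by root-owned processes, so locking the file down shouldn’t break anything:

chown root:root /etc/duo/pam_duo.conf
chmod 600 /etc/duo/pam_duo.conf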

Configure sshd to allow challenge-response authentication by editing /etc/ssh/sshd_config: locate “ChallengeResponseAuthentication” and change it to yes. The directive should already be present, so you should simply have to comment out the old line and uncomment (or add) the line shown below:

ChallengeResponseAuthentication yes
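
You can sanity-check the change now, and have sshd validate its configuration before we restart it later (sshd -t prints nothing if the config is clean):

grep -i challengeresponseauthentication /etc/ssh/sshd_config
/usr/sbin/sshd -t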

And now it gets tricky… We need to edit the PAM authentication files to incorporate the Duo MFA service into their authentication process. I highly recommend that throughout this, you open (and leave open) an additional SSH session, so that if you make a change in error and lock yourself out, you can use the extra session to revert the changes and re-allow access. It’s always best to make a backup copy of these files so you can easily revert if needed.

DISCLAIMER: I am not responsible if you lock yourself out of your system. Please make sure that you have an extra SSH session open so that you can revert changes. It is assumed you are aware of the seriousness of the changes you are making and that you are taking all precautions (including a backup) to protect yourself from any errors.
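
For example, a quick way to keep restorable copies outside of /etc/pam.d (so PAM never reads the backups themselves) is:

cp -a /etc/pam.d/password-auth /root/password-auth.bak
cp -a /etc/pam.d/system-auth /root/system-auth.bak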

Essentially, two files are used for authentication that we need to modify. One file is for SSH logins, and the other is for console logins. In my case, I wanted to protect both methods; you can do either, or both. If you are doing both, it may be a good idea to test with SSH before modifying the console login, to make sure your settings are correct. Please see below for the modifications to enable pam_duo:

/etc/pam.d/password-auth (this file is used for SSH authentication)

#%PAM-1.0
# This file is auto-generated.
# User changes will be destroyed the next time authconfig is run.
auth        required      pam_env.so
auth        required      pam_faildelay.so delay=2000000
#auth        sufficient    pam_unix.so nullok try_first_pass
auth        requisite     pam_unix.so nullok try_first_pass
auth        sufficient    pam_duo.so
auth        requisite     pam_succeed_if.so uid >= 1000 quiet_success
auth        required      pam_deny.so

account     required      pam_unix.so
account     sufficient    pam_localuser.so
account     sufficient    pam_succeed_if.so uid < 1000 quiet
account     required      pam_permit.so

password    requisite     pam_pwquality.so try_first_pass local_users_only retry=3 authtok_type=
password    sufficient    pam_unix.so sha512 shadow nullok try_first_pass use_authtok
password    required      pam_deny.so

session     optional      pam_keyinit.so revoke
session     required      pam_limits.so
-session     optional      pam_systemd.so
session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session     required      pam_unix.so

/etc/pam.d/system-auth (this file is used for console authentication)

auth        required      pam_env.so
auth        sufficient    pam_fprintd.so
#auth        sufficient    pam_unix.so nullok try_first_pass
# Next two lines are for DUO mod
auth        requisite     pam_unix.so nullok try_first_pass
auth        sufficient    pam_duo.so
auth        requisite     pam_succeed_if.so uid >= 1000 quiet_success
auth        required      pam_deny.so

account     required      pam_unix.so
account     sufficient    pam_localuser.so
account     sufficient    pam_succeed_if.so uid < 1000 quiet
account     required      pam_permit.so

password    requisite     pam_pwquality.so try_first_pass local_users_only retry=3 authtok_type= ucredit=-1 lcredit=-1 dcredit=-1 ocredit=-1
password    sufficient    pam_unix.so sha512 shadow nullok try_first_pass use_authtok remember=5
password    required      pam_deny.so

session     optional      pam_keyinit.so revoke
session     required      pam_limits.so
-session     optional      pam_systemd.so
session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session     required      pam_unix.so

Now, we must restart sshd for the changes to take effect. Please make sure you have your extra SSH session open in the event you need to roll back your /etc/pam.d/ files. Restart the sshd service using the following command:

service sshd restart
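
Since CentOS 7 uses systemd, the “service” command above is simply redirected; the native equivalent is:

systemctl restart sshd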

Attempt to open a new SSH session to your server. It should now ask for a username, password, and then prompt for Duo authentication. And you’re done!
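
If you normally log in with SSH keys, you can also force the password path on a test connection to confirm the new username/password/Duo flow behaves as expected (swap in your own user and hostname):

ssh -o PubkeyAuthentication=no youruser@yourserver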

More information on Duo Multi Factor Authentication (MFA) can be found here.

May 31 2013
 

Back in February, I was approached by a company that had multiple offices. They wanted my company to come in and implement a system that would allow them to share information and files, communicate, use their line-of-business applications, and be easy to manage.

Just an FYI, I provide Microsoft Small Business Server consulting services, including migrations! For more information, please visit https://www.stephenwagner.com/2020/02/28/microsoft-small-business-server-migration-upgrade/.

The Solution – Microsoft Small Business Server 2011

The first thing that always comes to mind is Microsoft Small Business Server 2011. However, what made this environment interesting is that they had two branch offices in addition to their headquarters, all in different cities. One of the branch offices had 8+ users working out of it, the other had only a couple, and the main headquarters had 5+ users.

Usually when administrators think of SBS, they think of a single-server (two servers with the premium add-on) solution that provides a small business of up to 75 users with a stable, feature-packed, enterprise-grade IT infrastructure.

SBS 2011 Includes:

  • Windows Server 2008 R2 Standard
  • Exchange Server 2010
  • Microsoft SharePoint Foundation 2010
  • Microsoft SQL Server 2008 R2 Express
  • Windows Server Update Services
  • (And an additional Server 2008 R2 license with Microsoft SQL Server 2008 R2 Standard if the premium add-on is purchased)

Essentially this is all a small business typically needs, even if they have powerful line of business applications.

Additional Domain Controller on SBS

One misconception about Windows Small Business Server is that you are limited to a single domain controller. IT professionals often think that you cannot have any more domain controllers in an SBS environment. This actually isn’t true. SBS does allow multiple domain controllers, as long as there is a single forest and not multiple domains. You can have a backup domain controller, and you can have multiple RODCs (Read Only Domain Controllers), as long as the primary Active Directory roles stay with the SBS primary domain controller. You can have as many global catalogs as you’d like, as long as you pay for the proper licenses for all the additional servers 🙂

This is where this came in handy. While I’ve known about this for some time, this was the first time I was putting something like this into production.

The Plan

The plan was to set up SBS 2011 Premium at the HQ, along with a second server at the HQ hosting their SQL, line-of-business applications, and Remote Desktop Services (formerly Terminal Services) applications. The HQ would sit behind an Astaro Security Gateway 220 (Sophos UTM).

The SBS 2011 Premium (2 Servers) setup at the HQ office will provide:

  • Active Directory services
  • DHCP and DNS Services
  • Printing and file services (to the HQ and all branch offices)
  • Microsoft Exchange
  • “My Documents” and “Desktop” redirection for client computers/users
  • SQL DB services for LoB applications
  • Remote Desktop Services (Terminal Services) to push applications out into the field

The first branch office will have a Windows Server 2008 R2 server, promoted to a Read Only Domain Controller (RODC), sitting behind an Astaro Security Gateway 110. The Astaro Security Gateways would establish a site-to-site VPN between the two offices and route the appropriate subnets. The first branch office has connectivity issues (they’re in the middle of nowhere), so they will have two internet connections with two separate ISPs (one line-of-sight long-range wireless backhaul, and one simple ADSL connection), which the ASG 110 will use for load balancing and fault tolerance.

The RODC at the first branch office will provide:

  • Active Directory services for (cached) user logon and authentication
  • Printing and file services (for both HQ and branch offices)
  • DHCP and DNS services
  • “My Documents” and “Desktop” redirection for client computers/users.
  • WSUS replica server (replicates approvals and updates from WSUS on the SBS server at the main office).
  • Exchange access (via the VPN connection)

Users at the first branch office will be accessing file shares located both on their local RODC and on the HQ server in Calgary. The main wireless backhaul has more than enough bandwidth to support SMB (Windows file sharing) traffic over the VPN connection. After testing, it turns out the backup ADSL connection also handles this fairly well for the types of files they will be accessing.

The second branch office will have an Astaro RED (Remote Ethernet Device). The Astaro/Sophos RED devices act as a remote ethernet port for your Astaro Security Gateway. Once configured, it’s as if the ASG at the HQ has an ethernet cable running directly to the branch office. It’s similar to a VPN, however (I could be wrong) I believe it uses EoIP (Ethernet over IP). The second branch doesn’t require a domain controller due to the small number of users, and since there’s no special configuration required for these guys, this is the last we’ll talk about that office.

The second branch office will have the following services:

  • DHCP (via the ASG 220 in Calgary)
  • DNS (via the main HQ SBS server)
  • File and print services (via the HQ SBS server and other branch server)
  • “My Documents” and “Desktop” redirection (over the WAN via the HQ SBS server)
  • Exchange access (via the Astaro RED device)

Hardware

For all the servers, we chose HP hardware as always! The main SBS server and the RODC were brand new HP ProLiant ML350p Gen8s. The second server at the HQ (running the premium add-on) is a re-purposed HP ML110 G7. I always configure iLO on all servers (especially remote servers) so I can troubleshoot issues in an emergency if the OS is down.

Implementation

I’ll explain how this was all implemented.

  1. Configure and set up a typical SBS 2011 environment. I’m going to assume you already know how to do this: install the OS, run through the SBS configuration wizards, enable all the proper firewall rules, configure users, install applicable server applications, etc…
  2. Configure the premium add-on. Install the Remote Desktop Services role (please note that you’ll need to purchase RDS CALs, as they aren’t included with SBS). You can skip this step if you don’t plan on using RDS or the premium server at the main site.
  3. Configure all the Astaro devices. Configure a router-to-router VPN connection and create the applicable firewall rules to allow traffic. You probably know this already, but make sure each network has its own subnet and that the separate subnets are routed properly.
  4. Install Windows Server 2008 R2 on the target RODC box. (Please note, in my case I had to purchase an additional Server 2008 license, since I was already using the premium add-on at the HQ site. If you purchase the premium add-on but aren’t using it at your main office, you can use that license at the remote site.)
  5. Make sure the VPN is working and the servers can communicate with each other.
  6. Promote the target server to a Read Only Domain Controller. You can launch the famous dcpromo; just make sure you check the “Read Only domain controller” option when you promote the server.
  7. You now have a working environment.
  8. Join computers using the SBS connect wizard. (DO NOT LOG ON AS THE REMOTE USERS UNTIL YOU READ THIS ENTIRE DOCUMENT)

I did all the above steps at my office and configured the servers before deploying them at the client site.

You now essentially have a basic working network. Now for the tricky stuff! The tricky stuff is enabling folder redirection at the branch site to its own server (instead of the SBS server), and getting the branch its own WSUS replica server.

Now to the fancy stuff!

1. Installing WSUS on the RODC using the Add Role feature in Windows Server: you have to remember that RODCs are exactly what they say: READ ONLY (as far as Active Directory goes)! Installing WSUS on an RODC will fail right off the bat; it will report that access is denied when trying to create certain security groups. You’ll have to manually create these two groups in Active Directory on your primary SBS server to get it to work:

  • SQLServer2005MSFTEUser$RODCSERVERNAME$Microsoft##SSEE
  • SQLServer2005MSSQLUser$RODCSERVERNAME$Microsoft##SSEE

Replace RODCSERVERNAME with the computer name of your RODC server. You’ll notice that two similar groups already exist (with a different server name) for the existing Windows SBS WSUS install; those existing groups are for the main WSUS server. Creating these new groups will allow the install to proceed. After the install is complete, follow through the WSUS configuration wizard to configure it as a replica of your primary SBS WSUS server.
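
If you’d rather create the groups from the command line on the SBS server than click through Active Directory Users and Computers, something like the sketch below should work (RODC01 and the domain DN are placeholders; match the group scope and container to the two existing SQLServer2005 groups already on your server):

REM Create the two groups the WSUS/Windows Internal Database installer fails to create on a read-only DC
dsadd group "CN=SQLServer2005MSFTEUser$RODC01$Microsoft##SSEE,CN=Users,DC=yourdomain,DC=local" -secgrp yes -scope l
dsadd group "CN=SQLServer2005MSSQLUser$RODC01$Microsoft##SSEE,CN=Users,DC=yourdomain,DC=local" -secgrp yes -scope l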

2. One BIG thing to keep in mind is that with RODCs, you need to configure which accounts (both user and computer) are allowed to be “cached”. Cached credentials allow the RODC to authenticate computers and users in the event the primary domain controller is down. If you do not configure this and the internet goes down, or the primary domain controller isn’t available, no one will be able to log in to their computers or access network resources at the branch site. When you promoted the server to an RODC, two groups were created in Active Directory: “Allowed RODC Password Replication Group” and “Denied RODC Password Replication Group”. You can’t just add users to these groups; you also need to add the computers they use, since computers have their own “computer account” in Active Directory.

To handle this, create two security groups in their respective existing OUs: one group for users of the branch office, and one for computers of the branch office. Make sure to add the applicable users and computers as members of these security groups. Now go to the “Allowed RODC Password Replication Group” and add those two new security groups as members. This will allow the remote users and remote computers to authenticate using cached credentials. PLEASE NOTE: DO NOT CACHE YOUR ADMINISTRATIVE ACCOUNT! Instead, create a separate administrative account for that remote office and cache that. See the example commands below.
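
For reference, a command-line sketch of the above (the group names, OU paths, and domain DN are examples based on the default SBS OU layout; adjust them to match your own environment):

REM Create the two branch security groups (one for users, one for computers)
dsadd group "CN=Branch01 Allowed Users,OU=Security Groups,OU=MyBusiness,DC=yourdomain,DC=local" -secgrp yes -scope g
dsadd group "CN=Branch01 Allowed Computers,OU=Security Groups,OU=MyBusiness,DC=yourdomain,DC=local" -secgrp yes -scope g
REM Nest both groups into the built-in allowed list so their members can be cached on the RODC
dsmod group "CN=Allowed RODC Password Replication Group,CN=Users,DC=yourdomain,DC=local" -addmbr "CN=Branch01 Allowed Users,OU=Security Groups,OU=MyBusiness,DC=yourdomain,DC=local" "CN=Branch01 Allowed Computers,OU=Security Groups,OU=MyBusiness,DC=yourdomain,DC=local"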

3. One of the sweet things about SBS is all the pre-configured Group Policy Objects that enable automatic configuration of the WSUS server, folder redirection, and a bunch of other great stuff. Keep in mind that with the configuration above, if left alone at this point, the computers in the branch office will use the folder redirection and WSUS settings from the main office. The remote users’ folder redirection locations (whatever you have selected; in my case, My Documents and Desktop redirection) will be stored on the main HQ server. If you’re alright with this and not concerned about the size of the user folders, you can leave it. What I needed to do (for simple disaster recovery purposes) was have the branch office users’ folder redirection stored on their own local branch server. We also need the branch computers to connect to the local branch WSUS server (we don’t want each computer pulling updates over the VPN connection, as this would use up tons of bandwidth). What’s really neat is that when users open applications via RemoteApp (over RDS) and export files to their desktop inside of RemoteApp, the files are immediately available on their own computer’s desktop, since the RDS server uses these same GPOs.

To do this, we’ll need to duplicate and modify a couple of the default GPOs, and also create some OU (Organizational Unit) containers inside of Active Directory so we can apply the new GPOs to them.

First, under “SBSComputers” create an OU called “Branch01Comps” (or call it whatever you want). Then under “SBSUsers” create an OU called “Branch01Users”. Now keep in mind you want to have this fully configured before any users log on for the first time. All of this configuration should be done AFTER the computer is joined (using the SBS connect) to the domain and AFTER the users are configured, but BEFORE the user logs in for the first time. Move the branch office computer accounts to the new Branch office computers OU, and move the Branch office user accounts to the Branch office users OU.
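
If you prefer the command line, the OUs can also be created with dsadd (the paths assume the default SBS “MyBusiness” OU structure; adjust the names and domain DN to suit):

dsadd ou "OU=Branch01Comps,OU=SBSComputers,OU=Computers,OU=MyBusiness,DC=yourdomain,DC=local"
dsadd ou "OU=Branch01Users,OU=SBSUsers,OU=Users,OU=MyBusiness,DC=yourdomain,DC=local"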

Now open up the Group Policy Management Console. You want to duplicate two GPOs: “Update Services Common Settings Policy” (rename the duplicate to something like “Branch Update Services Common Settings Policy”) and “Small Business Server Folder Redirection Policy” (rename the duplicate to something like “Branch Folder Redirection”).

Link the new duplicated Update Services policy to the branch computers OU we just created, and link the new duplicated folder redirection policy to the branch users OU we just created.

Modify the duplicated server update policy to reflect the address of the new branch WSUS replica server. Computers at the branch office will now pull updates from that server.

As for folder redirection, it’s a bit tricky. You’ll need to create a share (with full share access for all users), and then set special file permissions on the folder you shared (info available at http://technet.microsoft.com/en-us/library/cc736916%28v=ws.10%29.aspx). On top of that, you’ll need a way to actually create the individual user folders under that share. I did this by going into Active Directory, opening each remote user, and setting the profile folder path on the user object to the file share. When I hit Apply, this created a folder named after the user, with the applicable permissions, under that share; after this was done, I would undo the profile setting and the directory created would stay. Repeat this for each remote user at that specific branch office. You’ll also need to do this each time they bring on new staff, and remember to add all new computers and users to the appropriate OUs and security groups we’ve created above.

FINALLY, you can now go into the GPO you duplicated for branch folder redirection. Modify the GPO to reflect the new storage path for the redirection objects you want (it’s just a matter of changing the server name).

4. Configure Active Directory Sites and Services. You’ll need to go into Active Directory Sites and Services, configure sites for each subnet you have (your main HQ subnet, branch 1 subnet, and branch 2 subnet), and assign the applicable domain controller to each site. In my case, I created 3 sites, configured the HQ subnet and second branch to authenticate against the main SBS PDC, and configured the first branch (with its own RODC) to authenticate against its own RODC. Essentially, this tells the computers which domain controller they should be authenticating against.

And you’re done!

A few things to remember: whenever adding new users and/or computers to the branch, ALWAYS join using the SBS wizard, add the computer to the branch computers OU, add the user to the branch users OU, create the user’s master redirection folder using the profile field on the AD user object, and separately add both the user and computer accounts as members of the security groups we created for credential caching.

And remember: always, always, always test your configuration before throwing it into production. In my case, I got it running on the first try without any problems, but I let it run as a test environment for over a month before deploying it to production!

We’ve had this environment running for months now and it’s working great. What’s even cooler is how well the Astaro Security Gateway (Sophos UTM) handles the multiple WAN connections during failures; it’s super slick!

Nov 28 2011
 

Just thought I’d do up a quick little post about an issue I’d been having for some time and have just fixed.

I’ve been running Astaro Security Gateway inside of a VMware environment for a few years. When version 8.x came out, I went ahead and simply attached the ISO to the VM and re-installed over the old v7 and restored the config. This worked great, and for the longest time I had no real issues.

I noticed from time to time in packet sniffs that there were quite a few retransmissions and lost TCP segments. This didn’t really cause any problems, however it was odd.

Recently, I had to configure a site-to-site IPsec VPN between my office and one of my employees to provide Exchange, VoIP, etc… With Astaro this is fairly easy: a few clicks and it should just work. However, I started noticing huge issues with file transfers, whether transferred over SMB (Windows file sharing) or SCP/SSH. Transfers would either completely halt when started, transfer a few hundred kilobytes, or transfer half of the file and then simply halt and become unresponsive.

After 3-4 days of troubleshooting, I went ahead and did a packet sniff, and noticed there were numerous lost TCP segments, fragmentation, etc… Initially I believed that the MTU configuration might have had something to do with it, however TCP/IP and the Astaro device should have taken care of setting the MTU on the IPsec tunnel automatically.

After trying fresh installs of ASG, etc… with no change in behaviour, I decided to take a few days away and give it another shot later. I had troubleshot this from every avenue and for some reason the issue still existed. I finally realized that the only thing I hadn’t checked was my VMware vSphere environment. I checked the settings and all was good, however I did notice that the NICs for the ASG VM (which were created by the v7 appliance) were set to “Flexible”, and inside of the VM were detected as some type of AMD network adapter. I found this odd.
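
If you want to confirm what the guest actually sees, lspci from inside the VM shows the emulated hardware; the “Flexible” adapter typically appears as an AMD PCnet device, while the E1000 appears as an Intel 8254x card:

lspci | grep -i ethernet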

After shutting down the ASG VM, removing the NICs, and configuring new ones using the E1000 adapter type, all of a sudden the issue was fixed: the IPsec site-to-site VPN functioned properly, and all the network issues seen in packet captures were resolved.

I hope this helps some other people who may be frustrated dealing with the same issue.

Apr 22 2010
 

Recently, with the new Java vulnerabilities, I needed to push the latest Java update remotely to all of my clients currently using my company’s “Managed Services”.

The upgrade was scheduled for certain dates per location; however, as of Tuesday morning I noticed that some computers were being hit with some of the newly discovered vulnerabilities.

This suddenly changed the priority from “high priority” to “emergency”. I needed a quick and efficient means of pushing this update to computers at client sites.

Active Directory allows system administrators to push software installations to computers, or make them available to users to install. This is all controlled through Active Directory Group Policy Management.

To push the latest Java update to all computers on a network, I had to perform the steps below:

1. Download the “Offline Installation” of Java from the Java website. Open the file, but do not proceed with the installation (you will simply hit Cancel after it extracts the MSI and the other files needed).

2. Open Explorer and browse to C:\Users\%USERNAME%\AppData\LocalLow\Sun\Java\jre1.6.0_20. After navigating to this location, copy “Data1.cab”, “jre1.6.0_20.msi”, and “sp1033.MST” to a new folder (I chose a folder on my desktop).

3. Log into the remote server, create a file share (for example, NetInstall), and give users read-only access.

4. Copy the folder you created on your desktop to the new file share on the server. Remember to use a naming scheme for the applications you wish to push so that they all make sense and can be organized.

5. On the server, go to Start -> Administrative Tools -> Group Policy Management

6. Either create a new GPO, or use an existing one that you have configured. If you are unfamiliar with this, it may be worthwhile doing some online research on GPOs. In my case, I right-clicked and chose Edit on the “Windows SBS Client Policy” GPO on SBS 2008.

7. Expand Computer Configuration -> Policies -> Software Settings -> Software installation. Right-click on “Software installation”, select New Package, and follow the instructions.

8. When choosing the location of the .msi file, PLEASE make sure that you browse to it using the UNC network path. This location has to be somewhere all of the computers can access (i.e. don’t use C:\Folder\file.msi; use \\servername\sharename\programname\file.msi instead).

At this point you have now configured the server to force-install Java on all the computers that the GPO applies to. This is perfect for making sure all your clients are running the latest versions of the free software available. It will also help with managing vulnerabilities in aging software, etc…

Please note: if this doesn’t work right away, it is because the client workstations need to refresh their GPO. After the GPO is refreshed on the client workstation, the system should install the package on the next reboot.
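
If you don’t want to wait for the background refresh interval, you can force the refresh from a command prompt on the client; the assigned package will then install on the following reboot:

gpupdate /force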

There are some other neat things you can do with GPOs and pushing applications on your network; however, I’m not covering them in this document. For example, instead of using “Computer Configuration” you could use “User Configuration”, and instead of forcing applications you could make them available through “Add/Remove Programs” for users to install themselves.

Please always make sure that any applications you use are properly paid for and/or licensed.