Targeting Server Core for Monitoring with PowerShell - SentryOne Team Blog

At SentryOne, we're always focused on performance, and on helping you get the maximum performance out of your server systems. One feature of Windows Server that I think provides both performance and security benefits is the Server Core installation option, first introduced in Windows Server 2008. Microsoft began supporting SQL Server on Windows Server Core with the SQL Server 2012 release.

There are a number of reasons you should be considering Server Core for your operating systems, but one is the fact that it helps break what I consider bad habits on the part of administrators. I've found that it's too easy for admins to log into a server running SQL Server, for example, and use an Explorer window on the server to copy or move large files, such as backup files, from one server to another. The problem with this is that Windows allocates memory to buffer the file during the move, and doesn't free it back up for SQL Server to use until the server is rebooted. That memory is effectively unavailable for the buffer cache, which would otherwise help make query processing far more efficient.

Windows PowerShell has been around for over ten years now, and provides a straightforward way to manage all aspects of Windows Server, especially Server Core. One of its features, available since version 2, is PowerShell Remoting. Remoting allows you to run PowerShell commands (known as cmdlets) on one or more remote servers at once. You can either use Enter-PSSession to connect to a remote machine and do your work there, or use the Invoke-Command cmdlet to send commands and scripts to one or more servers, where they'll be executed.
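As a quick illustration of the two styles (server names here are placeholders, and remoting must already be enabled on the targets):

```powershell
# Interactive: open a session on one server, do your work there, then exit
Enter-PSSession -ComputerName SERVER1
Exit-PSSession

# One-to-many: run the same script block on several servers in parallel
Invoke-Command -ComputerName SERVER1, SERVER2 -ScriptBlock {
    Get-Service -Name MSSQLSERVER | Select-Object Name, Status
}
```

The one-to-many form is what makes Remoting so useful for Server Core: the same script block runs on every server listed, and the results come back to your workstation.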

I've built a demo environment for testing and demonstrating various features of our SQL Sentry product. That environment consists of five VMs: a domain controller (WS16DC), a monitoring service server (WS16MS0), two SQL Server servers (WS16SQL1 and WS16SQL2), and a client machine (W10CLT). The client machine is running Windows 10 Enterprise, and the servers are running Windows Server 2016. The two SQL Server VMs are running on Windows Server Core.

To set up the environment, I added two domain accounts: sqlexec, which is the service account for SQL Server, and s1svc, which is the service account used by the SentryOne Monitoring Service. I added the s1svc account to both WS16SQL1 and WS16SQL2 as a local administrator, to allow it to collect Windows performance data, and as part of the sysadmin role in SQL Server, to allow it to collect SQL Server performance data.
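Those grants can themselves be scripted. This is a sketch, not the exact commands I ran; the "DEMO" domain name is a placeholder, and the SQL Server step assumes the SqlServer PowerShell module is available:

```powershell
# Add s1svc to the local Administrators group on both SQL Server hosts
Invoke-Command -ComputerName WS16SQL1, WS16SQL2 -ScriptBlock {
    Add-LocalGroupMember -Group 'Administrators' -Member 'DEMO\s1svc'
}

# Create the login and add it to the sysadmin role on each instance
Invoke-Command -ComputerName WS16SQL1, WS16SQL2 -ScriptBlock {
    Invoke-Sqlcmd -ServerInstance localhost -Query @"
IF NOT EXISTS (SELECT 1 FROM sys.server_principals WHERE name = N'DEMO\s1svc')
    CREATE LOGIN [DEMO\s1svc] FROM WINDOWS;
ALTER SERVER ROLE sysadmin ADD MEMBER [DEMO\s1svc];
"@
}
```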

Based on the online documentation on port availability, I needed to open up the ports for the proper collection of data by the monitoring service. I used PowerShell Remoting to do this, using the Invoke-Command cmdlet, which runs the same command in parallel across the listed servers (by default, up to 32 at a time; the -ThrottleLimit parameter can raise that):

Invoke-Command -ComputerName WS16SQL1,WS16SQL2 -ScriptBlock {New-NetFirewallRule -DisplayName "Windows Performance Counter port 445" -Direction Inbound -Protocol TCP -LocalPort 445 -Action Allow}

Invoke-Command -ComputerName WS16SQL1,WS16SQL2 -ScriptBlock {New-NetFirewallRule -DisplayName "WMI port 135" -Direction Inbound -Protocol TCP -LocalPort 135 -Action Allow}

Invoke-Command -ComputerName WS16SQL1,WS16SQL2 -ScriptBlock {netsh int ipv4 set dynamicport tcp start=50000 num=255}

The third command used netsh to set the dynamic port range, because the Set-NetTCPSetting cmdlet still has limitations that keep it from being fully equivalent to netsh. Hopefully this will be corrected soon.
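If you want to confirm the new range took effect, netsh can read it back over the same Remoting channel (a quick sketch):

```powershell
Invoke-Command -ComputerName WS16SQL1,WS16SQL2 -ScriptBlock {
    # Show the configured dynamic port range for TCP over IPv4
    netsh int ipv4 show dynamicport tcp
}
```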

After setting the ports, I copied the setup file to the monitoring service server, WS16MS0. This server has the full graphical installation of Windows Server 2016, because our setup program requires it. I ran the setup program, disabling the Client and Documentation options, because I'll run the SentryOne client from W10CLT. During the installation, I also specified WS16SQL2 as the host for the SentryOne repository database, and the s1svc account as the service account for the monitoring service.

Once that was complete, I copied the setup program to the W10CLT machine for the client installation. That ran smoothly, and I was able to add my license key, and the client then asked me for the initial target server. I entered WS16SQL1, and after a pause, was presented with an error message.

I clicked OK, and the Add Target dialog then finished testing the connections from the monitoring service, all of which passed. I clicked Next, and got a second error message.

After acknowledging that, the process completed, and all the monitoring I expected was working normally. But I was puzzled about why those messages came up. Working with our QA and Dev teams, we turned on verbose logging, and Chris Briggs pointed me to this error in the log:

The RPC server is unavailable.

Research on this error message turned up similar issues people have had when using the Computer Management console to connect to servers running Core. The discussions pointed me to two firewall rule "DisplayGroup" values that need to be enabled in the Windows Firewall on the target servers. Those groups are:

  • Windows Management Instrumentation (WMI)
  • Remote Event Log Management

The latter group is required so that the MMC snap-in for Windows Management can connect to remote servers. These rules have to be enabled in the firewall on the target machine, which brings us back to PowerShell Remoting. I used the following commands to enable them.

Invoke-Command -ComputerName WS16SQL1,WS16SQL2 -ScriptBlock {Set-NetFirewallRule -DisplayGroup "Windows Management Instrumentation (WMI)" -Enabled True}

Invoke-Command -ComputerName WS16SQL1,WS16SQL2 -ScriptBlock {Set-NetFirewallRule -DisplayGroup "Remote Event Log Management" -Enabled True}
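To double-check that the rules are now enabled on both servers, something like this works (a sketch; the output is a simple table of rule names and states):

```powershell
Invoke-Command -ComputerName WS16SQL1,WS16SQL2 -ScriptBlock {
    Get-NetFirewallRule -DisplayGroup 'Windows Management Instrumentation (WMI)',
                        'Remote Event Log Management' |
        Select-Object DisplayName, Enabled
}
```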

Once these rules were applied to both servers, I was able to add both servers as targets without error.
