Recent Discussions
Accessing the LogicMonitor REST API with Postman and LMv1 API Token Authentication
Introduction

Postman is widely used for interacting with various REST APIs such as LogicMonitor's. However, there is no out-of-the-box support for the LMv1 authentication method, which we recommend as a best practice. This document describes how to configure Postman to use LMv1 authentication when interacting with our REST API.

Overview

Postman's pre-request script functionality provides the ability to generate the necessary Authorization header for LMv1 authentication. As the name suggests, the pre-request script runs immediately before the request is made to the API endpoint. We set the pre-request script at the collection level in Postman so that it runs automatically for every request that is part of the collection.

The script requires three input parameters: a LogicMonitor API token ID, its associated key, and the full request URL. These parameters are made available to the script by creating a Postman environment and setting the values as environment variables. If you need to access multiple LogicMonitor accounts (portals), create a separate environment for each to store the applicable API and URL information. Since all API requests to a given account use the same base URL (https://<account>.logicmonitor.com/santaba/rest), it is convenient to store this as an environment variable.

The output of the script is the value of the Authorization header. The script writes the header value to an environment variable, which is then inserted as the Authorization header value in the request.

Instructions

1. Download and install Postman.
2. Launch Postman and create a new collection that will be used for all LogicMonitor API requests.
3. In the create collection dialog, select the "Pre-request Scripts" section and paste in the following code:

// Get API credentials from environment variables
var api_id = pm.environment.get('api_id');
var api_key = pm.environment.get('api_key');

// Get the HTTP method from the request
var http_verb = request.method;

// Extract the resource path from the request URL
var resource_path = request.url.replace(/(^{{url}})([^\?]+)(\?.*)?/, '$2');

// Get the current time in epoch format
var epoch = (new Date()).getTime();

// If the request includes a payload, include it in the request variables
var request_vars = (http_verb == 'GET' || http_verb == 'DELETE') ?
    http_verb + epoch + resource_path : http_verb + epoch + request.data + resource_path;

// Generate the signature and build the Auth header
var signature = btoa(CryptoJS.HmacSHA256(request_vars, api_key).toString());
var auth = "LMv1 " + api_id + ":" + signature + ":" + epoch;

// Write the Auth header to the environment variable
pm.environment.set('auth', auth);

4. Create a new environment and add the environment variables shown below. You do not need to provide a value for the "auth" variable since this will be set by the pre-request script. Be sure to use the api_id, api_key, and url values appropriate for your LogicMonitor account.
5. Create a request and add it to the collection with the pre-request script. A sample request is shown below with the necessary parameters configured.
   1. Set the environment for the request.
   2. Set the HTTP method for the request.
   3. Use {{url}} to pull the base URL from the environment variable. Add the resource path and any request parameters your API request may require.
   4. Add the Authorization header and set its value to {{auth}} to pull the value from the environment variable.
   5. POST, PUT, and PATCH requests only: if your request includes JSON data, be sure to select the Body tab and add it.
6. Press Send to send the request. The response will appear below the request in Postman.

Troubleshooting

You receive the response "HTTP Status 401 - Unauthorized"

Confirm the following:
• The proper environment has been specified for the request.
• The necessary environment variables have been set and their values are correct. Note that the script relies on the specific variable names used in this document: "api_id", "api_key", "url", and "auth".
• The request is a member of the collection configured with the pre-request script.

Postman reports "Could not get any response" or "There was an error in evaluating the Pre-request Script: TypeError: Cannot read property 'sigBytes' of undefined"

Make sure you have set the proper environment for the request and that all necessary environment variables and values are present. A quick way to sanity-check the API token and key outside of Postman is shown below.

Kurt_Huffman · 7 years ago · Former Employee
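If a 401 persists, it can help to confirm the token ID and key work outside of Postman entirely. As a minimal sketch (assuming only Python with the requests package; the portal name, token ID, key, and resource path below are placeholders), the same LMv1 Authorization header can be built like this:

# Minimal LMv1 sanity check outside of Postman (portal name, ID, and key are placeholders).
import base64
import hashlib
import hmac
import time
import requests

AccessId = 'YOUR_API_TOKEN_ID'
AccessKey = 'YOUR_API_TOKEN_KEY'
Company = 'yourportal'                      # https://yourportal.logicmonitor.com

httpVerb = 'GET'
resourcePath = '/device/devices'
queryParams = '?size=5'
data = ''

url = 'https://' + Company + '.logicmonitor.com/santaba/rest' + resourcePath + queryParams
epoch = str(int(time.time() * 1000))

# LMv1 signature = base64( hex( HMAC-SHA256( verb + epoch + data + resourcePath ) ) )
requestVars = httpVerb + epoch + data + resourcePath
digest = hmac.new(AccessKey.encode(), msg=requestVars.encode(), digestmod=hashlib.sha256).hexdigest()
signature = base64.b64encode(digest.encode()).decode()

headers = {'Content-Type': 'application/json',
           'Authorization': 'LMv1 ' + AccessId + ':' + signature + ':' + epoch}
print(requests.get(url, data=data, headers=headers).status_code)   # 200 means the credentials work

A 200 response means the token works and any remaining problem is in the Postman configuration; a 401 here points back at the token ID, key, or portal name.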
How WMI, DCOM, RPC and UAC Affect Access to Remote Windows Systems for Monitoring

...or how to get LogicMonitor past WMI/DCOM "Access Denied" messages.

I've been dealing with monitoring some systems that are either not joined to the domain or where the LM service account is not allowed to be a domain admin. There are guides from LM and more general online posts for various workarounds, but they don't always describe what exactly these changes are doing and which parts are really needed for your situation. So I put together this post from my notes. Please review the comments on this post for any corrections or additions. Note that I don't work for LogicMonitor, so this is based on my own understanding, research, and testing. Please let me know if you spot any mistakes or misunderstandings. -Mike

How LogicMonitor monitors basic Windows systems

LogicMonitor uses WMI and WinRM ("Get-CIM*") requests to monitor Windows servers, with WMI being the most used. WMI requests over the network use DCOM, which communicates over RPC. To make WMI queries over the network, you need permissions within both DCOM/RPC and WMI. Access to DCOM is controlled by dcomcnfg.exe, and WMI access is controlled by Computer Management > Services and Applications > WMI Control. WinRM access is simply controlled by membership in the Remote Management Users group.

User Account Control (UAC)

To complicate things further, UAC (User Account Control) is applied to both local and remote network access. UAC is designed to de-elevate any administrator user so they get normal user permissions. You can effectively think of it as removing your user from the Administrators group on the fly for that request. When working locally with UAC on, you can regain full administrator privileges by using "Run as Administrator" or by disabling UAC notifications in Control Panel, which makes the shell auto-elevate for you. Setting the UAC slider in Control Panel to "never notify" does not disable UAC; it just removes the notification and auto-elevates locally. The slider does NOT affect remote UAC use, which still applies in full force.

The effect of UAC on remote connections also depends on whether the remote server is joined to a domain. When a server is joined to a domain, any local administrator (which includes domain admins) making remote calls runs with full administrator privileges; basically, UAC does not affect remote network access on domains. When a remote server is not joined to a domain (just a workgroup), UAC takes full effect on remote access for all local administrators except the built-in "Administrator" account. So if you use a dedicated monitoring account (as suggested) that is a member of the local Administrators group, UAC will effectively strip Administrators-group rights from that account's remote access. There isn't a way to "Run as Administrator" with remote WMI requests.

UAC's Effect on DCOM and WMI

By default, recent Windows versions provide the following permissions (simplified for our purposes):

* Remote DCOM: Administrators, Performance Log Users, Distributed COM Users
* Remote WMI: Administrators

So by default Windows only allows those with administrator privileges to access WMI remotely. If you are a normal user you will be blocked by DCOM before you get to WMI, but if you are part of the Performance Log Users or Distributed COM Users group, you will get past DCOM. When remote UAC is in effect, UAC removes your Administrators permissions as mentioned above. This causes you to be blocked by DCOM anyway unless you are also a member of one of those two groups.
It doesn't matter that you are part of the local Administrators group. There are two ways you can allow access:

1) Fully disable UAC by setting the EnableLUA registry value to 0 and rebooting. This fully disables UAC in Windows, which stops UAC from removing administrator privileges and allows access to anything that administrators can access. If you want full administrator access for monitoring, I would suggest this option.

2) First modify WMI Control to allow your actual account to remotely query WMI. Then either add the account to the Performance Log Users or Distributed COM Users group, or modify the DCOM permissions directly to allow your actual account remote access. Just assume the account is not in the local Administrators group in this case. You will have more limited monitoring access, including the LM "Files Services" DataSource and Windows Services monitoring not working (without the workaround discussed later).

DCOM/RPC Ports

Standard remote WMI queries use RPC to connect, and RPC uses a mess of ports. First, the Collector connects to the remote system over TCP 135. The remote system then picks a high port and asks the Collector to use this new high port for future communications. The high port range depends on the OS, but current Windows uses ports 49152 through 65535. If there is a firewall/router between the Collector and the remote system and it's not RPC/WMI-aware (being stateful is not enough), you need to open all of those ports between the two. There is a way to modify Windows to limit that port range, but the change would be global on that server. (A quick port-reachability check is included at the end of this post.)

WinRM/PSRemote Access

A few LogicMonitor DataSources use WinRM (PSRemote) instead of WMI, like the DHCP Server DataSources. This uses the WS-Man (Web Services for Management) protocol on TCP 5985 and 5986 instead of RPC. WinRM has its own set of required permissions. So, to include WinRM in our previous simplified default permissions list:

* Remote WinRM/PSRemote: Administrators, Remote Management Users
* Remote DCOM: Administrators, Performance Log Users, Distributed COM Users
* Remote WMI: Administrators

By default Windows only allows those with administrator privileges to access WinRM remotely. If you are a normal user (or UAC makes you into a regular user) you will be blocked by WinRM first, before you get to DCOM or WMI. From what I can tell, using WinRM still requires access to WMI and DCOM; I have not experimented with this much. To allow access to WinRM you would add the user to the "Remote Management Users" group. As far as I know, there isn't a management console to control WinRM permissions, and the user group is the official method to provide access without Administrators membership.

Windows Services Access

Monitoring Windows Services as non-admin (or with UAC removing admin) is especially tricky.
By default, recent Windows versions provide the following permissions on the Windows Service Controller (simplified for our purposes):

* Authenticated Users: Query Service Config
* Interactive: Service Config + Service Status + Start Services + Read SACL
* Service: Service Config + Service Status + Start Services + Read SACL
* System: Service Config + Service Status + Start Services + Stop Services + Read SACL
* Administrators: All Access
* All Application Packages: Query Service Config

You can add the following ACE to the existing SDDL string to give a LogicMonitor service account read access to most services:

(A;;CCLCRPRC;;;SID_HERE)

A = Access Allowed
CC = Query Service Config
LC = Query Service Status
RP = Start Services
RC = Read security ACLs
SID_HERE = Replace with the SID of the LogicMonitor service account

I found that the RP and RC permissions are required for the WMI request to work. Each service can also have its own overriding ACL, so providing access to the Service Controller might not be enough. I avoid this workaround if I can; I consider it a limitation of non-admin access and I'm personally hesitant about playing with per-service ACLs.

Possible Fixes or Workaround Steps

How to fully disable UAC:
1. Run the following single command in PowerShell (run as administrator), then reboot the server:
New-ItemProperty -Path HKLM:Software\Microsoft\Windows\CurrentVersion\policies\system -Name EnableLUA -PropertyType DWord -Value 0 -Force

How to modify WMI:
1. Open Computer Management > Services and Applications > WMI Control.
2. Right-click and choose Properties > Security tab.
3. Choose Root, then the Security button.
4. Add the local LogicMonitor service account and check the boxes for Allow Execute Methods and Remote Enable.
5. Click the Advanced button > choose the service account > Edit.
6. Change "Applies to" to "This namespace and subnamespaces".
7. Click OK on all the windows.
8. Restart the Windows Management Instrumentation service (and its dependencies).

How to gain DCOM access (suggested method):
1. Add the local LogicMonitor service account to the "Performance Log Users" group. Note that adding the user to "Performance Monitor Users" will not provide DCOM access by default.

How to gain WinRM access:
1. Add the local LogicMonitor service account to the "Remote Management Users" group.

Mike_Moniz · 4 years ago · Professor
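Before reworking DCOM or WMI permissions, it's worth confirming the Collector can even reach the ports involved. A small sketch (assuming Python on the Collector host; the host name and port list are placeholders) that checks TCP reachability to the RPC endpoint mapper, the WinRM ports, and a couple of sample RPC high ports:

# Quick TCP reachability check from the Collector host (host and ports are examples).
import socket

host = 'remote-server.example.com'
ports = [135, 5985, 5986, 49152, 49153]   # RPC endpoint mapper, WinRM HTTP/HTTPS, sample RPC high ports

for port in ports:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(3)
    try:
        sock.connect((host, port))
        print('%s:%d reachable' % (host, port))
    except OSError as err:
        print('%s:%d blocked or closed (%s)' % (host, port, err))
    finally:
        sock.close()

Keep in mind a closed high port only proves nothing is listening there at that moment; the check is mainly useful for spotting a firewall in the path.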
Example script for automated alert actions via External Alerting

Below is a PowerShell script that's a handy starting point if you want to trigger actions based on specific alert types. In a nutshell, it takes a number of parameters from each alert and has a section of if/else statements where you can specify what to do based on the alert. It leverages LogicMonitor's External Alerting feature, so the script runs local to whatever Collector(s) you configure it on. I included a couple of example actions for pinging a device and for restarting a service. It also includes some handy (optional) functions for logging as well as attaching a note to the alert in LogicMonitor.

NOTE: this script is provided as-is and you will need to customize it to suit your needs. Automated actions must be approached with careful planning and caution! LogicMonitor cannot be responsible for inadvertent consequences of using this script.

If you want to try it out, here's how to get started:

1. Update the variables in the appropriate section near the top of the script with optional API credentials and/or log settings. Also change any of the if/elseif statements (starting around line #95) to suit your needs.
2. Save the script onto your Collector server. I named the file "alert_central.ps1" but feel free to call it something else. Make note of its full path (ex: "C:\scripts\alert_central.ps1"). NOTE: it's not recommended to place it under the Collector's agent/lib directory (typically "C:\Program Files (x86)\LogicMonitor\Agent\lib") since that location can be overwritten by collector upgrades.
3. In your LogicMonitor portal go to Settings, then External Alerting.
4. Click the Add button.
5. Set the 'Groups' field as needed to limit the actions to alerts from any appropriate group of resources. (Be sure the group's devices would be reachable from the Collector running the script.)
6. Choose the appropriate Collector in the Collector field.
7. Set Delivery Mechanism to "Script".
8. Enter the name you saved the script as (in step #2) in the Script field (ex. "alert_central.ps1").
9. Paste the following into the Script Command Line field (NOTE: if you add other parameters here then be sure to also add them to the 'Param' line at the top of the script):
"##ALERTID##" "##ALERTSTATUS##" "##LEVEL##" "##HOSTNAME##" "##SYSTEM.SYSNAME##" "##DSNAME##" "##INSTANCE##" "##DATAPOINT##" "##VALUE##" "##ALERTDETAILURL##" "##DPDESCRIPTION##"
(Example of the completed Add External Alerting dialog)
10. Click Save.

This uses LogicMonitor's External Alerting feature, so there are some things to be aware of:

- Since the script is called for every alert, the section of if/then statements at the bottom of the script is important for filtering which specific alerts you want to act on.
- The Collector(s) oversee the running of the script, so be conscious of any additional overhead the script actions may cause.
- It could take up to 60 seconds for the script to trigger from the time the alert comes in.
- This example is a PowerShell script, so it's best suited for Windows-based collectors, but it could certainly be re-written as a shell script for Linux-based collectors; a rough Python skeleton is included at the end of this post.

Here's a screenshot of a cleared alert where the script auto-restarted a Windows service and attached a note based on its actions. (Example note the script added to the alert reflecting the automated action that was taken.)

Below is the PowerShell script:

# ----
# This PowerShell script can be used as a starting template for enabling
# automated remediation for alerts coming from LogicMonitor.
# In LogicMonitor, you can use the External Alerting feature to pass all alerts
# (or alerts for a specific group of resources) to this script.
# ----
# To use this script:
# 1. Update the variables in the appropriate section below with optional API and log settings.
# 2. Save this script onto your Collector server and make note of its full path
#    (avoid the Collector's agent/lib directory, which can be overwritten by upgrades).
# 3. In your LogicMonitor portal go to Settings, then click External Alerting.
# 4. Click the Add button.
# 5. Set the 'Groups' field as needed to limit the actions to a specific group of resources.
# 6. Choose the appropriate Collector in the 'Collector' field.
# 7. Set 'Delivery Mechanism' to "Script"
# 8. Enter "alert_central.ps1" in the 'Script' field.
# 9. Paste the following into the 'Script Command Line' field:
#    "##ALERTID##" "##ALERTSTATUS##" "##LEVEL##" "##HOSTNAME##" "##SYSTEM.SYSNAME##" "##DSNAME##" "##INSTANCE##" "##DATAPOINT##" "##VALUE##" "##ALERTDETAILURL##" "##DPDESCRIPTION##"
# 10. Click Save.

# The following line captures alert information passed from LogicMonitor (defined in step #9 above)...
Param ($alertID = "", $alertStatus = "", $severity = "", $hostName = "", $sysName = "", $dsName = "", $instance = "", $datapoint = "", $metricValue = "", $alertURL = "", $dpDescription = "")

###--- SET THE FOLLOWING VARIABLES AS APPROPRIATE ---###

# OPTIONAL: LogicMonitor API info for updating alert notes (the API user will need "Acknowledge" permissions)...
$accessId = ''
$accessKey = ''
$company = ''

# OPTIONAL: Set a filename in the following variable if you want specific alerts logged. (example: "C:\lm_alert_central.log")...
$logFile = ''

# OPTIONAL: Destination for syslog alerts...
$syslogServer = ''

###############################################################
## HELPER FUNCTIONS (you likely won't need to change these) ##

# Function for logging the alert to a local text file if one was specified in the $logFile variable above...
Function LogWrite ($logstring = "")
{
    if ($logFile -ne "") {
        $tmpDate = Get-Date -Format "dddd MM/dd/yyyy HH:mm:ss"
        # Using a mutex to handle file locking if multiple instances of this script trigger at once...
        $LogMutex = New-Object System.Threading.Mutex($false, "LogMutex")
        $LogMutex.WaitOne() | out-null
        "$tmpDate, $logstring" | out-file -FilePath $logFile -Append
        $LogMutex.ReleaseMutex() | out-null
    }
}

# Function for attaching a note to the alert...
function AddNoteToAlert ($alertID = "", $note = "")
{
    # Only execute this if the appropriate API information has been set above...
    if ($accessId -ne '' -and $accessKey -ne '' -and $company -ne '') {
        # Encode the note...
        $encodedNote = $note | ConvertTo-Json
        # API and URL request details...
        $httpVerb = 'POST'
        $resourcePath = '/alert/alerts/' + $alertID + '/note'
        $url = 'https://' + $company + '.logicmonitor.com/santaba/rest' + $resourcePath
        $data = '{"ackComment":' + $encodedNote + '}'
        # Get current time in milliseconds...
        $epoch = [Math]::Round((New-TimeSpan -start (Get-Date -Date "1/1/1970") -end (Get-Date).ToUniversalTime()).TotalMilliseconds)
        # Concatenate general request details...
        $requestVars_00 = $httpVerb + $epoch + $data + $resourcePath
        # Construct signature...
        $hmac = New-Object System.Security.Cryptography.HMACSHA256
        $hmac.Key = [Text.Encoding]::UTF8.GetBytes($accessKey)
        $signatureBytes = $hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes($requestVars_00))
        $signatureHex = [System.BitConverter]::ToString($signatureBytes) -replace '-'
        $signature = [System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($signatureHex.ToLower()))
        # Construct headers...
        $auth = 'LMv1 ' + $accessId + ':' + $signature + ':' + $epoch
        $headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
        $headers.Add("Authorization",$auth)
        $headers.Add("Content-Type",'application/json')
        # Make request to add note...
        $response = Invoke-RestMethod -Uri $url -Method $httpVerb -Body $data -Header $headers
        # Change the following if you want to capture API errors somewhere...
        # LogWrite "API call response: $response"
    }
}

function SendTo-SysLog ($IP = "", $Facility = "local7", $Severity = "notice", $Content = "Your payload...", $SourceHostname = $env:computername, $Tag = "LogicMonitor", $Port = 514)
{
    switch -regex ($Facility) {
        'kern'     {$Facility = 0 * 8 ; break }
        'user'     {$Facility = 1 * 8 ; break }
        'mail'     {$Facility = 2 * 8 ; break }
        'system'   {$Facility = 3 * 8 ; break }
        'auth'     {$Facility = 4 * 8 ; break }
        'syslog'   {$Facility = 5 * 8 ; break }
        'lpr'      {$Facility = 6 * 8 ; break }
        'news'     {$Facility = 7 * 8 ; break }
        'uucp'     {$Facility = 8 * 8 ; break }
        'cron'     {$Facility = 9 * 8 ; break }
        'authpriv' {$Facility = 10 * 8 ; break }
        'ftp'      {$Facility = 11 * 8 ; break }
        'ntp'      {$Facility = 12 * 8 ; break }
        'logaudit' {$Facility = 13 * 8 ; break }
        'logalert' {$Facility = 14 * 8 ; break }
        'clock'    {$Facility = 15 * 8 ; break }
        'local0'   {$Facility = 16 * 8 ; break }
        'local1'   {$Facility = 17 * 8 ; break }
        'local2'   {$Facility = 18 * 8 ; break }
        'local3'   {$Facility = 19 * 8 ; break }
        'local4'   {$Facility = 20 * 8 ; break }
        'local5'   {$Facility = 21 * 8 ; break }
        'local6'   {$Facility = 22 * 8 ; break }
        'local7'   {$Facility = 23 * 8 ; break }
        default    {$Facility = 23 * 8 }          #Default is local7
    }
    switch -regex ($Severity) {
        '^(ac|up)' {$Severity = 1 ; break }  # LogicMonitor "active", "ack" or "update"
        '^em'      {$Severity = 0 ; break }  #Emergency
        '^a'       {$Severity = 1 ; break }  #Alert
        '^c'       {$Severity = 2 ; break }  #Critical
        '^er'      {$Severity = 3 ; break }  #Error
        '^w'       {$Severity = 4 ; break }  #Warning
        '^n'       {$Severity = 5 ; break }  #Notice
        '^i'       {$Severity = 6 ; break }  #Informational
        '^d'       {$Severity = 7 ; break }  #Debug
        default    {$Severity = 5 }          #Default is Notice
    }
    $pri = "<" + ($Facility + $Severity) + ">"
    # Note that the timestamp is local time on the originating computer, not UTC.
    if ($(get-date).day -lt 10) { $timestamp = $(get-date).tostring("MMM d HH:mm:ss") } else { $timestamp = $(get-date).tostring("MMM dd HH:mm:ss") }
    # Hostname does not have to be in lowercase, and it shouldn't have spaces anyway, but lowercase is more traditional.
    # The name should be the simple hostname, not a fully-qualified domain name, but the script doesn't enforce this.
    $header = $timestamp + " " + $sourcehostname.tolower().replace(" ","").trim() + " "
    # Cannot have non-alphanumerics in the TAG field or have it be longer than 32 characters.
    if ($tag -match '[^a-z0-9]') { $tag = $tag -replace '[^a-z0-9]','' }   #Simply delete the non-alphanumerics
    if ($tag.length -gt 32) { $tag = $tag.substring(0,31) }                #and truncate at 32 characters.
    $msg = $pri + $header + $tag + ": " + $content
    # Convert message to array of ASCII bytes.
    $bytearray = $([System.Text.Encoding]::ASCII).getbytes($msg)
    # RFC3164 Section 4.1: "The total length of the packet MUST be 1024 bytes or less."
    # "Packet" is not "PRI + HEADER + MSG", and IP header = 20, UDP header = 8, hence:
    if ($bytearray.count -gt 996) { $bytearray = $bytearray[0..995] }
    # Send the message...
    $UdpClient = New-Object System.Net.Sockets.UdpClient
    $UdpClient.Connect($IP,$Port)
    $UdpClient.Send($ByteArray, $ByteArray.length) | out-null
}

# Empty placeholder for capturing any note we might want to attach back to the alert...
$alertNote = ""

# Placeholder for whether we want to capture an alert in our log. Set to true if you want to log everything.
$logThis = $false

###############################################################
## CUSTOMIZE THE FOLLOWING AS NEEDED TO HANDLE SPECIFIC ALERTS FROM LOGICMONITOR...

# Actions to take if the alert is new or re-opened (note: status will be "active" or "clear")...
if ($alertStatus -eq 'active') {

    # Perform actions based on the type of alert...

    # Ping alerts...
    if ($dsName -eq 'Ping' -and $datapoint -eq 'PingLossPercent') {
        # Insert action to take if a device becomes unpingable. In this example we'll do a verification ping & capture the output...
        $job = ping -n 4 $sysName
        # Restore line feeds to the output...
        $job = [string]::join("`n", $job)
        # Add ping results as a note on the alert...
        $alertNote = "Automation script output: $job"
        # Log the alert...
        $logThis = $true

    # Restart specific Windows services...
    } elseif ($dsName -eq 'WinService-' -and $datapoint -eq 'State') {
        # List of Windows Services to match against. Only if one of the following is alerting will we try to restart it...
        $serviceList = @("Print Spooler","Service 2")
        # Note: The PowerShell "-Contains" operator is exact in its matching. Replace it with "-Match" for a looser match.
        if ($serviceList -Contains $instance) {
            # Get an object reference to the Windows service...
            $tmpService = Get-Service -DisplayName "$instance" -ComputerName $sysName
            # Only trigger if the service is still stopped...
            if ($tmpService.Status -eq "Stopped") {
                # Start the service...
                $tmpService | Set-Service -Status Running
                # Capture the current state of the service as a note on the alert...
                $alertNote = "Attempted to auto-restart the service. Its new status is " + $tmpService.Status + "."
            }
            # Log the alert...
            $logThis = $true
        }

    # Actions to take if a website stops responding...
    } elseif ($dsName -eq 'HTTPS-' -and $datapoint -eq 'CantConnect') {
        # Insert action here to take if there's a website error...
        # Example of sending a syslog message to an external server...
        $syslogMessage = "AlertID:$alertID,Host:$sysName,AlertStatus:$alertStatus,LogicModule:$dsName,Instance:$instance,Datapoint:$datapoint,Value:$metricValue,AlertDescription:$dpDescription"
        SendTo-SysLog $syslogServer "" $severity $syslogMessage $hostName "" ""
        # Attach a note to the LogicMonitor alert...
        $alertNote = "Sent syslog message to " + $syslogServer
        # Log the alert...
        $logThis = $true
    }
}

###############################################################
## Final functions for backfilling notes and/or logging as needed
## (you likely won't need to change these)

# Section that updates the LogicMonitor alert if 'alertNote' is not empty...
if ($alertNote -ne "") {
    AddNoteToAlert $alertID $alertNote
}

if ($logThis) {
    # Log the alert (only triggers if a filename is given in the $logFile variable near the top of this script)...
    LogWrite "$alertID,$alertStatus,$severity,$hostName,$sysName,$dsName,$instance,$datapoint,$metricValue,$alertURL,$dpDescription"
}
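As mentioned above, the same idea could be re-written for Linux-based Collectors. A rough Python skeleton (the argument order mirrors the 'Script Command Line' tokens above; the branching is only an example to adapt to your own alerts):

#!/usr/bin/env python3
# Rough skeleton of the same approach for a Linux-based Collector (illustrative only).
# It expects the 11 arguments defined in the 'Script Command Line' tokens above.
import subprocess
import sys

(alert_id, alert_status, severity, host_name, sys_name, ds_name,
 instance, datapoint, metric_value, alert_url, dp_description) = sys.argv[1:12]

if alert_status == 'active':
    if ds_name == 'Ping' and datapoint == 'PingLossPercent':
        # Verification ping; the output could be attached to the alert as a note,
        # using the same /alert/alerts/{id}/note API call the PowerShell version makes.
        result = subprocess.run(['ping', '-c', '4', sys_name],
                                capture_output=True, text=True)
        print('Automation script output:\n' + result.stdout)
    elif ds_name == 'HTTPS-' and datapoint == 'CantConnect':
        # Insert website-recovery or notification logic here...
        pass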
Using LogicMonitor's REST API as a Power BI Source

Overview

LogicMonitor has a number of built-in report types that can be customized and sent out on a scheduled basis, including the powerful ability to turn any dashboard into a dynamic report. A common question for cloud-based services like LogicMonitor, however, is how to combine hosted data with information from other sources. An example may be a report that combines inventory data and monitoring metrics from LogicMonitor with incident data from systems such as ServiceNow.

With Microsoft Power BI's ability to easily parse and ingest JSON data directly from web services, it's possible to create reports directly from LogicMonitor's REST-based APIs without the need for intermediary automation or databases. Below are some basic steps to start pulling data directly from your LogicMonitor portal into a Power BI report. This isn't meant to be a comprehensive reference, though the concepts introduced here can be used for other report types generated directly from LogicMonitor data.

Prerequisites

- The Microsoft Power BI Desktop software and knowledge of its usage for building reports. If needed, the software can be downloaded from the following link: https://www.microsoft.com/en-us/download/details.aspx?id=58494
- A login for your LogicMonitor portal that has at least read-only permissions for the information to be included on the report.
- Basic familiarity with LogicMonitor's REST APIs. The full API reference can be found at: https://www.logicmonitor.com/support/rest-api-developers-guide/overview/using-logicmonitors-rest-api

Adding a LogicMonitor REST API as a Power BI Source

For this example we will use LogicMonitor's "Get Devices" API method to build a simple inventory report. Documentation for the "Get Devices" method and its options is available at: https://www.logicmonitor.com/support/rest-api-developers-guide/v1/devices/get-devices

1. Launch Microsoft Power BI Desktop.
2. Click the Get Data button, either on the intro dialog or on the toolbar ribbon.
3. On the Get Data dialog, search for the "Web" data type that's located under the "Other" section. Once "Web" is selected, click the Connect button.
4. Enter the URL of the REST method, including optional query parameters. For the example using the "Get Devices" method, the URL used was the following (replace "[portalname]" to match your own LogicMonitor portal's URL):
https://[portalname].logicmonitor.com/santaba/rest/device/devices?size=1000&fields=alertStatus,autoProperties,displayName,description,id,link,hostStatus,name,systemProperties,upTimeInSeconds
This example URL calls the "Get Devices" method (/device/devices) and passes optional parameters specifying that up to 1,000 records be returned and listing the properties/fields we want for each device. Please refer to the "Get Devices" method's documentation for more information about the available parameters and options. For instance, if your query has more than 1,000 results available (the maximum returned in a single REST call) then you may have to code a loop in Power BI to make multiple calls that paginate through the available results.
5. Power BI will then try to access that URL. After a moment it will ask how to authenticate with the REST service. For this example we'll use the Basic authentication method. Enter a valid LogicMonitor username and password that Power BI will use to access your portal's web services and click the Connect button. (NOTE: as mentioned in LogicMonitor's REST documentation, the option for "Basic" authentication may be removed at some point in the future. A quick way to verify the URL and credentials outside of Power BI is shown at the end of this post.)
6. Power BI will then authenticate with LogicMonitor's REST service. After a moment you'll see the initial results from your REST query. Click the "Record" link on the result's 'data' row.
7. Next, click the "List" link on the 'items' row to expand the list of records.
8. Click the To Table button.
9. Keep the default conversion options and click OK.
10. Click the small icon in the column header to expand the results.
11. Click OK on the column selection dialog.
12. Click the Close & Apply button to apply the changes from the query builder.

You've now added the REST method as a dynamic data source in Power BI. At this point you can design the report to suit your specific needs. If you want to browse and manipulate the data that was brought into the model, click the Data button (looks like a small grid) on the left-hand toolbar.

Kevin_Ford · 5 years ago · Employee
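As referenced in step 5, if Power BI reports an authentication or parsing error it can help to confirm the same URL and credentials work outside of Power BI first. A minimal sketch using Python's requests library with the same Basic authentication (the portal name and login are placeholders):

# Confirm the Get Devices URL and credentials work outside of Power BI (placeholders throughout).
import requests

url = ('https://yourportal.logicmonitor.com/santaba/rest/device/devices'
       '?size=1000&fields=alertStatus,displayName,description,id,link,hostStatus,name')
response = requests.get(url, auth=('your.username', 'your.password'))

print(response.status_code)                 # expect 200
print(response.json()['data']['total'])     # total number of devices matching the query

A 200 status with a sensible total means the portal, permissions, and URL are fine, and any remaining issue is on the Power BI side.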
SNMPv3 Password Character Set Restrictions? (Solved)

I'm working on adding a hundred or so long, strong SNMPv3 passwords to a class of device we're going to start monitoring. I can walk the SNMP tree locally and from a Linux neighbor, but not from LM; I'm getting a password error. I assume the issue is that the password is being encoded for storage/delivery. Has anyone else experienced this? If my assumption is correct, what is the restricted character set when pairing LM with Linux SNMP? LM ticket #424608 for internal reference.
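One way to narrow down whether the characters themselves are the problem is to test the exact same credentials from the Collector host with a short script. A rough sketch, assuming the pysnmp library is installed and the device uses SHA authentication with AES-128 privacy (adjust the protocols, user, passphrases, and address to match your configuration):

# Test SNMPv3 credentials from the Collector host (assumes pysnmp; SHA/AES-128 assumed, adjust as needed).
from pysnmp.hlapi import (SnmpEngine, UsmUserData, UdpTransportTarget, ContextData,
                          ObjectType, ObjectIdentity, getCmd,
                          usmHMACSHAAuthProtocol, usmAesCfb128Protocol)

errorIndication, errorStatus, errorIndex, varBinds = next(getCmd(
    SnmpEngine(),
    UsmUserData('snmpv3-user', 'LongAuthPassphraseHere', 'LongPrivPassphraseHere',
                authProtocol=usmHMACSHAAuthProtocol,
                privProtocol=usmAesCfb128Protocol),
    UdpTransportTarget(('10.0.0.10', 161)),
    ContextData(),
    ObjectType(ObjectIdentity('1.3.6.1.2.1.1.3.0'))))   # sysUpTime

if errorIndication:
    print(errorIndication)    # an authentication failure here points at the passphrase itself
else:
    for varBind in varBinds:
        print(varBind)

If this works from the Collector host but the same passphrase fails when entered as an LM property, the handling of the stored property value is the more likely culprit.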
No Data and Error not seen before

Hello, I have an issue where a VM with a collector installed is returning No Data for items such as System Uptime and CPU but is returning data for Memory and Processes. When I do a Poll Now on System Uptime it times out. If I do a Poll Now on CPU I get NaN and this message:

"java.io.IOException: winproxy return status=0x00000261 errmsg=Process the request timeout"

I tried to re-install the collector but am still getting the same error. I also restarted the LogicMonitor Agent and Watchdog services along with the WMI service. Any suggestions on how to troubleshoot and resolve? Any help is much appreciated! Thank you.

joshlowit1 · 4 years ago · Neophyte
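To help isolate whether WMI itself is hanging or the collector is at fault, the same kinds of WMI queries can be run directly on the affected VM. A rough sketch, assuming the third-party 'wmi' Python module (with pywin32) is installed on that host; the classes shown are just representative queries for uptime and CPU:

# Run representative WMI queries for uptime and CPU directly on the affected VM
# (assumes the third-party 'wmi' module / pywin32 are installed).
import wmi

conn = wmi.WMI()   # local machine; wmi.WMI(computer='host', user='...', password='...') would target a remote host
for os in conn.Win32_OperatingSystem():
    print('LastBootUpTime:', os.LastBootUpTime)
for cpu in conn.query('SELECT LoadPercentage FROM Win32_Processor'):
    print('LoadPercentage:', cpu.LoadPercentage)

If these queries also hang or time out, the WMI repository or performance providers on the VM are the likely problem rather than the collector itself.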
Datasource to monitor Windows Services/Processes automatically?

Hello, We recently cloned 2 LogicMonitor out-of-the-box datasources (names: WinService- & WinProcessStats-) in order to enable the 'Active Discovery' feature on them. We did this because we need to discover services/processes automatically, since we don't have an exact list of which services/processes we should monitor (due to the number of clients [100+] and the different services/solutions across them). After enabling this it works fine and does what we expect (discovers all the services/processes running in each box); we further added some filters in the active discovery for the services in order to exclude common 'noisy' services and grab only the ones set to start automatically with the system.

Our problem arrives when these 2 specific datasources start to impact collector performance (due to the huge number of WMI queries): CPU usage sits at almost 100% all the time, which further degrades collector performance and data collection (resulting in request timeouts and full WMI queues).

We also thought about creating 2 datasources (services/processes) for each client, with filters to grab the critical/wanted processes/services for the client in question, but that's a nightmare (especially when clients install applications without any notice and expect us to automatically pick them up and monitor them).

Example of 1 of our scenarios (1 of our clients):
- The collector is a Windows VM (VMware) with 8GB of RAM and 4 allocated virtual processors (the host processor is an Intel Xeon E5-2698 v3 @ 2.30GHz).
- Currently it monitors 78 Windows servers (not including the collector), and those 2 datasources are creating 12,700 instances (4,513 services | 8,187 processes) - examples below.
- This results in approx. 15 requests per second.
- This results in approx. 45 requests per second.

According to the collector capacity document (ref. Medium Collector) we are below the limits (for WMI); however, those 2 datasources are contributing A LOT to making the queues full. We're finding errors on a regular basis - example below.

To sum this up, we are looking for another way of doing the same thing without consuming so many resources on the collector end (due to the number of simultaneous WMI queries). Not sure if that's possible though. Did anyone have this need in the past and come up with a different, less resource-intensive solution? We're struggling here mainly because we come from an agent-based (not agentless) solution, which didn't face this problem because the load was distributed across the individual per-device agents.

Appreciate the help in advance! Thanks,
LogicMonitor Portal Security (Solved)

These articles:
https://techcrunch.com/2023/08/31/logicmonitor-customers-hit-by-hackers-because-of-default-passwords/?guccounter=1
https://www.bleepingcomputer.com/news/security/logicmonitor-customers-hacked-in-reported-ransomware-attacks/
...indicate that some LogicMonitor accounts may have had weak default passwords applied and become compromised. Until we have an official word from LogicMonitor, may I suggest that all LogicMonitor administrators:

- Delete or suspend any users that should not be in your system.
- Ensure that no "out of the box" accounts are Active (including the lmsupport account). You should set this account to "Suspended" until we have word that this account is not affected. Note that unless this account is Active, LogicMonitor Support cannot access your portal.
- Enable 2FA for ALL users. I mean, you did that already, right? RIGHT? IMPORTANT: You need to do this for administrator users, even if you have SSO.
- Ensure that any user that has not logged in recently (say for 60 days) is either deleted or set to Suspended (a small API sketch for spotting these is at the end of this post).
- IMPORTANT: Revoke administrator/manager rights from anyone that does not absolutely need them. The recommendation is 2 such users per LogicMonitor portal.
- If you don't recognise a user, seriously consider setting it to Suspended. Be cautious of system integration accounts - you may disrupt these if you are not careful.
- If a system has access, ensure that this is via an API user, not an Access Token on a named person.

I will update this post with other suggestions as they are made.

David_Bond · 2 years ago · Professor
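For the "not logged in recently" check mentioned above, the user list can be pulled from the REST API rather than reviewed by hand. A sketch that reuses the LMv1 signing pattern from earlier posts against the /setting/admins resource; the field names (such as lastLoginOn) are assumptions to verify against the API documentation for your portal version:

# List portal users and flag ones with no recent login (LMv1 auth; field names such as
# 'lastLoginOn' are assumptions - verify against the API docs for your portal version).
import base64
import hashlib
import hmac
import time
import requests

AccessId, AccessKey, Company = 'YOUR_API_ID', 'YOUR_API_KEY', 'yourportal'

resourcePath = '/setting/admins'
url = 'https://' + Company + '.logicmonitor.com/santaba/rest' + resourcePath + '?v=3&size=1000'
epoch = str(int(time.time() * 1000))
requestVars = 'GET' + epoch + '' + resourcePath
digest = hmac.new(AccessKey.encode(), requestVars.encode(), hashlib.sha256).hexdigest()
auth = 'LMv1 %s:%s:%s' % (AccessId, base64.b64encode(digest.encode()).decode(), epoch)

resp = requests.get(url, headers={'Authorization': auth, 'Content-Type': 'application/json'})
cutoff = time.time() - 60 * 24 * 3600          # 60 days ago, in epoch seconds
for user in resp.json().get('items', []):
    if user.get('lastLoginOn', 0) < cutoff:
        print(user.get('username'), '- no login recorded in the last 60 days')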
Simple Check for SSL Cert Expiration Monitoring

Monitoring SSL certificate expiry days can be done in LogicMonitor by making use of the datasource SSLCerts- (SSL Certificate Expiration). As a side note, an SSL certificate is used for certifying a web server that does secure socket layer encryption of the data between the web server and a client (web browser). SSL certificates are issued by organizations/companies called Certificate Authorities (CAs) for the purpose of proving the legitimacy of the web servers that encrypt the data for communication. The certificates issued are digitally signed by those CAs and can be trusted by the client based on the root certificates installed in common browsers. It is, however, possible to create a self-signed certificate, typically used for testing purposes; data will still be encrypted but the certificate will not be trusted by the client browsers.

When a device with an SSL certificate installed has been added to LogicMonitor, that datasource will be auto-applied, as with other normal datasources, and after some collection cycles the number of days remaining before the certificate expires should appear. If the monitoring does not work as expected, the common recommendation is to go through the following simple procedures:

1) Device check: whether or not the SSL certificate has been configured properly
2) Accessibility from the collector
3) Data collection test from the collector

1) For a start, check if the SSL certificate configuration is properly done on the web server.

Each web server (NGINX, IIS, and so on) may have a different way of setting up the certificate; the following is an example for NGINX:

ssl_certificate "/etc/cert/nginx/private/[cert name].crt";
ssl_certificate_key "/etc/cert/nginx/private/[cert name].key";

An open-port check would be good as well, with output like the following (note: the port may be bound to all interfaces or possibly only one interface on the web server):

Linux:
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN

Windows:
TCP 0.0.0.0:443 0.0.0.0:0 LISTENING

2) The next check is to access the web server from the collector (obviously the collector must be able to reach the device where the web server is installed).

Note: the collector debug window is needed for this check; please refer to this article: https://www.logicmonitor.com/support/settings/collectors/using-the-collector-debug-facility/

The main command is simply: !http (help !http will give info for the command itself)

$ !http https://10.13.13.9
HTTP response received at: 2017-03-26 16:28:55.581. Time elapsed: 20ms
HTTP/1.1 200 OK
Server: nginx/1.10.2
Date: Sun, 26 Mar 2017 08:28:55 GMT
Content-Type: text/html
Content-Length: 5948
Last-Modified: Wed, 04 Jan 2017 08:44:56 GMT
Connection: keep-alive
ETag: "586cb608-173c"
Accept-Ranges: bytes

This shows that the web server is accessible on port 443 (HTTPS) with response code 200.

3) The last check is whether the data, i.e. the remaining days to the expiry of the certificate, can be collected from the collector. The collector debug window is still needed for this check.

For a Linux collector:

$ !java -cp ../lib/certexpire.jar CertificateExpire /usr/local/logicmonitor/agent 10.13.13.9 10.13.13.9 443 true
Enable debug SSL cert
Get the support protocol, protocols=SSLv2Hello,SSLv3,TLSv1,TLSv1.1,TLSv1.2,
Get the enabled protocol, protocols=TLSv1,TLSv1.1,TLSv1.2,
Try to send request to server.
Request send ...
TrustManager: checkServerTrusted got 1 certs. Auth type: ECDHE_RSA
Exception caught - java.security.cert.CertificateException: Certificate received.
Certification 1 [Type: X.509]
Issue Date: Mon Jan 02 17:51:51 SGT 2017, Expiration Date: Sat Jul 01 17:51:51 SGT 2017
Got issue date - Mon Jan 02 17:51:51 SGT 2017, expiration date - Sat Jul 01 17:51:51 SGT 2017
97

For a Windows collector:

$ !java -cp ../lib/certexpire.jar CertificateExpire "C:\Program Files (x86)\LogicMonitor\Agent" fspk.lmsupport.com fspk.lmsupport.com 443 true
Enable debug SSL cert
Get the support protocol, protocols=SSLv2Hello,SSLv3,TLSv1,TLSv1.1,TLSv1.2,
Get the enabled protocol, protocols=TLSv1,TLSv1.1,TLSv1.2,
Try to send request to server.
Request send ...
TrustManager: checkServerTrusted got 1 certs. Auth type: DHE_RSA
Exception caught - java.security.cert.CertificateException: Certificate received.
Certification 1 [Type: X.509]
Issue Date: Thu Feb 02 03:16:57 PST 2017, Expiration Date: Sat Feb 02 03:16:57 PST 2019
Got issue date - Thu Feb 02 03:16:57 PST 2017, expiration date - Sat Feb 02 03:16:57 PST 2019
660

The basic command is !java, and the complete format is:

!java -cp ../lib/certexpire.jar CertificateExpire [collector installation folder] [device name/IP address] [device name/IP address] 443 true

Note:
* certexpire.jar is in the library of the collector agent
* the device name/IP address is the web server that is registered/added into the LogicMonitor portal
* the collector folder is either "C:\Program Files (x86)\LogicMonitor\Agent" or /usr/local/logicmonitor/agent

The final number in the output (97 and 660 above) is the number of days remaining before the certificate expires. The data collected can be verified on the device where the SSL certificate is installed by accessing the web server in a browser and viewing the details of the certificate loaded there. Having gone through all of the above checks with good results, the expiry monitoring will appear in LogicMonitor.

Purnadi_K · 8 years ago · Former Employee
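The value the DataSource reports can also be cross-checked with a few lines of Python run from the collector host. A short sketch, assuming the 'cryptography' package is installed (the host and port are placeholders):

# Cross-check certificate expiry from the collector host (assumes the 'cryptography' package is installed).
import datetime
import ssl
from cryptography import x509

host, port = '10.13.13.9', 443

pem = ssl.get_server_certificate((host, port))     # retrieves the cert even if it is self-signed
cert = x509.load_pem_x509_certificate(pem.encode())

print('Expires:', cert.not_valid_after)
print('Days remaining:', (cert.not_valid_after - datetime.datetime.utcnow()).days)

The days-remaining figure should line up with what the SSLCerts- datasource shows for the same host.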
Using LogicMonitor's REST API as a Power BI Source - Part 2

Overview

Back in 2020 I shared an article on how to use any of LogicMonitor's REST API methods as a datasource in Power BI Desktop for reporting. That opened a good deal of potential but also had some limitations; in particular, it relied on basic authentication, which will eventually be deprecated, and it could only make a single API call, so it could only fetch up to 1,000 records. I've documented how to use a LogicMonitor bearer token instead of basic authentication, but bearer tokens aren't currently available in every portal (just our APM customers for now) and that approach still faces the single-call limitation.

In lieu of a formal Power BI data connector for LogicMonitor being available yet, there is another option that is more secure and a good deal more flexible: using Microsoft Power BI Desktop's native support for Python! Folks familiar with LogicMonitor's APIs know there is a wealth of example Python scripts for many of our REST methods (example). These scripts not only allow us to leverage accepted methods of authentication but also allow combining calls and tweaking results in ways that can be very useful for reporting. Best of all, using these scripts inside Power BI isn't difficult and can even be used for templated reports (I'll include some working examples at the end of this article). While these instructions focus on Power BI Desktop, reports leveraging Python can also be published to the Power BI service (Microsoft article for reference).

Prerequisites

- Power BI Desktop. You can install this via Microsoft's Store app or from the following link: https://www.microsoft.com/en-us/download/details.aspx?id=58494
- The latest version of Python installed on the same system as Power BI Desktop. This can also be installed via Microsoft's Store app or from: https://www.python.org/downloads/windows/
- Some basic familiarity with LogicMonitor's REST APIs, though I'll provide working examples here to get you started. The full API reference can be found at: https://www.logicmonitor.com/support/rest-api-developers-guide/overview/using-logicmonitors-rest-api

First, install Power BI Desktop and Python, then configure each according to this Microsoft article: https://learn.microsoft.com/en-us/power-bi/connect-data/desktop-python-scripts. When adding the Python modules, you'll also want to add the 'requests' module by running:

pip install requests

Modifying Python Scripts for Use in Power BI

As mentioned in the Microsoft article above, Power BI expects Python scripts to use the Pandas module for output. If you're not familiar with Pandas, it's used extensively by the data science community for analyzing bulk data. Adding Pandas to one of the example scripts can be as easy as adding 'import pandas as pd' to the list of import statements at the top of the script, then converting the JSON returned by LogicMonitor's API to a Pandas dataframe. For example, if the script captured the API results (as JSON) in a variable called "allDevices", we can convert that to a Pandas dataframe with something like:

pandasDevices = pd.json_normalize(allDevices)

In that example, "pd" is the name we gave to the Pandas module back in the "import" statement we added, and "json_normalize(allDevices)" tells Pandas to take our JSON and convert it to a Pandas dataframe. We can then simply print that variable as our output for Power BI Desktop to use as reporting data. Below is a full Python script that fetches all the devices from your portal and prints them as a Pandas dataframe.
This is just a minor variation of an example given in our support documentation; you'd just need to enter your own LogicMonitor API ID, key, and portal name in the variables near the top.

#!/bin/env python
import requests
import json
import hashlib
import base64
import time
import hmac
# Pandas is required for PowerBI integration...
import pandas as pd

# Account Info...
# Your LogicMonitor portal's API access ID...
AccessId = 'REPLACE_WITH_YOUR_LM_ACCESS_ID'
# Your LogicMonitor portal's API access Key...
AccessKey = 'REPLACE_WITH_YOUR_LM_ACCESS_KEY'
# Your LogicMonitor portal. Example: if you access your portal at https://xyz.logicmonitor.com, then your portal name is "xyz"...
Company = 'REPLACE_WITH_YOUR_LM_PORTAL_NAME'

# Create list to keep devices...
allDevices = []

# Loop through getting all devices...
count = 0
done = 0
while done == 0:
    # Request Info...
    httpVerb = 'GET'
    resourcePath = '/device/devices'
    data = ''
    # The following query filters for just standard on-prem resources (deviceType=0), so adjust to suit your needs...
    queryParams = '?v=3&offset=' + str(count) + '&size=1000&fields=alertStatus,displayName,description,deviceType,id,link,hostStatus,name&filter=deviceType:0'

    # Construct URL...
    url = 'https://' + Company + '.logicmonitor.com/santaba/rest' + resourcePath + queryParams

    # Get current time in milliseconds...
    epoch = str(int(time.time() * 1000))

    # Concatenate Request details...
    requestVars = httpVerb + epoch + data + resourcePath

    # Construct signature...
    hmac1 = hmac.new(AccessKey.encode(), msg=requestVars.encode(), digestmod=hashlib.sha256).hexdigest()
    signature = base64.b64encode(hmac1.encode())

    # Construct headers...
    auth = 'LMv1 ' + AccessId + ':' + signature.decode() + ':' + epoch
    headers = {'Content-Type': 'application/json', 'Authorization': auth}

    # Make request...
    response = requests.get(url, data=data, headers=headers)

    # Parse response & total devices returned...
    parsed = json.loads(response.content)
    total = parsed['total']
    devices = parsed['items']
    # Use extend (not append) so each page of devices lands in one flat list...
    allDevices.extend(devices)
    numDevices = len(devices)
    count += numDevices
    if count == total:
        print("done")
        done = 1
    else:
        print("iterating again")

# (for debugging) Print all devices...
# print(json.dumps(allDevices, indent=5, sort_keys=True))

# Convert the JSON to a Pandas dataframe that PowerBI can consume...
resources = pd.json_normalize(allDevices)

# Print the dataframe...
print(resources)

If you run that Python script directly, you'll see it prints in a columnar format instead of the raw JSON returned by the API. It's a good example of how Pandas converts the raw data into a more formal data structure for manipulation and analysis. Power BI Desktop leverages that data directly for ingestion into its own reporting engine, which is a pretty powerful combination. Now let's show how to put it to use!

How to create a Power BI report that pulls data directly from LogicMonitor via Python

1. In Power BI Desktop, click the Get Data button. This can be to start a new report or to add to an existing report.
2. Choose to get data from an "Other" source, choose "Python script", then click the Connect button.
3. Paste in your complete and working Python script, then click OK. (Some examples are attached to the bottom of this article.)
4. Power BI will run the script. Depending on the amount of data being retrieved this can take anywhere from a few seconds to a few minutes. When it's complete, you'll see the Navigator pane with the name of the Python Pandas dataframe from the script output.
5. Check the box next to the item to see the data preview. If the sample looks good then click the Load button.
6. After Power BI has loaded the data you'll be presented with the report designer, ready for you to create your report. To see the full preview of the data from your portal, click the Data icon to the left of the report workspace.
7. When you need to pull the latest data from LogicMonitor, just click the Refresh button in the toolbar.

To Convert a Report to a Parameterized Template

If you've created a Python-based report and want to save it as a re-useable, parameterized template, we first need to add the necessary parameters and enable Power BI to pass those values to the script.

1. With the report we want to turn into a template active, click the Model icon to the left of the workspace. From there, click the three dots in the upper-right corner of the table generated by the Python script and choose Edit Query. That will open the Power Query Editor.
2. From here click Manage Parameters on the toolbar.
3. For our example we'll add three new parameters, which we'll call "AccessID", "AccessKey" & "PortalName" (feel free to label them however you choose). For each, enter the respective credential used for accessing your LogicMonitor API in the Current Value field. Below is an example of what it would look like when completed. When done click the OK button to close the dialog.
4. Next, click the table name in the Queries list ("alerts" in our example screenshot) and click the Advanced Editor option in the toolbar. You'll see Power BI's M language query for our datasource, including the Python script embedded in it. We're going to edit the script to replace the hard-coded API parameters with the Power BI parameters we defined instead.
5. Replace the values of the script's AccessId, AccessKey, and Company variables with the following, respectively (including the quotes):
" & AccessID & "
" & AccessKey & "
" & PortalName & "
Note that those will sit inside the single quotes for each of the variables. Refer to the screenshot below for an example of how it would look (the changes have been highlighted). When ready click the Done button.
6. Click Close & Apply on the Power Query Editor to commit our changes.
7. If all looks good, now let's save this as a Power BI template. Click the File menu, then choose Save As. Change the Save As Type to "Power BI template files (*.pbit)", provide a filename and click Save. Power BI will prompt you to provide a description for your template.

Your report is now ready for sharing! When you open the template file, you'll be prompted to enter values for the parameters we configured in steps 2 & 3. Enter those, hit the Load button, and you'll then be presented with the report designer ready to go.

Example Files

Here are some example Python scripts modified to use Pandas to help get you started:
- get_alerts.powerbi.py
- get_devices_powerbi.py

Here are some basic, ready-to-use Power BI report templates based on the above scripts:
- Power BI template for reporting on Alerts
- Power BI template for reporting on Resources/Devices