New SilverPeak_Orchestrator Datasource - How to Add Alarm List?
I just added the SilverPeak_Orchestrator datasources in our portal, but we are more interested in showing the Active Alarms. The good thing is I have a starting point using the existing datasources. What is the best LogicModule to use if I want to show the Active Alarms under the datasources? I can get the list of Active Alarms via the API, and each alarm has an ID I can use to correlate the clears. Any ideas, or is there an existing LogicModule that does something similar? A sample alarm from the API:

{
  "id": 29611,
  "applianceId": "42.NE",
  "severity": "CRITICAL",
  "sequenceId": -1,
  "source": "/orchestrator/connectivity",
  "acknowledged": false,
  "clearable": true,
  "timeOccurredInMills": 1644502937000,
  "description": "Orchestrator cannot reach this appliance",
  "type": "SW",
  "recommendedAction": "",
  "serviceAffect": true,
  "typeId": 6815748,
  "name": "UNREACHABLE_APPLIANCE",
  "occurrenceCount": 1,
  "hostName": "test",
  "closed": false
}
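I'm not aware of an exact out-of-the-box module for this, but one common pattern is a script datasource that creates one instance per active alarm ID and drops the instance (or clears the datapoint) when the corresponding clear arrives. A minimal sketch of the correlation, shown in Python for illustration (an actual LM module would be Groovy; field names follow the sample payload above, and the API fetch/auth is omitted):

```python
# Hypothetical sketch: correlate active alarms with their clears by "id".
# Field names ("id", "severity", "closed") follow the sample payload above.

SEVERITY_VALUES = {"INFO": 1, "MINOR": 2, "MAJOR": 3, "CRITICAL": 4}

def active_alarms(alarms):
    """Keep only alarms that have not been closed/cleared, keyed by id."""
    return {a["id"]: a for a in alarms if not a.get("closed")}

def to_datapoints(alarms):
    """Emit one numeric severity per active alarm id, suitable as per-instance datapoints."""
    return {aid: SEVERITY_VALUES.get(a["severity"], 0)
            for aid, a in active_alarms(alarms).items()}

alarms = [
    {"id": 29611, "severity": "CRITICAL", "closed": False},
    {"id": 29612, "severity": "MINOR", "closed": True},  # cleared alarm drops out
]
print(to_datapoints(alarms))  # {29611: 4}
```

Keying instances off the alarm id is what makes the clear correlation work: when the API reports the alarm as closed, the instance simply stops being discovered.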
Sharing Dashboard views
I was wondering if this would be a good thread for sharing some dashboard widget views. I am in the process of implementing LogicMonitor, and it would be great to have a view of what kinds of dashboards are being used by different teams, like Networks, Server, Storage, Security, etc. Just a screenshot describing the view would be awesome. I have added some I have created on our implementation. Thanks in advance, Medi
Linux HTTPD index.html Change Monitoring
Active Directory servers have the anyChange datapoint for when a change is made to the AD config file, but I want something like that to tell me when the contents of /var/www/html/index.html change. Does anyone know the name of such a thing in LM Exchange, or how I can otherwise accomplish this? TIA, Gordon
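One way to accomplish this without a ready-made module is a script datasource that hashes the file and reports when the hash changes. A minimal sketch of that idea, assuming local file access from the script (shown in Python for illustration; the state-file location and names are my own, not an LM Exchange module):

```python
import hashlib
import os
import tempfile

# Hypothetical location for remembering the last-seen hash between polls
STATE_FILE = os.path.join(tempfile.gettempdir(), "index_html.sha256")

def file_digest(path):
    """SHA-256 of the file's current contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def changed(path):
    """Return 1 if the file's hash differs from the stored one (or no hash
    is stored yet), else 0; always records the current hash for next time."""
    digest = file_digest(path)
    previous = None
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            previous = f.read().strip()
    with open(STATE_FILE, "w") as f:
        f.write(digest)
    return 0 if digest == previous else 1
```

Printing something like `changed=1` from this would give a datapoint you can alert on, much like anyChange.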
VMware Velo Cloud - Refined modules (to permit usage of API Token instead of user/pass)
Hello, due to recent requirements imposed by a customer of ours, we've refined the Velo Cloud module suite. We've tweaked the modules to allow the use of a generated API token only (via the property velo.apitoken.key) and to disregard velo.user/velo.pass. The reason is that the customer did not want to share any credentials to their infrastructure (even read-only ones), since they did not want us to have GUI access. The modules collect the exact same metrics as the OOTB ones; the only difference is the authentication. In addition to the original suite, we've also created the addCategory_VeloCloudAPI_TokenStatus PropertySource. It allows the token to be renewed and the resource on LM updated automatically, without any interaction from the end user. This automation is required because VMware only allows tokens valid for up to 12 months, and we don't want to miss or forget renewing one, since that would cause monitoring to fail. With that in mind we came up with this PropertySource. However, to use it the velo.user needs to be mapped (only the user), and that user needs more than read-only permissions (the required privileges for the different API calls are listed in their Swagger page): CREATE, ENTERPRISE TOKEN, READ, UPDATE. The remaining modules work with only velo.apitoken.key mapped; the extra PropertySource is just for renewing the token in an automated way (we've coded it to renew the token if there are fewer than 10 days until it expires).
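The renew-when-close-to-expiry decision described above is simple to sketch. Shown in Python for illustration (the real PropertySource is Groovy, and a millisecond epoch expiry timestamp is an assumption about the API's response format):

```python
from datetime import datetime, timedelta, timezone

RENEW_WINDOW_DAYS = 10  # matches the "<10 days" rule described above

def should_renew(expiry_epoch_ms, now=None):
    """True when the token expires within the renewal window."""
    now = now or datetime.now(timezone.utc)
    expires = datetime.fromtimestamp(expiry_epoch_ms / 1000, tz=timezone.utc)
    return expires - now < timedelta(days=RENEW_WINDOW_DAYS)
```

Running this on every PropertySource poll means a renewal can never be missed by more than one polling cycle.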
Despite these still being in security review, here you go:

Datasources:
- VMware_VeloCloud_EdgeLinkMetrics - MTWGY4
- VMware_VeloCloud_EdgeLinkEventQuality - J7WKEF
- VMware_VeloCloud_EdgeLinkHealth - AWTYNE
- VMware_VeloCloud_EdgeHealth - 4DPLLC

PropertySources:
- addCategory_VeloCloudAPI - RTCDA2
- addERI_VMware_VeloCloud - ECDHXG
- addCategory_VeloCloudAPI_TokenStatus - EER7JK

TopologySources:
- VMware_VeloCloud_Topology - CZ7AKJ

Hope this helps in case anyone has the same needs. Thank you!

Zerto JobStats
Good morning all, for anyone looking to monitor Zerto replication jobs via LogicMonitor, we have this PowerShell-based batchscript datasource. I hope you find it useful; it is 6477DR in the LM Exchange. These are the collected datapoints: ActualRPO, IOPS, LogicMonitorScriptTime, Priority, ProtectedEntities, ProvisionedStorageInMB, RecoveryEntities, SourceEntities, Status, SubStatus, TargetEntities, UsedStorageInMB. These are the calculated datapoints based upon those collected: ProvisionedStorageInGB, UsedStorageInGB. Please respond with any questions. Respectfully, Alejandro Esmael

Office365 Monitoring
Hi everybody! I've been using Mike Suding's monitoring solution for a while, and I've expanded it a bit to monitor more of Office 365. The monitors included:

- Custom Domains (quantity) - WHEKJJ
- Deleted Users (quantity) - ZHADY9
- Global Admins (quantity) - 7GGZWZ
- Licenses Assignable (quantity, based on type) - R46EGX
- Licenses Assigned (quantity, based on type) - GHRNLL
- Licensed / Unlicensed Users (quantity) - 4PJZJ4
- MFA Users (quantity of enabled/disabled users) - WZFAWK
- Users and devices in the Office 365 tenant (quantity; devices if clients are joined to Azure AD) - PLMP22

Hope they help the community out.

Support for Veeam 11 PowerShell Module
Veeam 11 released with a PowerShell module rather than a PS snap-in. Is anyone working to update the Veeam LogicModules? https://www.veeam.com/veeam_backup_11_0_whats_new_wn.pdf Quote: "PowerShell module — By popular demand, we switched from the PowerShell snap-in to the PowerShell module, which can be used on any machine with the backup console installed. We also no longer require PowerShell 2.0 installed on the backup server, which is something many customers had problems with. New PowerShell cmdlets — V11 adds 184 new cmdlets for both newly added functionality and expanded coverage of the existing features with a particular focus on restore functionality."

MTR path monitoring
I have not published this given the security implications of running something like this... I would love some help to try to reduce the risks. Provided the collector is running Linux (only CentOS has been tested) and the host has mtr installed, this will monitor the path out to your LogicMonitor account along with the latency; the path is stored as an instance-level property on the device. Look at the updateState code: every time this runs, it compares the path to the previous path stored in the instance-level property. If the path changes, it updates another instance property named alert_message with the old vs. new path, so you can put it in your favourite tool to see the diff; if you set the datapoints to alert, you will get both old and new paths in the alert messages you receive. You can also map the alerts to spikes in latency, etc. If anyone has a nicer way of managing all this, please do share!

<?xml version="1.0" encoding="UTF-8" ?> <feed version="1.0" hasPendingRequests="false" > <company></company> <status>200</status> <errmsg>OK</errmsg> <interval>0</interval> <entry type="predatasource"> <version>1631754840</version> <name>MTR to LogicMonitor</name> <displayedas>MTR to LogicMonitor</displayedas> <description>This Datasource will only work on Linux, currently tested with CentOS 8 only; if you want to add this to a collector you will need to add a property to the instance mtr.monitoring</description> <collector>script</collector> <hasMultiInstances>false</hasMultiInstances> <schedule>180</schedule> <appliesTo>isCollectorDevice() && getPropValue("mtr.monitoring")</appliesTo> <wildcardauto>false</wildcardauto> <wildcardpersist>false</wildcardpersist> <wildcardlinuxscript></wildcardlinuxscript> <wildcardlinuxcmdline></wildcardlinuxcmdline> <wildcardwinscript></wildcardwinscript> <wildcardwincmdline></wildcardwincmdline> <wildcardgroovyscript></wildcardgroovyscript> <wildcardschedule>1440</wildcardschedule> <wildcarddisable>false</wildcarddisable> 
<wildcarddeleteinactive>false</wildcarddeleteinactive> <agdmethod>none</agdmethod> <agdparams></agdparams> <group>Custom Monitoring</group> <tags></tags> <technology></technology> <adlist><![CDATA[{"agdmethod":"none","agdparams":"","id":0,"filters":[],"params":{}}]]></adlist> <schemaVersion>2</schemaVersion> <dataSourceType>1</dataSourceType> <attributes> <attribute> <name>scripttype</name> <value>embed</value> <comment></comment> </attribute> <attribute> <name>scriptgroovy</name> <value>import com.santaba.agent.groovyapi.expect.Expect; import com.santaba.agent.groovyapi.snmp.Snmp; import com.santaba.agent.groovyapi.http.*; import com.santaba.agent.groovyapi.jmx.*; import org.xbill.DNS.*; import java.util.regex.Matcher; import java.util.regex.Pattern; import groovy.json.* import org.apache.commons.codec.binary.Hex import javax.crypto.Mac import javax.crypto.spec.SecretKeySpec import com.santaba.agent.util.Settings import java.security.MessageDigest import com.santaba.agent.live.LiveHostSet import org.apache.http.client.methods.* import org.apache.http.entity.ContentType import org.apache.http.entity.StringEntity import org.apache.http.impl.client.CloseableHttpClient import org.apache.http.impl.client.HttpClients import org.apache.http.client.config.RequestConfig import org.apache.http.HttpHost import org.apache.http.util.EntityUtils import java.net.URLEncoder proxy_host = hostProps.get("proxy.host")?hostProps.get("proxy.host"):null proxy_port = hostProps.get("proxy.port")?hostProps.get("proxy.port"):null String apiId = hostProps.get("lmaccess.id")?:hostProps.get("apiaccessid.key") String apiKey = hostProps.get("lmaccess.key")?:hostProps.get("apiaccesskey.key") def portalName = hostProps.get("lmaccount")?:Settings.getSetting(Settings.AGENT_COMPANY) if (proxy_host) { System.getProperties().put("proxySet", "true"); System.getProperties().put("proxyHost", proxy_host); System.getProperties().put("proxyPort", proxy_port); } def currentPath = instanceProps.get('mtr.path') 
def address = "${portalName}.logicmonitor.com"; proc = ('mtr --csv ' + address).execute().text; ArrayList<String> path = new ArrayList<String>(); if(proc) { i = 0 proc.eachLine { try { hops = it.split(',') if(i != 0) { path.add(hops[5]); i += 1; } else { i += 1; } } catch(e) { hopSteps += 'Error: ' + e + '\n'; } } fullPath = path.join(',') if(fullPath != currentPath) { updateState(fullPath, instanceProps.get("system.instanceid"), hostProps.get("system.deviceId"), instanceProps.get("instance"), portalName, apiId, apiKey, 'mtr.path') updateState("Original Path:${currentPath} New Path: ${fullPath}", instanceProps.get("system.instanceid"), hostProps.get("system.deviceId"), instanceProps.get("instance"), portalName, apiId, apiKey, 'alert_message') println "path_changed=1" } else { println "path_changed=0" } icmp_time = ("ping ${address} -c1").execute().text =~ /time=(\d+)/; println "icmp_time=${icmp_time[0][1]}" } else { println 'UNKNOWN issue'; } return 0; // internal stuff def updateState(message, instanceId, deviceId, dataSourceName, portalName, apiId, apiKey, field) { datasourceId = getDataSourceId(deviceId, dataSourceName, portalName, apiId, apiKey) instance_data = "{\"customProperties\":[{\"name\":\"mtr.path\",\"value\":\"${message}\"}]}" old_data = apiGetV2(portalName, apiId, apiKey, "/device/devices/${deviceId}/devicedatasources/${datasourceId['ds_id']}/instances/${datasourceId['instance_id']}") old_data['description'] = message old_data['customProperties'] << [name: field, value: message] json_data = groovy.json.JsonOutput.toJson(old_data) response = rawPutV2("PUT",apiId,apiKey,portalName,"/device/devices/${deviceId}/devicedatasources/${datasourceId['ds_id']}/instances/${datasourceId['instance_id']}","",json_data) } def getDataSourceId(deviceId, dataSourceName, portalName,apiId,apiKey) { //pp(do_request('GET', '/setting/datasources?filter=name:"Crestone custom test"')) def args = [filter:"dataSourceName:\"${dataSourceName}\""] data = apiGetV2(portalName, apiId, 
apiKey, "/device/devices/${deviceId}/devicedatasources", args) instance_data = apiGetV2(portalName, apiId, apiKey, "/device/devices/${deviceId}/devicedatasources/${data['items'][0]['id']}/instances") mapping_data = [instance_id: instance_data['items'][0]['id'], ds_id: instance_data['items'][0]['deviceDataSourceId']] return mapping_data } def apiGetV2(portalName, apiId, apiKey, endPoint, Map args=[:]) { def request = rawGetV2(portalName, apiId, apiKey, endPoint, args) if (request.getResponseCode() == 200) { def payload = new JsonSlurper().parseText(request.content.text) return payload } else { throw new Exception("Server return HTTP code ${request.getResponseCode()}") } } def rawGetV2(portalName, apiId, apiKey, endPoint, Map args=[:]) { def auth = generateAuth('GET', apiId, apiKey, endPoint) def headers = ["Authorization": auth, "Content-Type": "application/json", "X-Version":"2", "External-User":"true"] def url = "https://${portalName}.logicmonitor.com/santaba/rest${endPoint}" if (args) { def encodedArgs = [] args.each{ k,v -> encodedArgs << "${k}=${java.net.URLEncoder.encode(v.toString(), "UTF-8")}" } url += "?${encodedArgs.join('&')}" } def request = url.toURL().openConnection() headers.each{ k,v -> request.addRequestProperty(k, v) } return request } def rawPutV2(_verb, _accessId, _accessKey, _account, _resourcePath, _queryParameters, _data){ responseDict = [:] url = 'https://' + _account + '.logicmonitor.com' + '/santaba/rest' + _resourcePath + _queryParameters StringEntity entity = new StringEntity(_data) epoch = System.currentTimeMillis() requestVars = _verb + epoch + _data + _resourcePath hmac = Mac.getInstance('HmacSHA256') secret = new SecretKeySpec(_accessKey.getBytes(), 'HmacSHA256') hmac.init(secret) hmac_signed = Hex.encodeHexString(hmac.doFinal(requestVars.getBytes())) signature = hmac_signed.bytes.encodeBase64() CloseableHttpClient httpclient = HttpClients.createDefault() http_request = new HttpPut(url) if (proxy_host) { RequestConfig requestConfig = 
RequestConfig.custom() .setProxy(new HttpHost(proxy_host, new Integer(proxy_port))) .build(); http_request.setConfig(requestConfig); } http_request.addHeader("Authorization", "LMv1 " + _accessId + ":" + signature + ":" + epoch) http_request.setHeader("X-Version", "2") http_request.setHeader("Accept", "application/json") http_request.setHeader("Content-type", "application/json") http_request.setEntity(entity) response = httpclient.execute(http_request) responseBody = EntityUtils.toString(response.getEntity()) code = response.getStatusLine().getStatusCode() responseDict['code'] = code ?: null responseDict['body'] = responseBody ?: null return responseDict } static String generateAuth(method,id, key, path) { Long epoch_time = System.currentTimeMillis() Mac hmac = Mac.getInstance("HmacSHA256") hmac.init(new SecretKeySpec(key.getBytes(), "HmacSHA256")) def signature = Hex.encodeHexString(hmac.doFinal("${method}${epoch_time}${path}".getBytes())).bytes.encodeBase64() return "LMv1 ${id}:${signature}:${epoch_time}" }</value> <comment></comment> </attribute> <attribute> <name>windowsscript</name> <value></value> <comment></comment> </attribute> <attribute> <name>linuxscript</name> <value></value> <comment></comment> </attribute> <attribute> <name>windowscmdline</name> <value></value> <comment></comment> </attribute> <attribute> <name>linuxcmdline</name> <value></value> <comment></comment> </attribute> <attribute> <name>properties</name> <value></value> <comment></comment> </attribute> </attributes> <datapoints> <datapoint> <name>path_changed</name> <dataType>7</dataType> <type>2</type> <postprocessormethod>namevalue</postprocessormethod> <postprocessorparam>path_changed</postprocessorparam> <usevalue>output</usevalue> <alertexpr></alertexpr> <alertmissing>1</alertmissing> <alertsubject>MTR path changes to your LogicMonitor account</alertsubject> <alertbody>MTR path has changed: ##alert_message##</alertbody> <enableanomalyalertsuppression></enableanomalyalertsuppression> 
<adadvsettingenabled>false</adadvsettingenabled> <warnadadvsetting></warnadadvsetting> <erroradadvsetting></erroradadvsetting> <criticaladadvsetting></criticaladadvsetting> <description></description> <maxvalue></maxvalue> <minvalue></minvalue> <userparam1></userparam1> <userparam2></userparam2> <userparam3></userparam3> <iscomposite>false</iscomposite> <rpn></rpn> <alertTransitionIval>0</alertTransitionIval> <alertClearTransitionIval>0</alertClearTransitionIval> </datapoint> <datapoint> <name>icmp_time</name> <dataType>7</dataType> <type>2</type> <postprocessormethod>namevalue</postprocessormethod> <postprocessorparam>icmp_time</postprocessorparam> <usevalue>output</usevalue> <alertexpr></alertexpr> <alertmissing>1</alertmissing> <alertsubject></alertsubject> <alertbody></alertbody> <enableanomalyalertsuppression></enableanomalyalertsuppression> <adadvsettingenabled>false</adadvsettingenabled> <warnadadvsetting></warnadadvsetting> <erroradadvsetting></erroradadvsetting> <criticaladadvsetting></criticaladadvsetting> <description></description> <maxvalue></maxvalue> <minvalue></minvalue> <userparam1></userparam1> <userparam2></userparam2> <userparam3></userparam3> <iscomposite>false</iscomposite> <rpn></rpn> <alertTransitionIval>0</alertTransitionIval> <alertClearTransitionIval>0</alertClearTransitionIval> </datapoint> </datapoints> <graphs> <graph> <name>Latency to LogicMonitor</name> <title>Latency to LogicMonitor</title> <verticallabel>ms</verticallabel> <rigid>false</rigid> <maxvalue>NaN</maxvalue> <minvalue>NaN</minvalue> <displayprio>1</displayprio> <timescale>1day</timescale> <base1024>false</base1024> <graphdatapoints> <graphdatapoint> <name>ICMPms</name> <datapointname>icmp_time</datapointname> <cf>1</cf> </graphdatapoint> </graphdatapoints> <graphvirtualdatapoints> </graphvirtualdatapoints> <graphdatas> <graphdata> <type>1</type> <legend>ICMPms</legend> <color>silver</color> <datapointname>ICMPms</datapointname> 
<isvirtualdatapoint>false</isvirtualdatapoint> </graphdata> </graphdatas> </graph> </graphs> <overviewgraphs> </overviewgraphs> <scripts> </scripts> </entry> </feed>

Juniper SRX Screens - Locator ID A3X9GD
Are you using Juniper SRX devices for security enforcement? Do you have Screens configured? If "yes" to the first but "no" to the second, I'd advise you to: read this, think about it for a little while, proceed cautiously with an implementation, then return here/LM Exchange to import this datasource (once it passes LM review) to get visibility into the signature-based screens you put in place. If "yes" to both, import this datasource (once it passes LM review) to get visibility into the signature-based screens you put in place.

NOTES:
- Locator ID: A3X9GD
- This datasource only captures signature-based screens. If you want the stats-based stuff, you are on your own.
- It doesn't indicate whether you chose to forward or drop matching traffic.
- OOB it has no alert thresholds.

Shout-out to @Stuart Weenig for kick-starting me on this back in the late spring... see what happens when you attend Office Hours and ask questions!

Oracle DB names via SNMP
Given the existing modules use SSH/WMI to discover the DB names we required a way without the need to SSHinto a server we have published a Property Source4TK9LN that utilises SNMP to pull the DB names and then populate auto.oracle_dbs, you will either need to update / fork existing Oracle datasources or move this to a DS and set the oracle_dbnames there. Also attached inline. any improvements welcome! { "scheduleOption": 0, "dataType": 0, "description": "List all Oracle databases with SNMP", "appliesTo": "hasCategory(\"OracleDB\")", "technology": "", "type": "propertyrule", "params": [ { "name": "linuxcmdline", "comment": "", "value": "" }, { "name": "linuxscript", "comment": "", "value": "" }, { "name": "scriptgroovy", "comment": "", "value": "// Locate Oracle database names via SNMP, tested on Linux\nimport com.santaba.agent.groovyapi.snmp.Snmp\n\ndef snmp_timeout = 15000\n\ndef g_hostname = hostProps.get(\"system.hostname\")\ndef g_community = hostProps.get(\"snmp.community\")\ndef g_version = hostProps.get(\"snmp.version\")\ndef g_security_name = hostProps.get(\"snmp.security\")\ndef g_auth_proto = hostProps.get(\"snmp.auth\")\ndef g_auth_token = hostProps.get(\"snmp.authtoken\")\ndef g_priv_proto = hostProps.get(\"snmp.priv\")\ndef g_priv_token = hostProps.get(\"snmp.privtoken\")\n\ndef snmp = new lm_snmp( g_hostname, g_community, g_version, g_security_name, g_auth_proto, g_auth_token, g_priv_proto, g_priv_token, snmp_timeout)\n// List all running processes\noutput = snmp.snmpwalk(\"1.3.6.1.2.1.25.4.2.1.4\")\ndef databases = [:]\ndef dblist = []\noutput.eachLine { entry ->\n process = entry.split(\"=\")\n // Find the Oracle process names\n def proc_name = process[1] =~ /oracle(?!PRD)(\\w+)/\n if(proc_name) {\n // Dump it in a Map to keep it unique\n databases[proc_name[0][1]] = 1\n }\n}\n// Loop the Map and pump them back into an array\ndatabases.each{entry -> dblist << entry.key}\ndb_names = dblist.join(\",\")\nprint \"auto.oracle_dbs=${db_names}\"\n\n// Core 
classes\nclass lm_snmp\n{\n\n String community\n String version // v1, v2c, v3\n String security_name\n String auth_proto // MD5\n String auth_token\n String priv_token\n String priv_proto // DES\n Integer timeout = 30000\n\n Map v3_map = [:]\n\n String hostname\n\n lm_snmp( String hostname, String community, String version = \"v3\", String security_name = null, String auth_proto = null, String auth_token=null, String priv_proto=null, String priv_token=null, Integer timeout = 30000 ) {\n\n this.hostname = hostname\n this.community = community\n this.version = version\n this.timeout = timeout\n\n if ( this.version == \"v3\" )\n {\n this.security_name = security_name\n if ( auth_proto == null ) { this.auth_proto = \"SHA\" } else { this.auth_proto = auth_proto }\n this.auth_token = auth_token\n\n if ( priv_proto == null ) { this.priv_proto = \"AES\" } else { this.priv_proto = priv_proto }\n this.priv_token = priv_token\n\n v3_map[\"snmp.version\"] = \"v3\"\n v3_map[\"snmp.security_name\"] = this.security_name\n v3_map[\"snmp.auth_proto\"] = this.auth_proto\n v3_map[\"snmp.auth_token\"] = this.auth_token\n v3_map[\"snmp.priv_proto\"] = this.priv_proto\n v3_map[\"snmp.priv_token\"] = this.priv_token\n }\n }\n\n def snmpget ( String oid )\n {\n\n println \"Trying: ${oid}\"\n\n if ( this.version == \"v3\" )\n {\n //return Snmp.getV3(this.hostname, this.security_name, this.auth_proto, this.auth_token, this.priv_proto, this.priv_token,oid, this.timeout)\n return Snmp.get(this.hostname, oid, v3_map )\n }\n else\n {\n return Snmp.get(this.hostname, this.community, this.version, oid, this.timeout)\n }\n\n }\n\n def snmpwalk ( String oid )\n {\n\n println \"debug: ${this.hostname}, ${this.security_name}, ${this.auth_proto}, ${this.auth_token}, ${this.priv_proto}, ${this.priv_token}, ${oid}, ${this.timeout.toString()}\"\n\n if ( this.version == \"v3\" )\n {\n //return Snmp.walkV3(this.hostname, this.security_name, this.auth_proto, this.auth_token, this.priv_proto, 
this.priv_token, oid, this.timeout)\n return Snmp.walk( this.hostname, oid, v3_map)\n }\n else\n {\n return Snmp.walk(this.hostname, this.community, this.version, oid, this.timeout)\n }\n }\n}" }, { "name": "scripttype", "comment": "embed", "value": "embed" }, { "name": "windowscmdline", "comment": "", "value": "" }, { "name": "windowsscript", "comment": "", "value": "" } ], "version": 1628127167, "tags": "", "auditVersion": 0, "name": "List_Oracle_Databases_SNMP", "id": 153, "group": "Oracle" }

Office 365 Service Status Checking
Hi all, thought I would share a couple of new datasources I have written to enhance the Office 365 checks currently provided by LM. It appears a lot of people have been asking for Office 365 service status, so I knocked the below together. These use the existing device properties you have set for Office 365 and use PowerShell to get the service information in two ways. The first is a simple top-level service status. The next check (Extended) shows the status of the features that make up each of those services. Both of the above use Active Discovery to pull in info from any new services Microsoft may add in the future, and Office365_ServiceStatusExtended will also group the features that are found. Hopefully this helps a few people out, as it's something I have wanted for a while. Let me know if you can think of any tweaks that may need to be made, and apologies, I'm not the neatest of script writers, but it definitely does the job!

hostProps.set() workaround
Hey all, so it looks like I'm not the only one trying to find a way to update device properties on the fly using the collector. I'm not sure why hostProps.set() isn't a working function yet, but my workaround involved making API calls when device properties needed to be updated right away. Of course, it doesn't make sense to have a collector server do the extra work of making API calls to accomplish this, especially considering it could cause downstream effects like API throttling. So I went through the effort of learning some Java and dissecting the collector jar files to figure out if there was some other way to do it. Here's what I found:

import com.santaba.agent.debugger.*
println "Updating property: system.${LMObj} :: OldValue: ${hostProps.get("system.${LMObj}")} :: NewValue: ${currentObj}"
task = "!hostproperty action=add host=${hostProps.get("system.hostName")} property=${LMObj} value=${currentObj}"
HostPropertyTask updater = new HostPropertyTask(task)
updater.run()
println updater.output
return updater.exitCode

A few words of caution:

❗ This code updates SYSTEM properties, NOT auto.* properties. This is a very important distinction. This functionality could really ruin your day if you deploy a datasource that updates properties such as system.ips, system.hostname, system.<user>, system.<password>, etc.
❗ This does not work unless you update your agent config file (agent.conf) on the collector (and restart the service for it to take effect):
❌ groovy.script.runner=sse
✔️ groovy.script.runner=agent
❗ LogicMonitor could break this functionality at any time in a future release. I've only tested this on the latest few general releases and it appears to work well, but it could break at any time.
❗ The audit log won't tell you that the host properties were modified. Instead, you only see the results of the change: auto group membership, Active Discovery, SDT, etc.
If the change itself gets logged somewhere, I don't know where you might find it. Lastly, you definitely should not be using this method every time your datasource runs. Make sure to implement some logic to only update the property if and when needed. I have no idea how well this code snippet scales beyond a few hundred resources per minute, and since I haven't found any documentation on it, I'm using it sparingly and not yet heavily relying on it to work 100% of the time. That being said, so far it seems to work quite well. Feel free to report how well it worked out if you aren't afraid to scale it like crazy and measure the performance.
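To that last point, a guard that skips the update when nothing actually changed is cheap to add. A sketch of the idea (Python for illustration; the actual datasource code is Groovy, and sync_property is a hypothetical helper name):

```python
def sync_property(host_props, name, new_value, update_fn):
    """Issue an update only when the stored property differs from new_value,
    avoiding a needless HostPropertyTask run on every poll.

    host_props: dict-like current properties; update_fn: callable that
    actually performs the property write (API call, task, etc.)."""
    if host_props.get(name) == new_value:
        return False  # unchanged: skip the update entirely
    update_fn(name, new_value)
    return True
```

With a guard like this, the expensive path only fires on genuine transitions, which keeps the scaling concern above from compounding.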
Rapid7 integration with LogicMonitor
Hello y'all, has anyone integrated Rapid7 InsightOps alerting with LogicMonitor? When I create my alert in Rapid7, I have the option of providing a webhook to LogicMonitor. I didn't see any integration in the LM Exchange portal. Has anyone done this? Tips and guidance would be most appreciated.

[EventSource] FSLogix VHD Lock Alerts
Installation
1. Import the FSLogix Apps PropertySource (L7K9XW) and create a new EventSource:
Name: FSLogix Locks
Applies to: hasCategory("FSLogixEnabled")
Type: Script
Event Script: Copy the code at the bottom of the post and save it as "fslogix.ps1". Upload this into LM and set these fields accordingly:
Windows Script: fslogix.ps1
Parameters: ##HOSTNAME##
Schedule: 5 minutes
Add a filter: Type: Message, Comparison: Contain, Value: locked
Clear after: 60 minutes (or however long you want)
Alert Message: Host: ##HOST## Message: ##MESSAGE## Detected on: ##START##
Note: Your collector will need permissions to view the event logs of the remote servers!

Save the below as fslogix.ps1:

$hostname = $args[0]
$date = (Get-Date).AddMinutes(-5)
$eventlogs = Get-WinEvent -ComputerName $hostname -LogName "Microsoft-FSLogix-Apps/Operational" | ? { $_.TimeCreated -gt $date }
$object = New-Object System.Object
# Add an empty events collection to the output object
$object | Add-Member -MemberType NoteProperty -Name events -Value @()
foreach ($event in $eventlogs) {
    $obj = New-Object System.Object
    $obj | Add-Member -Type NoteProperty -Name happenedOn -Value $event.TimeCreated.ToString("yyyy-MM-ddTHH:mm:ss")
    $obj | Add-Member -Type NoteProperty -Name severity -Value $event.LevelDisplayName
    $obj | Add-Member -Type NoteProperty -Name message -Value $event.Message
    $object.events += $obj
}
$output = $object | ConvertTo-Json
return $output

Cisco Prime License Manager
Locator ID: 22K7WT
Properties required: cisco.prime.user && cisco.prime.pass
Ensure the following services are running:
- 'Cisco Prime LM Resource API'
- 'Cisco Prime LM Resource Legacy API'
See the CLI commands below:
admin:utils service activate Cisco Prime LM Resource API
admin:utils service activate Cisco Prime LM Resource Legacy API

VMs monitoring based on vSphere folder
Hello, in our infrastructure we have a vCenter that contains several thousand VMs. Monitoring this device is complex and slowed down by the tens of thousands of datapoints extracted for each VM; moreover, it is necessary to apply different rules and standards depending on the folder where a VM resides. Consulting the query logic (Groovy) of the "VMware_vCenter_*" datasources, I thought of changing the vCenter query method via the ESX API, in particular by invoking the Java "searchManagedEntity" method in an alternative way during the instance extraction phase, in order to focus the extraction on a specific, reduced set of instances. In this way, it is possible to extract only the information relating to the contents of a folder in vCenter, which was the goal. From:

// Open a connection to the vSphere API, get a service instance and root folder
def svc = new ESX();
svc.open(addr, user, pass, 10 * 1000); // timeout in 10 seconds
def rootFolder = svc.getServiceInstance().getRootFolder();
// Get all of the VMs
def vms = new InventoryNavigator(rootFolder).searchManagedEntities("VirtualMachine");

To:

def String fName = hostProps.get("esx.folder");
// Open a connection to the vSphere API, get a service instance and root folder
def svc = new ESX();
svc.open(addr, user, pass, 10 * 1000); // timeout in 10 seconds
def rootFolder = svc.getServiceInstance().getRootFolder();
// Scope discovery to the folder named in the esx.folder property
def fTarget = new InventoryNavigator(rootFolder).searchManagedEntity("Folder", fName);
// Get all of the VMs under that folder
def vms = new InventoryNavigator(fTarget).searchManagedEntities("VirtualMachine");

This code is from the "VMware_vCenter_VMDiskCapacity" datasource, but the same concepts apply to all the datasources based on Groovy + ESX API that interrogate vCenter at the instance level.
By cloning the DS and adding different properties it is even possible to apply the same code to several different folders of the same vCenter, making it simpler (at least for us) to define custom alert exceptions or alert rules (for example, one for "production" VMs, one for "testing" VMs, and so on).

Cisco Info PropertySources
This one goes into some additional detail but hasn't been completely cleaned up for debugging purposes. The ones that have switch stacks pull all the stack serials and model numbers. The versioning is a work in progress. We have about 1500+ network devices across many different models and versions, so this has taken a bit of work to get working across all of them.

- MWXMXZ - Cisco IOS
- FKA79M - Cisco IOS XE
- 9LF63N - NX-OS
- G366DD - Cisco ASA

Add "service snmpd status | grep 'Active'" command output to the device property using PropertySource (Linux Host)
Add "service snmpd status | grep 'Active'" command output to the device property using PropertySource (Linux Host)

Hi all, I am trying to add the output of "service snmpd status | grep 'Active'" to the device properties using a PropertySource (Linux host), but for some reason it's not working. Has anyone tried this? Please help me get the snmpd details into a device property. Thanks, Jnanesh
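Without seeing the script it's hard to say what's failing, but the usual suspects are the parsing and the output format (a PropertySource has to print auto.key=value lines to stdout). A minimal Python sketch of the parsing step, assuming systemd-style "Active:" output; note that a Linux PropertySource in LM would normally do this in Groovy over SSH:

```python
import re

def parse_active_state(status_output):
    """Pull the state out of systemd's 'Active: active (running) since ...' line."""
    m = re.search(r"^\s*Active:\s*(\S+)", status_output, re.MULTILINE)
    return m.group(1) if m else "unknown"

sample = """\
* snmpd.service - Simple Network Management Protocol (SNMP) Daemon.
   Loaded: loaded (/lib/systemd/system/snmpd.service; enabled)
   Active: active (running) since Mon 2022-02-14 09:00:00 UTC; 3 days ago
"""
# A PropertySource would print the result as key=value on stdout:
print("auto.snmpd.status=" + parse_active_state(sample))
```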
Santa Barbara Air Quality DS

If anyone's interested, I threw together a simple DS to pull air quality indices from ourair.org. The data is all around Santa Barbara, which is where our headquarters are. I did it more as an example and for our internal folks who live in that area. PRMARN - SB_AirQuality. Either add ourair.org into your LM portal with a display name of "Air Pollution Control District of Santa Barbara" or change the AppliesTo to apply to an existing device (a collector or some other device). The device it applies to is irrelevant, as the URL is hard-coded into the script.

better windows event sources
We discarded the default modules for Windows events long ago after realizing their filtering was unusable (events are identified by event source AND event ID, not just event ID as the default modules assume). Our modules fix this with a regex that matches both event source and ID, and we reference multiple properties so filters can be defined generally and for specific cases. This allows higher-level values to be overridden if needed, or extended with lower-level values, as needed. I recently updated these to add 2 more filter properties so we can extend or override with better granularity (labeled universal, org, global and local).

Exchange: R7JXYE
System: FAAYZ7
Application: 94ML93

There is more detail in the technical notes (as much as I could fit before hitting undocumented and obscure field-length restrictions). These were just marked for public sharing, so they will need security review as they are using Groovy. One more point -- we do have some global hardcoded filters in at least one of the modules. If that is a problem for anyone, we could add a new property to enable those, leaving them disabled by default.
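To make the source+ID matching concrete, here is a hedged Python sketch of the layered-property idea described above (the property layering, event sources and IDs here are illustrative examples, not the modules' actual values):

```python
import re

def build_filter(*property_values):
    """Join the universal/org/global/local filter properties into one regex.
    Empty levels are skipped; later levels extend (not replace) earlier ones."""
    parts = [p for p in property_values if p]
    return re.compile("|".join(parts), re.IGNORECASE)

def event_matches(flt, source, event_id):
    # Match on "source|id" so the same ID from a different source doesn't collide
    return bool(flt.search(f"{source}|{event_id}"))

# Hypothetical filter values as they might be set at two levels:
universal = r"Microsoft-Windows-DistributedCOM\|10016"
local = r"Schannel\|36887"
flt = build_filter(universal, "", "", local)

print(event_matches(flt, "Schannel", 36887))           # filtered out
print(event_matches(flt, "Application Error", 36887))  # same ID, different source
```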
Display IIS App Pool Process Memory and CPU use with App Pool name as Instance ID

I have been using a PowerShell script for a while now that uses BatchScript collection (so as not to use as much collector resources, as my company has TONS of app pools per box). This links Win32_PerfRawData_PerfProc_Process with Win32_Process using the process ID for discovery. Instead of having an ugly instance name (W3WP#1, W3WP#2, etc.), it will now display the app pool name from the command line. We recycle app pools nightly. Discovery runs every hour as it stands, although collection uses the instance ID matched against the command line, so there are no holes in the data. LM Exchange ID: 2WHYY4
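The renaming trick works because each w3wp.exe worker is launched with its app pool name on the command line (the -ap argument). A small Python illustration of the extraction step (the module itself is PowerShell; the sample command line is a typical shape, not captured output):

```python
import re

def app_pool_from_cmdline(cmdline):
    """w3wp.exe is launched with -ap "PoolName"; use that as the instance name
    instead of W3WP#1, W3WP#2, ..."""
    m = re.search(r'-ap\s+"([^"]+)"', cmdline)
    return m.group(1) if m else None

sample = r'c:\windows\system32\inetsrv\w3wp.exe -ap "MyAppPool" -v "v4.0"'
print(app_pool_from_cmdline(sample))
```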
check sensitive windows groups

In our previous life, we had written a Nagios plugin to check whether a sensitive Windows group had changed (e.g., Domain Admins). I created a replacement for this within LM, but since we can't really keep track of deltas without a key/value store, we use a property for each group that specifies the expected members, which should be updated when membership changes intentionally. We also use a property to list the groups for AD so we can store useful ILPs, but since those ILPs are not passed to the collection script (they could be, they just are not currently passed for PowerShell), the list of groups that can be checked is restricted to what is built into the collection script. For one or more AD controllers, then, you would specify (for example):

windows.groupcheck.list: Domain Admins
windows.groupcheck.spec.Domain_Admins: administrator,alice,bob

If the list diverges, the datapoint for that group will alert. There is also a total count of members that is tracked, which can be used to set an alert if needed (e.g., some groups like Schema Admins should normally be empty, but that can be handled by the spec). 2Y9FM6
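The divergence check itself boils down to a set comparison between the spec property and the live membership. A Python sketch of that logic (the real module is PowerShell, and the returned tuple is just an illustration of possible datapoints):

```python
def check_group(expected_csv, actual_members):
    """Compare the expected-members property against live membership.
    Returns (diverged, missing, unexpected, count) for the datapoints."""
    expected = {m.strip().lower() for m in expected_csv.split(",") if m.strip()}
    actual = {m.strip().lower() for m in actual_members}
    missing = sorted(expected - actual)
    unexpected = sorted(actual - expected)
    return (1 if (missing or unexpected) else 0, missing, unexpected, len(actual))

# windows.groupcheck.spec.Domain_Admins from the post, plus an intruder:
diverged, missing, unexpected, count = check_group(
    "administrator,alice,bob", ["Administrator", "alice", "bob", "mallory"])
print(diverged, missing, unexpected, count)
```

Case-insensitive comparison matters here, since Windows account names are not case-sensitive.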
Datto Backups & Devices

I figured I would share these with anyone who wants them. The first DataSource reaches out to the Datto portal and gathers info on the BCDRs using their REST API. It is ATFZGD. This DataSource pulls basically all the values that are provided, plus a few complex datapoints just to convert KB to GB and get percentages:

Active Tickets
Agent Count
Alert Count
Local Storage Available
Local Storage Used
Offsite Storage Used
Share Count

The second one pulls the same BCDR devices and gets the backup status from them. It is 2KZEKJ. This DataSource pulls the info below. Active discovery is set to every hour so the error message for the backup can be used as an auto property, which we can then pull into the alert message. There are also 2 complex datapoints: one so archived backups in error don't trigger, and another so paused backups in archive don't trigger.

Archived
Last Backup Status
Last Backup Timestamp
Last Offsite
Last Screenshot Attempt
Last Screenshot Attempt Status
Last Snapshot
Paused
Protected Volumes Count
Total Local Snapshots
Unprotected Volumes Count
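The KB-to-GB and percentage complex datapoints amount to simple arithmetic. A sketch, assuming the API reports storage in KB and that percent-used is used / (used + available); both assumptions are worth verifying against the Datto API docs:

```python
def kb_to_gb(kb):
    """Convert reported KB to GB for readability (1 GB = 1024**2 KB)."""
    return kb / 1024 ** 2

def used_pct(used_kb, available_kb):
    """Percent used, treating 'available' as remaining free space."""
    total = used_kb + available_kb
    return 100.0 * used_kb / total if total else 0.0

print(kb_to_gb(5242880))   # 5 GB expressed as 5.0
print(used_pct(750, 250))
```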
Public vs. Private Modules and the new Exchange

@Stuart Weenig There seems to be a number of publicly shared datasources which are no longer available. Here's a link to a bunch more which can no longer be accessed (written by the LM creator himself): /topic/354-dependencies-or-parentchild-relationships/

Dynamic Instance Group Alert Tuning
This is not an advertisement by any means; I'm just offering to help anyone who struggles with this as well. As an MSP, we have struggled with how to handle alert tuning in bulk when it comes to things like interfaces (instances). Some of the interfaces you want to alarm as critical, some as error, and others you don't care about at all. LM provided a partial fix with their Groovy-based "Status" alarm keyed on the interface description, but it didn't take it far enough. We started creating manual interface groups called "Critical" and performing alert tuning on that "parent", only to find out that it doesn't work as interfaces move in and out of it. I was beyond disappointed, but it says it right at the top of the page: Changes made to Alerting or Thresholds will only affect existing instances currently in this Instance Group. Instances added later will not be subject to the changes. Anyway, long story short, we finally decided to write our own application to do it and built it in Azure. We built it to handle multiple datasources so we could group other instances (like VMware vDisks) and do the same bulk changes. It was written to be a datasource in your environment, so that you can apply it to whatever devices you want and just call out to the API with the device name. If you have any interest in using it, let me know. There are costs associated, as Azure bills based on usage, but it is pretty small for us (< $200/mo). Trust me, I wish LM solved this without us having to write the app!
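The underlying classification problem, mapping an interface description to a severity bucket, can be sketched as an ordered list of regex rules. This is the same idea as LM's description-based Status alarm, just generalized; the rules below are entirely hypothetical:

```python
import re

# Hypothetical rules; order matters, first match wins.
RULES = [
    (re.compile(r"\b(uplink|core|wan)\b", re.I), "critical"),
    (re.compile(r"\b(server|trunk)\b", re.I), "error"),
    (re.compile(r"\b(unused|spare|test)\b", re.I), "none"),
]

def classify(description, default="warn"):
    """Map an interface description to the alerting severity its group should get."""
    for pattern, severity in RULES:
        if pattern.search(description or ""):
            return severity
    return default

for desc in ("Uplink to core-sw-01", "spare port", "printer vlan"):
    print(desc, "->", classify(desc))
```

A bulk-tuning tool like the one described would then apply the resulting severity to each instance via the API, which sidesteps the static-instance-group limitation.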
Windows RDS Gateway Stats

Hi all, we are monitoring a server running RDS Gateway Manager (2016). The client would like to see user stats / logon times / durations etc. I have this in as a feature request, but LogicMonitor just does not see the role, I guess. Was just wondering if anyone had any thoughts on this?

PropertySource - Certificate Information
We had to find out who issued the SSL cert on port 443 for a bunch of network devices and servers, so I wrote this: TCPMLH. It pulls the IssuerCN, SubjectCN, ValidFrom and ValidTo info for the certificate. It could easily be modified to look at other ports as well if wanted. It depends on a PropertySource that was listed here a while ago, 'DataSources_List', which I don't have the key for, but I can share the XML if needed.
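For anyone who wants to see the shape of the data, here is a hedged Python sketch of the same idea using the standard library's ssl module (the actual PropertySource is a separate implementation): summarize_cert flattens the dict that ssl.getpeercert() returns into the four fields, and fetch_cert shows how it would be retrieved live.

```python
import socket
import ssl

def summarize_cert(cert):
    """Flatten the dict ssl.getpeercert() returns into the four fields."""
    def cn(rdns):
        return next((v for rdn in rdns for k, v in rdn if k == "commonName"), "")
    return {
        "IssuerCN": cn(cert.get("issuer", ())),
        "SubjectCN": cn(cert.get("subject", ())),
        "ValidFrom": cert.get("notBefore", ""),
        "ValidTo": cert.get("notAfter", ""),
    }

def fetch_cert(host, port=443, timeout=10):
    """Grab the validated peer certificate from host:port."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()

# Offline example in the same nested-tuple shape ssl.getpeercert() produces:
sample = {
    "issuer": ((("commonName", "R3"),),),
    "subject": ((("commonName", "www.example.com"),),),
    "notBefore": "Jan  1 00:00:00 2022 GMT",
    "notAfter": "Apr  1 00:00:00 2022 GMT",
}
print(summarize_cert(sample))
```

Note that getpeercert() only returns the parsed dict when the certificate validates; self-signed device certs would need a permissive context plus manual DER parsing.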
vCenter ESXi Hardware Sensors

This DataSource is the vCenter equivalent of the VMware_ESXi_HardwareSensors one. The new DataSource is called VMware_vCenter_HostHardwareSensors (lmLocator: CDGEFM). It targets system.virtualization =~ "VMware ESX vcenter" and provides hardware sensor instances titled "host name - sensor name", grouped by type.

Discovery Script

import com.santaba.agent.groovyapi.esx.ESX
import com.vmware.vim25.mo.*
import java.security.MessageDigest

def host = hostProps.get("system.hostname");
def user = hostProps.get("esx.user");
def pass = hostProps.get("esx.pass");
def addr = hostProps.get("esx.url") ?: "https://${host}/sdk";

// We are confident we can parse names for these devices
def deviceSensorTypes = ["fan", "storage", "power", "chassis", "voltage", "battery",
    "processors", "watchdog", "cable/interconnect", "memory", "slot/connector",
    "system", "boot", "logging", "management subsystem health", "temperature",
    "other", "platform alert", "chip set"];

def svc = new ESX();
svc.open(addr, user, pass, 10 * 1000); // timeout in 10 seconds
def si = svc.getServiceInstance();
def rootFolder = si.getRootFolder();

// Get ESX hosts
HostSystem[] esxHosts = new InventoryNavigator(rootFolder).searchManagedEntities("HostSystem")
esxHosts.each { esx ->
    // Find and iterate over host sensors
    def numericSensors = esx?.runtime?.healthSystemRuntime?.systemHealthInfo?.numericSensorInfo;
    numericSensors.each() { sensor ->
        if (sensor.sensorType != "Software Components") {
            def name = sensor.name;
            // Check the name for a device format. Strip alert status from the name
            // Example: "[Device] Processor 0 CPU1 UPI Link 0: Config Error - Deassert"
            if (deviceSensorTypes.contains(sensor.sensorType.toLowerCase())) {
                // Remove status suffix
                name = name.split(" -").first();
                // Clean up the [Device] prefix that shows up some times.
                if (name.startsWith("[Device]")) {
                    name = name.replace("[Device]", "");
                }
            }
            name = esx.name + ' - ' + name.trim().capitalize();
            def wildvalue = MessageDigest.getInstance("MD5").digest(name.bytes).encodeHex().toString();
            def properties = [:];
            properties["sensor_type"] = sensor.sensorType.capitalize();
            properties["host"] = esx.name;
            // Generate a unit string
            if (sensor.baseUnits) properties["sensor_units"] = sensor.baseUnits;
            if (sensor.baseUnits && sensor.rateUnits) properties["sensor_units"] = "${sensor.baseUnits}/${sensor.rateUnits}"
            println "${wildvalue}##${name}######${properties.collectEntries { k, v -> ["auto.${k}=${v}"] }.keySet().join('&')}"
        }
    }
}
return 0;

Collection Script

import com.santaba.agent.groovyapi.esx.ESX
import com.vmware.vim25.mo.*
import java.security.MessageDigest

def host = hostProps.get("system.hostname");
def user = hostProps.get("esx.user");
def pass = hostProps.get("esx.pass");
def addr = hostProps.get("esx.url") ?: "https://${host}/sdk";

// We are confident we can parse names for these devices
def deviceSensorTypes = ["fan", "storage", "power", "chassis", "voltage", "battery",
    "processors", "watchdog", "cable/interconnect", "memory", "slot/connector",
    "system", "boot", "logging", "management subsystem health", "temperature",
    "other", "platform alert", "chip set"];
def sensorStateMap = ["green": 1, "yellow": 2, "red": 3];

def svc = new ESX();
svc.open(addr, user, pass, 10 * 1000); // timeout in 10 seconds
def si = svc.getServiceInstance();
def rootFolder = si.getRootFolder();

// Get ESX hosts
HostSystem[] esxHosts = new InventoryNavigator(rootFolder).searchManagedEntities("HostSystem")
esxHosts.each { esx ->
    // Find and iterate over host sensors
    def numericSensors = esx?.runtime?.healthSystemRuntime?.systemHealthInfo?.numericSensorInfo;
    numericSensors.each() { sensor ->
        if (sensor.sensorType != "Software Components") {
            def name = sensor.name;
            // Check the name for a device format. Strip alert status from the name
            // Example: "[Device] Processor 0 CPU1 UPI Link 0: Config Error - Deassert"
            if (deviceSensorTypes.contains(sensor.sensorType.toLowerCase())) {
                // Remove status suffix
                name = name.split(" -").first();
                // Clean up the [Device] prefix that shows up some times.
                if (name.startsWith("[Device]")) {
                    name = name.replace("[Device]", "");
                }
            }
            name = esx.name + ' - ' + name.trim().capitalize();
            def wildvalue = MessageDigest.getInstance("MD5").digest(name.bytes).encodeHex().toString();
            println "${wildvalue}.state=${sensorStateMap.get(sensor.healthState.key, 0)}";
            println "${wildvalue}.reading=${sensor.currentReading}";
            println "${wildvalue}.modifier=${sensor.unitModifier}";
        }
    }
}
return 0;

Salesforce Status Page
Hi all, I have been looking around for a datasource/check that can gather Salesforce status page info from here: https://status.salesforce.com/instances/EU26. I did see the Statuspage.io stuff, and that works great, but Salesforce unfortunately doesn't publish there. I've not found anything as of yet, so if anyone has something already made it would be much appreciated; if not, a pointer to a good starting point would be equally welcome. Many thanks, Kyle
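One possible starting point: Salesforce exposes the per-instance status as JSON (at the time of writing, under api.status.salesforce.com; worth verifying against the status page yourself). Whatever the exact endpoint, the collection step reduces to mapping the returned status string to a number a datapoint can alert on. A sketch with a made-up severity mapping and sample payload:

```python
import json

def status_datapoint(payload, status_map):
    """Map the instance 'status' string to a number LM can graph and alert on."""
    body = json.loads(payload)
    return status_map.get(body.get("status"), -1)  # -1 = unrecognized status

# Hypothetical mapping and sample payload; check the real API's status strings.
STATUS_MAP = {"OK": 0, "MAINTENANCE_NONCORE": 1,
              "MINOR_INCIDENT_CORE": 2, "MAJOR_INCIDENT_CORE": 3}
sample = '{"key": "EU26", "location": "EMEA", "status": "OK"}'
print("InstanceStatus=" + str(status_datapoint(sample, STATUS_MAP)))
```

Returning a distinct "unknown" value (-1 here) is useful so a new status string Salesforce introduces shows up as a gap rather than silently mapping to healthy.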
PaloAlto 'apikey' PropertySource

Hello! I've created a PropertySource (PowerShell script) that will automatically retrieve and populate the 'paloalto.apikey.pass' property on Palo Alto firewalls (since a bunch of datasources require that key). This is easier than retrieving the API key manually and then creating the custom property for each firewall. It makes use of the SSH credentials and also requires an LM API key in order to actually PATCH the device in question. Sharing this with everyone in case it is useful for you as well. I've tried to publish it in LM Exchange, but I'm getting the error below. I'm new to LM, so excuse me if I'm being a noob and missing an obvious thing. I've shared the PS script on GitHub -> https://github.com/vitor7santos/LogicMonitor.git. Feel free to use it and let me know your comments/suggestions/etc. Regards,
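For reference, the firewall side of this is the PAN-OS key-generation call (/api/?type=keygen&user=...&password=...), which returns the API key inside an XML envelope. A Python sketch of the parsing step (the actual script is PowerShell, and the sample key below is fake):

```python
import xml.etree.ElementTree as ET

def parse_keygen_response(xml_text):
    """PAN-OS returns the API key inside <response><result><key>; pull it out,
    or raise if the firewall reported an error."""
    root = ET.fromstring(xml_text)
    if root.get("status") != "success":
        raise RuntimeError("keygen failed: " + xml_text)
    return root.findtext("./result/key")

# Fake sample response in the documented envelope shape:
sample = "<response status='success'><result><key>FAKEKEY123</key></result></response>"
print(parse_keygen_response(sample))
```

The LM side is then a PATCH of the device's custom properties via the REST API, as in the linked script.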
NetApp Cluster Health

We found out the hard way this past weekend that the current NetApp DS suite is missing a crucial check for cluster member health. You can deploy it from here (once the code is reviewed -- hopefully quickly, as it is just a clone with a different query and datapoint set): FJTRGL

Infoblox Module; Which Appliances?
For those utilizing the Infoblox module, which appliances are you adding into LogicMonitor? Our Infoblox environment has 2 HA virtual IPs in front of 2 nodes each, 2 individual Azure appliances, a reporting appliance, and a Network Insights appliance. When adding all of them, I see that the VIP shows 2 nodes, which makes me think we only need to monitor the 2 VIPs (not the HA IPs, but the VIP in front of the 2 nodes), the 2 Azure appliances, reporting, and Network Insights. Is this what others are adding to their LogicMonitor instance?

Event Source for log file monitoring
We're looking to have log file monitoring for the *.rpt file extension and SQL log files. LM does not appear to support anything out of the box other than .log and .txt. Has anyone done this via script with other file types on Windows? If so, can you share your solution?
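Absent an out-of-the-box option, a scripted EventSource can glob for whatever extensions you care about and emit matching lines. A hedged Python sketch of the scanning logic (a real check would also persist a per-file offset between polls so events aren't re-emitted, and LM's scripted EventSource output format has its own requirements not shown here):

```python
import re
import tempfile
from pathlib import Path

def scan_logs(directory, patterns=("*.rpt", "*.log"), match=r"ERROR|FATAL"):
    """Return (filename, line) pairs for lines matching the error regex.
    A scripted EventSource would print these as events instead of returning them."""
    rx = re.compile(match)
    hits = []
    for pattern in patterns:
        for path in sorted(Path(directory).glob(pattern)):
            for line in path.read_text(errors="replace").splitlines():
                if rx.search(line):
                    hits.append((path.name, line))
    return hits

# Self-contained demo against a throwaway directory:
demo = tempfile.mkdtemp()
Path(demo, "report.rpt").write_text("job started\nERROR: export failed\n")
print(scan_logs(demo))
```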