Recent Discussions
Participants needed for UX and Product research
I am a UX Designer currently conducting research on the user experience of LogicMonitor, to help the Product and Design teams better prioritize improvements and product direction. I was hoping that some of you might be interested in participating in my research. I am interested in talking to anyone who interacts with LogicMonitor, from users who only occasionally use it to those who spend most of their day in the product. This is a quick, 30-minute interview, at your convenience, over Zoom. I will ask you questions about the way you work, your usage of LogicMonitor, and your impressions of the product. You will also have a chance to call out any additional problems or needs with LogicMonitor. None of the questions are about personal or private topics. This is not a Marketing or NPS survey; its purpose is entirely product planning, and any data collected will be kept confidential and stripped of identifying information. If you are interested, please let me know and I will reach out to you about setting up the session.
Keith_S · 2 years ago · Former Employee

HPE Aruba Orchestrator 9.3 (Formerly Silver Peak) early adopters?
In our lab, we’re working on a new suite of HPE Aruba EdgeConnect SD-WAN modules using Aruba Orchestrator 9.3 and ECOS 9.3. If you happen to be on Orchestrator 9.3 and would like to participate in the R&D process, please send me a DM. Thanks.
https://www.arubanetworks.com/techdocs/sdwan/
Patrick_Rouse · 2 years ago · Product Manager

Accessing the LogicMonitor REST API with Postman and LMv1 API Token Authentication
Introduction

Postman is widely used for interacting with various REST APIs such as LogicMonitor's. However, there is no out-of-the-box support for the LMv1 authentication method, which we recommend as a best practice. This document describes how to configure Postman to use LMv1 authentication when interacting with our REST API.

Overview

Postman's pre-request script functionality provides the ability to generate the necessary Authorization header for LMv1 authentication. As the name suggests, the pre-request script runs immediately before the request is made to the API endpoint. We set the pre-request script at the collection level in Postman so that it runs automatically for every request that is part of the collection. The script requires three input parameters: a LogicMonitor API token (or ID), its associated key, and the full request URL. These parameters are made available to the script by creating a Postman environment and setting the values as environment variables. If you need to access multiple LogicMonitor accounts (portals), create a separate environment for each to store the applicable API and URL information. Since all API requests to a given account use the same base URL (https://<account>.logicmonitor.com/santaba/rest), it is convenient to store this as an environment variable. The output of the script is the value of the Authorization header. The script writes the header value to an environment variable, which is then inserted as the Authorization header value in the request.

Instructions

1. Download and install Postman.
2. Launch Postman and create a new collection that will be used for all LogicMonitor API requests.
3. In the create collection dialog, select the "Pre-request Scripts" section and paste in the following code.
// Get API credentials from environment variables
var api_id = pm.environment.get('api_id');
var api_key = pm.environment.get('api_key');

// Get the HTTP method from the request
var http_verb = request.method;

// Extract the resource path from the request URL
var resource_path = request.url.replace(/(^{{url}})([^\?]+)(\?.*)?/, '$2');

// Get the current time in epoch format
var epoch = (new Date()).getTime();

// If the request includes a payload, include it in the request variables
var request_vars = (http_verb == 'GET' || http_verb == 'DELETE') ?
    http_verb + epoch + resource_path :
    http_verb + epoch + request.data + resource_path;

// Generate the signature and build the Auth header
var signature = btoa(CryptoJS.HmacSHA256(request_vars, api_key).toString());
var auth = "LMv1 " + api_id + ":" + signature + ":" + epoch;

// Write the Auth header to the environment variable
pm.environment.set('auth', auth);

4. Create a new environment. Create the environment variables shown below. You do not need to provide a value for the "auth" variable, since this will be set by the pre-request script. Be sure to use the api_id, api_key, and url values appropriate for your LogicMonitor account.
5. Create a request and add it to the collection with the pre-request script. A sample request is shown below with the necessary parameters configured.
   1. Set the environment for the request.
   2. Set the HTTP method for the request.
   3. Use {{url}} to pull the base URL from the environment variable. Add the resource path and any request parameters your API request may require.
   4. Add the Authorization header and set the value to {{auth}} to pull the value from the environment variable.
   5. POST, PUT, and PATCH requests only: if your request includes JSON data, be sure to select the Body tab and add it.
6. Press Send to send the request. The response will appear below the request in Postman.
Troubleshooting

You receive the response "HTTP Status 401 - Unauthorized". Confirm the following:
• The proper environment has been specified for the request.
• The necessary environment variables have been set and their values are correct. Note that the script relies on the specific variable names used in this document: "api_id", "api_key", "url", and "auth".
• The request is a member of the collection configured with the pre-request script.

Postman reports "Could not get any response" or "There was an error in evaluating the Pre-request Script: TypeError: Cannot read property 'sigBytes' of undefined". Make sure you have set the proper environment for the request and that all necessary environment variables and values are present.
Kurt_Huffman · 7 years ago · Former Employee

Palo Alto Prisma SD-WAN (formerly CloudGenix)
We have developed new Prisma SD-WAN modules that use the Unified SASE SD-WAN API to monitor ION performance, health, tunnels, and more. We’re looking for customers who already monitor their ION devices via SNMP and would be interested/willing to work with us to verify that the data we’re collecting via Palo Alto’s API matches what we get with SNMP. Two requirements:
• You are currently monitoring discrete ION devices via SNMP.
• Your CloudGenix portal has been migrated to Prisma Cloud.
If you meet these requirements and would like to be considered for pre-release environment verification, please DM me. This pre-release testing would involve LM running some Palo Alto Prisma Unified SASE SD-WAN API calls to compare the results against what we get from SNMP. This does not involve/require adding modules to your portal. However, after this environment verification, we’d be happy to work with you as an early adopter of the new modules.

Cisco Catalyst SD-WAN monitoring (formerly Viptela)
We are currently testing pre-release modules for Cisco Catalyst SD-WAN that leverage Cisco’s SD-WAN Bulk API.
https://developer.cisco.com/docs/sdwan/#!bulk-api/bulk-api
If you would be interested in working with us to validate their functionality, please feel free to reach out to me via DM. To be a candidate, your SD-WAN Controllers should be at version 20.6 or greater, where the current version is 20.12. Below are some screenshots of these pre-release modules running against the Cisco DevNet SD-WAN Sandbox environment.

Meraki Cellular Gateways and Sensors
We’re planning R&D that aims to monitor Meraki MG Cellular Gateways and MT Sensors and to give them their own Topology Map graphics. Please DM me if you use either of these types of Meraki devices and would like to participate in the R&D process.
Patrick_Rouse · 2 years ago · Product Manager

New VMware modules dropped
Did anybody else notice the ~44 new and ~5 updated VMware modules dropping in the last hour or so? Does anyone know how to implement these new modules? Since there was talk of making the instances into resources, I don’t want to just bring them in without knowing how it’s going to mess with my device list (which is tightly bound to billing for us).
Anonymous · 2 years ago

Device and Alert counts per group
I actually wrote a first version of this back in 2016 but for whatever reason didn't post it here, so you may be amongst the 30+ customers using older versions of these modules. However, these, at v3, now use LM API v3 (no major change from v2), have additional diagnostic datapoints in the event of code failure, more intelligent alerting on failure, slightly revised graphs, and a substantially cleaner set of code. And I figured it was time to write this up. I've kept any prior datapoint names unchanged, along with the actual DS names, so import *shouldn't* cause any issues with historical data, dashboards, alert rules, etc., but as ever it's possible I missed something, so please exercise the usual cautions when importing any updated DataSource. The DS Display Names have changed to "LogicMonitor ..." to align with our core portal metrics DataSources, and the AppliesTo now also aligns with those core modules (previously, in Exchange, these modules were saved with an AppliesTo of false(), for you to determine application after import).

What: API-calling DataSources to (as the names suggest) allow you to track device and alert counts for any resource groups you choose within your LogicMonitor platform.

Why: This finds particular application for MSPs wanting to track platform usage and alert load per end customer, for example to ensure said customers are being billed appropriately. However, they are also in use with large enterprise customers, to track LM consumption per region, business unit, functional team, etc.

You will need: LogicMonitor API credentials set as Resource Properties for whichever Resource(s) you apply either or both modules to (by default, your '<accountName>.logicmonitor.com' portal resource, if you've set one up).
These modules accept the same properties as our core LogicMonitor_Portal_xxx modules and will accept lmaccess.id, logicmonitor.access.id, or apiaccessid.key for the API token ID; and lmaccess.key, logicmonitor.access.key, or apiaccesskey.key for the token key (in those orders of preference). The script will take the account name directly from collector settings.

Then what?

Whether you use the Active Discovery or Manual Instances version (or indeed both) will depend on which groups you want to monitor.

Active Discovery option:

If there is a programmatic way to determine groups, use the AD version. This is by far preferred, as it's then "fit and forget" and will find newly-matching groups as they come into being. You do this by creating a new Resource Property:

deviceAndAlertCount.groupsFilter

...with a value that is a valid API filter for the /device/groups API call.

Example: Imagine you're an MSP with a folder structure where you have a "Customers" resource group, with per-customer groups within that. You'll likely want to monitor totals for each customer's top-level group, but you likely won't care about the specific breakdown per subgroup within a customer (e.g. you likely won't care about the count specifically within the "Cisco UCS" group under the GDI group). In the above example, your API filter could be:

fullPath:"Customers/*",fullPath!:"*/*/*"

...i.e., all groups with a fullPath starting "Customers/" but excluding any group whose fullPath contains two or more slashes. Or, e.g.:

parentId:"[system.deviceGroupId of the Customers Group]"

You can test your proposed filters by running the AD script code in the Collector debug. If you don't set any filters, the script will fall back to only creating an instance for the root folder of your account (groupId of 1). Any valid API filter(s) can be used within this property value, for example on fullPath, name, id, parentId, etc.
See https://www.logicmonitor.com/swagger-ui-master/dist/#/Device Groups/getDeviceGroupList for expected fields. Note that instances are persistent, to accommodate (a) people breaking the property value, and (b) e.g. customers leaving, customer folders getting accidentally deleted, etc. You can manually delete instances if necessary.

Manual Instances version:

If you really can't come up with a sensible programmatic way to grab the groups you want (and there may be times when this is true), the manual version allows you to set arbitrary instances. Apply the DataSource as usual; on any applicable resource it will then appear in the list of modules you can add instances for, under the "Add Monitored Instance" option for the Resource (Manage → Add Monitored Instance). Select the DataSource from the list, then fill in the fields and save.

The Name field can contain pretty much whatever you like. Typically you'd add the <groupName>, or <path>/<groupName>, for example, but whatever works for you is OK. This is how the instance will be seen within the device tree. The Wildcard value must be the numeric group ID (system.deviceGroupId) of the Group to be monitored. Anything non-numeric, or a number that does not exist as a group ID, will cause an error (as you'd expect) and an absence of useful data (as you'd also imagine). The Description is optional. Repeat as often as you like for whatever groups you need.

Update 2022-10-24: Version 3.1 includes Kubernetes counts, as these are now returned in the API response.
Update 2023-07-07: Version 3.2 corrects a collection script error that was causing the double-counting of alerts that were both ACKed *and* SDTed, which in turn led to an under-counting of outstanding alerts.
Update 2023-11-15: Version 3.3 corrects a bug in the rate limit retry routine, which almost no-one will ever have hit.
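As a quick illustration of what such a filter looks like on the wire, here is a minimal Node.js sketch of the device-groups request URL the filter would apply to. This is hypothetical helper code, not part of the modules themselves; the "acme" account name and the groupsUrl function name are placeholders. The point it demonstrates is that the filter value must be URL-encoded when passed as a query parameter.

```javascript
// Minimal sketch: composing a /device/groups request URL with a filter,
// to help test proposed filter values. "acme" is a placeholder account name.
function groupsUrl(account, filter) {
  const base = `https://${account}.logicmonitor.com/santaba/rest`;
  // The filter string contains quotes, slashes and commas, so it must be encoded.
  return `${base}/device/groups?filter=${encodeURIComponent(filter)}`;
}

// All groups directly under "Customers/", excluding deeper subgroups:
const filter = 'fullPath:"Customers/*",fullPath!:"*/*/*"';
console.log(groupsUrl('acme', filter));
```

Running your candidate filter through the AD script in the Collector debug (as described above) remains the authoritative test; this sketch is just for eyeballing the encoded request.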
LM Exchange Locators:
Active Discovery version, v3.3: MK2CCH
Manual Instances version, v3.3: X946LX
Antony_Hawkins · 3 years ago · Employee

Excluding VMware VMs from instance discovery
When we add a vCenter into LogicMonitor, the VMs in its managed clusters are discovered as instances underneath datasources applied to the vCenter, like:
VMware VM Status
VMware VM Snapshots
VMware VM Performance
Sometimes there are VMs that we have no interest in monitoring, so we don’t want them to be picked up by these datasources. At the moment, we’re manually adding an Instance Group, putting those VMs in the group, and then disabling alerts, which is quite a manual process. Ideally we’d like LM to not discover VMs that have had a specific tag/value applied to them in vCenter. I think we should be able to do this by modifying the Groovy script used for Active Discovery on these datasources, but I’m not sure how to go about that. Has anyone managed to do something similar?
Dave

Can I monitor vCenter tags and create an alert if a computer doesn't have one?
Hi, We use vCenter to manage our VMs, and we have the hosts in LM. We currently have a process where we get an email every morning listing VMs that don’t have any tags. We use tags to manage backup schedules and other things, so not having any tags is bad. Anyway, I’m wondering if that’s something we could use LM to monitor. I don’t need to confirm what the tag is; I just need to know if any VM doesn’t have any tags at all. Is that something we can do with the built-in checks LM does, or is it something that would have to be created by hand? Thanks.