Meraki Monitoring
Hello, I was wondering if anyone out there would be able to help me out with the proper way to monitor multiple Meraki devices. The use case is to monitor one Meraki firewall and multiple Meraki switches across multiple sites. However, from my understanding they are all behind one cloud access URL. What would be the best course of action to monitor these devices and get individual device statistics? Thanks in advance.

How to monitor a UNC path to make sure a new file is added to the folder at 10am every day

I need to monitor a UNC path to make sure a new file is added to the folder at 10am every day. How can I set LogicMonitor to alert us when a new file has NOT been added to the folder by 10am? Thanks.
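One way to approach this is a scripted DataSource that lists the folder shortly after the deadline and reports whether any file has appeared since midnight; the datapoint can then alert on a zero value. A minimal sketch of the check logic in Python (the folder path would be the UNC path in practice):

```python
import os
from datetime import datetime, time

def new_file_arrived(folder, now=None):
    """Return True if any file in `folder` was modified since midnight today.

    Run shortly after the 10:00 deadline: if this returns False, no new
    file has arrived today and the DataSource should alert.
    """
    now = now or datetime.now()
    midnight = datetime.combine(now.date(), time.min)
    for name in os.listdir(folder):
        path = os.path.join(folder, name)
        if os.path.isfile(path) and datetime.fromtimestamp(os.path.getmtime(path)) >= midnight:
            return True
    return False
```

The same logic ports directly to a Groovy or PowerShell collector script against `\\server\share` paths.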

DataSource Dependencies

With the issues we've been having with collector resource exhaustion, I've been thinking about ways to reduce the number of DataSources that run on the collectors at any given time. It occurs to me that if a host is down, all of its DataSources still try to run against it and have to await timeout before releasing their resources on the collector. I'd like to suggest that a host-down status should issue a partial SDT for the device that would prevent all but the host status DataSources from running against it. The host status change could then remove the SDT once it has cleared. This would prevent devices in environments that spin them up/down based on load rather than a schedule from taking up too many collector system resources during their downtime. It would also help alleviate strain during larger outages while still providing just the actionable information necessary to address the situation. Cole_McDonald · 5 years ago

Add device to Collector with least devices in REST API

I hope someone here can help. Using PowerShell and the REST API, I would like to look up a set of collectors, find which has the least devices on it, and then add a new device to it. I have the script for adding the device and that works great; I just need the collector lookup. Has anyone done something similar? Thank you in advance for all help provided. Paul (Solved)
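The selection step is simple once you have the collector list back from `/setting/collector/collectors`. A sketch of the logic (in Python rather than PowerShell for brevity; the device-count field name `numberOfHosts` is an assumption — verify it against your LM API version):

```python
def least_loaded(collectors):
    """Pick the collector record with the fewest devices on it.

    Each item is a dict from the LM collectors API; the device count is
    assumed to live under 'numberOfHosts' (check your API docs).
    """
    return min(collectors, key=lambda c: c.get("numberOfHosts", 0))
```

In PowerShell the equivalent would be `$collectors | Sort-Object numberOfHosts | Select-Object -First 1`.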

PagerDuty Integration? I'm stuck...

Howdy, I'm trying to set up the integration between LM and PD and I'm stuck at creating the user. I'm going through this LM document: https://www.logicmonitor.com/support/alerts/integrations/pagerduty-integration/ and this PD document: https://www.pagerduty.com/docs/guides/logicmonitor-integration-guide/ I have everything set up, but I'm not sure where in LM to add this integration as a contact method for a user in an escalation chain to actually put it into use. Can anyone help me figure out where to go next? Thanks!

How to display the date?

Hi, I would like to know how to display the date and the number of days between two dates. For example, I want to display today's date or the current timestamp. Is there a function (like getdate() in SQL) to fetch the date/timestamp in any of the widgets? I also want to know how to calculate and display the number of days between two particular dates. Please help me with this. Thank you.
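LM widgets don't expose a SQL-style getdate(), but a scripted DataSource can emit the current timestamp and a day count as datapoints for a widget to display. The date arithmetic itself is a one-liner; a sketch:

```python
from datetime import date

def days_between(a: date, b: date) -> int:
    """Whole days between two dates, regardless of argument order."""
    return abs((b - a).days)

# Current date / timestamp:
today = date.today()
```

A Groovy collector script would use the analogous `new Date()` and millisecond subtraction.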

Internal check with Groovy script to telnet to SMTP port 25

Hi everyone, I was referring to https://www.logicmonitor.com/support/terminology-syntax/scripting-support/groovyexpect-text-based-interaction/ where there is an example Groovy script described as "Telnet/Port Connectivity: Test FTP Server Availability". However, I'm specifically looking to telnet to multiple hosts over port 25 from different devices. For example: create a DataSource, add multiple collectors (the devices I want to telnet from), and have the Groovy script execute a telnet over port 25 from each separate collector to the specific hosts. I'm not sure how I would achieve this. Any idea/suggestion/reference would be appreciated. Thanks, Sam
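The core of the check is just a TCP connect with a timeout, repeated per target host; each collector that the DataSource is applied to would run the same test from its own vantage point. A sketch of that connectivity test (shown in Python; the LM DataSource itself would typically be Groovy, e.g. using `Socket` or the Expect helper from the linked article):

```python
import socket

def port_open(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For SMTP specifically you could go one step further and read the `220` banner after connecting before declaring the port healthy.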

Can't turn alerts off for specific instances with Ansible

Hi everyone, the LogicMonitor module for Ansible has the parameter 'alertenable' for use with target 'datasource' to turn off alerts for a specific instance in a datasource group. However, it's not working for me. Has anyone ever tried to use that parameter? https://docs.ansible.com/ansible/2.4/logicmonitor_module.html

```yaml
---
- hosts: localhost
  vars:
    company: mycompany
    user: myusername
    password: mypassword
  tasks:
    - name: Create a host group
      logicmonitor:
        target: datasource
        action: update
        displayname: server.com
        alertenable: no
        id: 123
        company: '{{ company }}'
        user: '{{ user }}'
        password: '{{ password }}'
```

Getting the max value of the day from the datapoint in a table widget for capacity planning

Hi, I am trying to display the max value of the day (last 24 hours) from the datapoints in a table widget on a dashboard, but it's showing the current datapoint value. Is there any method to achieve this? I'd also like to understand whether any other widget can display the max value of the day on a dashboard.
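If you end up pulling the raw datapoint history yourself (e.g. via the REST API into a script or report), the aggregation side is straightforward: keep only samples inside the trailing 24-hour window and take the max. A sketch:

```python
from datetime import datetime, timedelta

def max_last_24h(samples, now=None):
    """samples: iterable of (datetime, value) pairs.

    Returns the maximum value observed in the last 24 hours,
    or None if the window is empty.
    """
    now = now or datetime.now()
    cutoff = now - timedelta(hours=24)
    window = [v for t, v in samples if t >= cutoff]
    return max(window) if window else None
```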

Cluster Counter Conundrum: DataSources and Devices

We are in a Microsoft clustered environment. The cluster hosts have multiple vNICs and therefore multiple IPs. Each of those IPs has a DNS entry associated with it. As such, LM sees them as separate entities and creates devices for each of them. Every WMI counter that wants to check on the host is also firing for each of the vNICs. The DNS entries associated with the secondary vNICs on the host each relate to a customer environment. We'd like to be able to present each customer's metrics to them (ultimately server CPU, MEM, NET) for their part of that resource in their dashboards. Can anyone think of a way to get LM to not duplicate WMI effort on those servers? Is it just a matter of waiting for the dependency mapping to come out and start alleviating those issues? Do I need to write something to prevent them from collecting that data? If I do so, will that prevent the customer's dashboard from presenting the data there? I sound like the end of an old-timey episodic radio show. Stay tuned next week :) Cole_McDonald · 5 years ago

HTTPS/SSH remote access authentication

I am wondering if there is a roadmap for remote access authentication via SSH and HTTPS. I have multiple clients to monitor, and I know several monitoring providers have a remote-access feature that authenticates using the credentials already leveraged for configuration polling. Currently I can open a remote SSH session to a network device, but I still have to enter the credentials even though they are already in LM. I am new to the LM community but figured I would ask. Thanks

Linux Zombie Process Count

This comes up occasionally as a customer request: can we count zombie processes on Linux servers? Yes we can... locator code: KYNDEH. This DataSource relies on SSH connectivity to the server, and therefore on the Linux SSH PropertySource (in core). Once connected, it runs the command 'ps axo pid=,stat=' and then runs through the table counting all the occurrences of 'Z' in the stat column. :) Antony_Hawkins · 6 years ago
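The parsing step described above is simple to reproduce: each line of `ps axo pid=,stat=` is a PID followed by a stat string, and a zombie's stat starts with `Z` (flag characters such as `+` or `s` may follow). A sketch of the counting logic (in Python; the actual DataSource does this in Groovy over SSH):

```python
def count_zombies(ps_output: str) -> int:
    """Count zombie processes in the output of `ps axo pid=,stat=`.

    A zombie's stat field begins with 'Z'; extra flag characters may follow.
    """
    count = 0
    for line in ps_output.strip().splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[1].startswith("Z"):
            count += 1
    return count
```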

Cisco Router Throughput License Usage

Tossed this together today to track throughput license usage on platforms that license maximum levels (e.g., ISR4K), as the impact of exceeding this can be otherwise tricky to identify. Definitely could use more work, but a decent starting point. Locator code: 7ZYRDH. mnagel · 6 years ago

"Value" field breaks CSV Alert Report in some cases

The Value field for some alerts, most notably Windows Event Logs, is not handled correctly when exporting alert history to CSV using the Reports function. For example, the following excerpt shows what should be 4 rows from a CSV Alert Report containing both valid and invalid row data in the Value field:

```
Severity,Group,Device,Datasource,Instance,Datapoint,Thresholds,Value,Began,End,Rule,Chain,Acked,Acked By,Acked On,Notes,In SDT
warn,GroupName,DeviceName,Organizational Units,ActiveDirectory_OrganizationalUnits,AnyChange,,A change was made to the configuration file,2019-09-10 20:17:39 MST,2019-09-10 20:20:49 MST,N/A,N/A,yes,AckuserName,2019-09-10 20:20:49 MST,ACK,no
warn,GroupName,DeviceName,Windows Domain Services Event Log,Windows Domain Services Event Log,4725,,"A user account was disabled.
Subject:
  Security ID: Masked_SID
  Account Name: Masked_Account
  Account Domain: Masked_Domain
  Logon ID: Masked_ID
Target Account:
  Security ID: Masked_SID
  Account Name: Masked_Account
  Account Domain: Masked_Domain",2019-09-12 14:42:12 MST,2019-09-12 14:53:42 MST,N/A,N/A,yes,AckUserName,2019-09-12 14:53:42 MST,ACK,no
warn,GroupName,DeviceName,Windows System Event Log,Windows System Event Log,7023,,"The Interactive Services Detection service terminated with the following error: Incorrect function.",2019-09-27 02:04:48 MST,2019-09-27 03:05:09 MST,AlertRule,EscalationChain,no,,,,no
```

HTML and PDF reports don't appear to have this issue.

Hyper-V 'Sources: WMI vs. Batchscript

We have far more WMI requests than I'd like to see on our collectors. Does anyone know if a batchscript 'Source uses fewer TCP sockets/ephemeral ports to perform data gathering than a WMI-based 'Source? The Hyper-V metrics are fairly aggressive in our environment. Cole_McDonald · 6 years ago

dependencies, again

We continue to do battle with LM when alerts trigger due to dependent resource outages. I know the topology mapping team is working on alert suppression, but I am not convinced that will solve all problems regardless of how well they succeed. We really need a way to set up dependencies within logic modules, and it should not need dozens of lines of API code each time (most of which should be made available as a library function, IMO). One fresh specific example: a site with multiple firewalls in a VPN mesh running BGP. One firewall goes down, then all the other firewalls report BGP is down. We care about BGP down, so we have alerts trigger escalation chains. It should be possible to define a dependency in the datapoint that suppresses the alert if the remote peer IP is in a down state. There is no way to express this in LM right now; that leads to many alerts in batches, and that leads to numb customers who ignore ALL alerts. mnagel · 6 years ago

Snapshot 'Age'

Looking for something unique here. I am familiar with the NetApp_Cluster_Snapshots DataSource (locator: MHNTRC). I would like to set up an alert if:

1. No snapshot has been taken in the last XX hours (to ensure Snapshot/SnapManager is working properly)
2. A snapshot exists that is older than XX hours (retention)

Thoughts, guidance? WillFulmer · 6 years ago

Add Cloud Account (AWS) from API

Greetings all, I thought someone would have asked this, but I was unable to find anything via search. Is there an API path that can be used to add an AWS cloud account? I can currently fully spin up a new AWS account, provision collectors, etc., but I can't seem to find a way to add the AWS account and populate the created IAM role into LogicMonitor. Unless I'm missing something in the swagger/docs. Anyone know if this is supported? Thanks! (Solved)

Cloud Collector to Consume AWS RDS Enhanced Monitoring

It looks like someone in my org had enabled "Enhanced Monitoring" for several AWS RDS instances (a surprise, to be sure, but a welcome one). I would love a Cloud Collector method that can consume this data and display it alongside all the other metrics we are collecting in LogicMonitor. Implementation should be relatively simple. In the discovery, presumably using describe-db-instances, we would just need a system.aws* property for the "dbiresourceid", which can then be used with get-log-events.

REST API endpoint device/groups/{id}/devices

We have a regional (Americas, EMEA, APAC) hierarchy in our Resources structure, and would like to be able to query via the API the total number of resources at a group level. Currently the API only returns items if the group has direct child resources; if the group is there purely for taxonomy, it returns no items. We would like to be able to call an endpoint similar to /device/groups/{id}/devices which would return a total count of resources for the group, including all sub-groups. Our use case is cross-charging the regions for LogicMonitor costs, and for this we need to automate counting the resource items for each region. Basically we want to get the same count we currently see in a resource group's tooltip, but via the API. Mosh · 6 years ago
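Until such an endpoint exists, the usual workaround is to walk the group tree yourself: fetch each group's direct devices and sub-groups, and sum recursively. A sketch of the recursion over an already-fetched tree (the dict shape here is hypothetical; in practice each level comes from separate `/device/groups/{id}` calls):

```python
def total_devices(group):
    """Recursively count devices in a group and all of its sub-groups.

    `group` is assumed to be {"devices": [...], "subgroups": [group, ...]};
    adapt the key names to whatever your fetch layer returns.
    """
    count = len(group.get("devices", []))
    for sub in group.get("subgroups", []):
        count += total_devices(sub)
    return count
```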

How to Monitor CRC Errors

This isn't a question, but I didn't see anybody else post this. One thing we've noticed in our environment is that when a GBIC starts to go bad, CRC errors start incrementing on a port. Currently LM doesn't monitor for CRC errors, as far as we could find. In order for this to work you have to go to the sub-interface in config mode and turn on rmon statistics. Then you get OID 1.3.6.1.2.1.16.1.1.1.X.Y, where X selects the statistic (8 for CRC errors) and Y is the number you assigned as the process ID. For example, on an HPE 5130:

```
[HP5130-Ten-GigabitEthernet1/0/51] rmon statistics 1 owner LM
```

gives you OID 1.3.6.1.2.1.16.1.1.1.8.1. There are a lot of statistics; for example, an X value of 3 gives you drops, and an X value of 5 gives you packets.

```
<HP5930> dis rmon statistics
EtherStatsEntry 1 owned by LM is VALID.
  Interface : Ten-GigabitEthernet1/0/51<ifIndex.12>
  etherStatsOctets         : 1943607812, etherStatsPkts          : 1369904813
  etherStatsBroadcastPkts  : 45523087  , etherStatsMulticastPkts : 97537116
  etherStatsUndersizePkts  : 0         , etherStatsOversizePkts  : 0
  etherStatsFragments      : 0         , etherStatsJabbers       : 0
  etherStatsCRCAlignErrors : 0         , etherStatsCollisions    : 0
  etherStatsDropEvents (insufficient resources): 0
  Incoming packets by size:
  64      : 1917342   , 65-127   : 853770762 , 128-255   : 724653502
  256-511 : 178898662 , 512-1023 : 107338128 , 1024-1518 : 1995791637
```

So far we've found this process and OID to also work on Cisco switches. I hope this helps others.

Multi-Tenant per-customer alerting thoughts/ideas

@Mosh and @Paul Armenakis, let's get this discussion started. How are you guys implementing this distinction/segregation of alerting for your customers? We've got our per-client server groups in folders within a Customers folder. The alerting flows through the rules top to bottom, so I'm handling customer-specific alerts first, with the highest-load companies at the top of the list to lower the load on the LM servers as they try to match the alert to a rule. As the rules stop running on a match, I have to build in notifications to our team whenever we hit one that is supposed to notify the client as well, as it won't move on to our team's default alert rules. I would love to see a pass-through checkbox to allow multiple alert rules to match against an alert, for exactly these reasons. As far as these go... I have a Teams integration, a Teams + group email, and a ticket generation escalation chain to address different urgencies. The escalation chains all have a blank at the end so that they fire the most urgent one, then repeat on the blank... that way, we don't end up with a million of the same alert over and over... or worse, a million of the same ticket over and over. Cole_McDonald · 6 years ago

Provide a method for assigning group visibility independent of user roles.

We want to be able to show our clients their device groups in LM, but that currently requires making a new role for every client user, because group visibility can only be modified on the role rather than on the user directly. If we could assign visibility directly to users, we could control all non-group viewing permissions for clients from a single unified role, since they otherwise have identical perms. There may be other ways to implement a solution, such as group inheritance, but the only option that currently exists is to manage hundreds of nearly identical roles, each one attached to a single client user. Any general update to customer permissions (anything not related to device viewing permissions) right now requires changing permissions in those hundreds of roles to match each other, rather than adjusting a single permission on a single role.

Alert Colors

I have a client that would really like to have different colors for the alert levels. Depending on the display being used, it can be very hard to tell whether it is an error or a critical level alert by the color. The orange alert just looks too similar to critical, and end users who do not use the platform every day cannot tell the difference. Either allowing the colors to be customer-set per portal, or a change set for all portals, would be nice.

Collector Sizes Changed?

Did the collector memory sizes change at some point? All of the calculations I've been doing for a 'Source I'm writing have treated the medium collector as a 2GB deployment and large as 4GB. When I look in the collector configuration, the UI reads 4 and 8 respectively. Did I miss a memo? Does that change my instance threshold calculations for ABCGs? Cole_McDonald · 6 years ago

REST API full resource path listings in PowerShell

```powershell
#!!! Requires Credential Manager 2.0 from the repository !!!#
Import-Module CredentialManager

function Send-Request {
    param (
        $cred,
        $accessid  = $null,
        $accesskey = $null,
        $URL,
        $data      = $null,
        $version   = '2',
        $httpVerb  = "GET"
    )
    if ( $accessId -eq $null ) {
        $accessId  = $cred.UserName
        $accessKey = $cred.GetNetworkCredential().Password
    }

    <# Use TLS 1.2 #>
    [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12

    <# Get current time in milliseconds #>
    $epoch = [Math]::Round( ( New-TimeSpan `
        -start (Get-Date -Date "1/1/1970") `
        -end (Get-Date).ToUniversalTime()).TotalMilliseconds )

    <# Concatenate Request Details #>
    # Note: $resourcePath is read from the caller's scope
    $requestVars = $httpVerb + $epoch + $data + $resourcePath

    <# Construct Signature #>
    $hmac           = New-Object System.Security.Cryptography.HMACSHA256
    $hmac.Key       = [Text.Encoding]::UTF8.GetBytes( $accessKey )
    $signatureBytes = $hmac.ComputeHash( [Text.Encoding]::UTF8.GetBytes( $requestVars ) )
    $signatureHex   = [System.BitConverter]::ToString( $signatureBytes ) -replace '-'
    $signature      = [System.Convert]::ToBase64String(
        [System.Text.Encoding]::UTF8.GetBytes( $signatureHex.ToLower() ) )

    <# Construct Headers #>
    $auth    = 'LMv1 ' + $accessId + ':' + $signature + ':' + $epoch
    $headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
    $headers.Add( "Authorization", $auth )
    $headers.Add( "Content-Type" , 'application/json' )
    # uses version 2 of the API
    $headers.Add( "X-version"    , $version )

    <# Make Request #>
    $response = Invoke-RestMethod `
        -Uri $URL `
        -Method $httpVerb `
        -Body $data `
        -Header $headers
    $result = $response
    Return $result
}

function Get-LMRestAPIObjectListing {
    param (
        $URLBase,
        $resourcePathRoot, # "/device/devices"
        $size = 1000,
        $accessKey,
        $accessId
    )
    $output  = @()
    $looping = $true
    $counter = 0
    while ($looping) {
        # re-calc offset based on iteration
        $offset       = ($counter * $size) + 1
        $resourcePath = $resourcePathRoot
        $queryParam   = "?size=$size&offset=$offset"
        $url          = $URLBase + $resourcePath + $queryParam
        # Make Request
        $response = Send-Request `
            -accesskey $accessKey `
            -accessid $accessId `
            -URL $url
        if ( $response.items.count -eq $size ) {
            # Return set is full, more items to retrieve
            $output += $response.items
            $counter++
        } elseif ( $response.items.count -gt 0 ) {
            # Return set is not full, store data, end loop
            $output += $response.items
            $looping = $false
        } else {
            # Return set is empty, no data to store, end loop
            $looping = $false
        }
    }
    write-output $output
}

#!!! Change to your company name !!!#
$company = "yourCompanyHere"
$URLBase = "https://$company.logicmonitor.com/santaba/rest"

# This will resolve to proper values if it's being run from inside LM
$accessID  = "##Logicmonitor.AccessID.key##"
$accessKey = "##Logicmonitor.AccessKey.key##"

if ( $accessID -like "##*" ) {
    # Not being run from inside LM - populate manually for testing
    Import-Module CredentialManager
    $Cred      = Get-StoredCredential -Target LogicMonitor
    $accessID  = $cred.UserName
    $accessKey = $Cred.GetNetworkCredential().Password
}

#region Get collectors
$resourcePath = "/setting/collector/collectors"
$collectors = Get-LMRestAPIObjectListing `
    -resourcePathRoot $resourcePath `
    -accessKey $accessKey `
    -accessId $accessID `
    -URLBase $URLBase
#endregion
```

As always, code provided without warranty by myself or Beyond Impact 2.0, LLC; use with caution. Just change the resource path to the piece you want and it'll dump the whole list to your variable... in this case, $collectors. Cole_McDonald · 6 years ago

Is two factor authentication down?

We are unable to log in using two factor auth this morning (0800h) in the UK. We get a 500 error when users try to get a code (any method: Authy, SMS, phone call). The LogicMonitor status page shows all greens. @Sarah Terry, I noticed this from yesterday, though:

LogicMonitor is currently investigating potential impacts to our Account Access Component(s).
Resolved - This incident has been resolved. Sep 10, 16:53 CDT
Investigating - LogicMonitor is currently investigating technical abnormalities, which may be impacting customer accounts. We will update once we have further information on the full scope of impact. Sep 10, 16:33 CDT

Mosh · 6 years ago

add step failure description to website alerts

If a step fails in a website check, the step description should be produced in the alert. I am very tired of fighting with the system to get it to do the correct/obvious thing, and my clients find it ridiculous to have to dig around to know what is actually happening. Please make the computer do the work so we don't have to. mnagel · 6 years ago

Line graph of alerts

I'd like a line graph to show alerts over time. In order of priority, I would want to easily see specified groups of devices, then by device, and then by instance. This would greatly assist in identifying trends. This post hints at a cumbersome workaround, but the ability to see the number of alerts over time is a basic necessity and should be easy to accomplish: https://communities.logicmonitor.com/topic/732-number-of-alerts-on-dashboard/ Ideally this would just be an EventSource or a DataSource which could easily be applied to any group, whether it be website, resource or device.

SNMP Multi credential setter

Allows you to set up to 10 SNMPv2 or v3 credentials; it will automatically detect which one works and set the properties on that device. Similar to Antony's, but it also works with SNMPv3 and its 5 properties. Locator code: CA7ETF (pending approval as of Aug. 29, 2019). Full details at http://blog.MikeSuding.com. Mike_Suding · 6 years ago

add "related to" list for resources

We find at times the need to monitor usage on one device interface but show traffic information from another source. For example, we may get a utilization alarm from the physical cross-connect on an external switch to the ISP, but we have no useful traffic data (or no data) on that switch. The next step would be to go to traffic details on downstream devices, like firewalls. It would be helpful to have a "Related To" URL list available to avoid manual navigation each time. Ideally, this would be in the UI and available in alert tokens. mnagel · 6 years ago

Ability to change the icons shown in a Table Widget

We are using table widgets in dashboards that we have given our customer the ability to see, and they have asked if it is possible to use different icons for the different devices being monitored. For example, we would use one icon for networking devices, another for servers, etc. Does this feature exist? Thanks, Ernie, Data Blue

fix command-by-email formatting or handling

The current command-by-email feature (when allowed, which is ONLY with the built-in mail transport) is a bit misleading, especially to those not already familiar with LM:

You may reply to this alert with these commands:
- ACK (comment) - acknowledge alert
- NEXT - escalate to next contact
- SDT X - schedule downtime for this alert on this host for X hours
- SDT datasource X - SDT for all instances of the datasource on this host for X hours
- SDT host X - SDT for entire host for X hours

I had a customer literally put in:
- ACK still working with century link
because, well, that is what it says to do. Please fix it so it is clearer, or fix the response handler to account for this use case. As always, the computer should be doing the work here, not offloading to busy humans. mnagel · 6 years ago

Add option to hide Alert Tuning and Info pages

Please add options to universally hide the Alert Tuning and Info pages when viewing an object in the Resources tree. We want to provide users with a highly simplified Resource tree view (Graphs, Alerts and SDTs only) and we do not want them to see these two pages. Mosh · 6 years ago

Fast search for disabled Host Status check

LogicMonitor's Host Status DataSource is a very important DataSource that provides notification of Host Down, and we check that it is never disabled. It's a common thing I see where someone "just wants to check ping" and disables all other DataSources, including Host Status, without realizing its importance. I've had a script that checks every device in the system in a loop and verifies none of them are disabled. It basically runs a REST call /device/devices/{ID}/devicedatasources?filter=dataSourceName:HostStatus for each device and checks alertDisableStatus and stopMonitoring. With thousands of devices this takes a long time, and I'm wondering if anyone has suggestions on how to get this information more quickly. Some way to query multiple/all devices at once, perhaps? I tried looking at it from the DataSource side (/setting/datasources/) but it doesn't provide that information. Thanks! Mike_Moniz · 6 years ago

URGENT: Please extend Collector GlobalStats DS to report all billable elements

Right now, we get only Resource counts from the LogicMonitor_Collector_GlobalStats DataSource. We need to be able to show our clients their usage of ALL chargeable elements, including Website checks, LMCloud, LMConfig, etc. I have cobbled together something via the API to try to track this offline, but we need to clearly show clients what they have via a dashboard widget, and right now the only element we can show is Devices (Resources). At the same time, please set up a way to define inputs to that via standard properties indicating a client's subscription level, if possible. We hack around this now with a datasource that pulls in a property defined at their top-level folder. mnagel · 6 years ago

Topology Mapping

Hello, I am attempting to check out the Mapping tab. I am able to add a resource, but there is nothing in the outgoing or incoming edges, and I am not seeing any way to add these manually. The other thing I am getting hung up on is the ability to add and manage ERIs and ERTs; where is this in the Settings tab? Any help would be greatly appreciated. Thanks in advance!

Alerting on incorrect SNMP Community Strings and values

I'd like to see an automated system in which LogicMonitor tests the associated SNMP community strings against hosts automatically and reports back if there is an issue (the string is wrong, or has been changed, which amounts to the same thing). This would be helpful in large environments where strings may be controlled by different groups and may be changed as part of an internal security process or the implementation of another monitoring platform. Additionally, if you add a host and the string is wrong at that point, it's up to you to remember to go back in and correct the situation. It would be awesome to have the system continue to test and alert on the snmp.community string as it does when you add the host; the logic must already be there, but it only runs when you first add the host via the wizard. I currently have several hosts in a portal with incorrect SNMP strings and there are 0 alerts on them. I could easily pass that off as "everything looks good here", but in reality I can't even reach the device. I need something (test/alert) that will verify that all snmp.community strings and other settings are correct and working, and that I'm actually pulling data on everything I'm expecting to in the system. This advanced feature would provide needed assurance in environments where mission-critical systems rely on SNMP values being accurate and reportable, and would be another strong selling point for your already robust and feature-rich product.

get, display and report on arbitrary text from SNMP

I know I've mentioned this before. One of our engineers just visited me and asked "is there any way LogicMonitor can give us the software revisions in use on all the devices on a customer network?" I think the answer is still no. It seems we have a lot of the components in place: we know the system OIDs for this data, and we have an LM collector and all the customer's hundreds of routers added. But LM does not allow us to pull back and display arbitrary data from devices, with no interpretation. There is so much information we could potentially harvest from the products: serial numbers, software versions, product names and descriptions.

List of snmp.community strings

I would like, at the group level, to have a comma-delimited list of SNMP strings, and have each device try them in order and store the one that worked. We have some sites that have more than one string, and it's up to us to discover which is which as we on-board them. Or they have one string plus some devices with a default of public.
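The try-in-order behavior is easy to script as a PropertySource today (and is essentially what the "SNMP Multi credential setter" module above does). The core loop, sketched with an injected probe so the SNMP mechanics stay out of the way (in practice `probe` would wrap an SNMP GET of sysDescr.0 via pysnmp, net-snmp's snmpget, or Groovy's Snmp helper):

```python
def find_working_community(candidates, probe):
    """Try each SNMP community string in order; return the first that works.

    `probe` is a callable taking a community string and returning True on a
    successful SNMP GET against the target device, False otherwise.
    Returns None if no candidate works.
    """
    for community in candidates:
        if probe(community):
            return community
    return None
```

A PropertySource would then write the winning string back to the device as its snmp.community property.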

API params

I'm trying to get a subset of devices, but my filter isn't filtering... here's my path and query; anything obvious that I'm doing wrong here?

```powershell
$resourcePath = "/device/devices"
$queryParam = "?size=1000&filter=CollectorId:'$($collector.id)'"
```

$collector is previously gathered and '$($collector.id)' resolves the way I expect it to... '12' in one case. But it's getting every device (first 1000) rather than pre-filtering to members of a specific collector. Cole_McDonald · 6 years ago
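Two things worth checking here (both are assumptions to verify against the REST API docs for your portal): filter field names are case-sensitive and must match the device resource's property names (e.g. `currentCollectorId` rather than `CollectorId`, with numeric values typically unquoted), and the filter value should be URL-encoded. A sketch of building the query that way:

```python
from urllib.parse import urlencode

def device_query(collector_id, size=1000):
    """Build a device-list query filtered by collector.

    Assumes the v2 device field is 'currentCollectorId' and that numeric
    filter values need no quotes -- verify both against the API docs.
    """
    params = {"size": size, "filter": f"currentCollectorId:{collector_id}"}
    return "/device/devices?" + urlencode(params)
```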

Smoothing Datapoints

We have datapoints that are very spiky by nature. In order to see the signal through the noise, so to speak, we need to average around 10 datapoints together, effectively smoothing the data. For example, if we took 1-minute polls of CPU Processor Queue or CPU Ready, we would want to plot the average of the past 10 datapoints. If anyone has suggestions on how to do this, or how they approach datasets that are inherently too noisy for threshold-based alerting, I would love to hear about it. (Solved)
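For reference, the smoothing being described is a trailing moving average: each plotted point is the mean of the last N raw samples. A sketch of the computation (shorter windows are used at the start of the series so the output has the same length as the input):

```python
def moving_average(values, window=10):
    """Trailing moving average: element i is the mean of the last `window`
    values up to and including i (fewer at the start of the series)."""
    out = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out
```

In LM itself, a complex datapoint or a script DataSource that polls the raw counter and emits the averaged value are the usual places to implement this.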

Improve Import DataSources from Repository

Over time you add new DataSources and update existing ones. At the same time, I am changing filters, AppliesTo and other settings on those same DataSources to fine-tune them to my needs. In the import-from-LM-repository list of changed DataSources, I can see the differences in a DataSource, but when I click import, the imported DataSource overwrites my tweaks. This forces me into a lot of manual work: writing down the existing settings on the side (I often do not know what came from my tweaks and what was already built in) and then comparing those settings to the newly imported DataSource. Please add an option to merge the new DataSource with the current version and import the merged version.

Monitoring Logoff/Logon Events for Anomalies

Background: we have a fairly large Citrix environment (70 customers, 1200 users). Each customer has 1 or more XenApp servers depending on how many users they have. The environment is set up in a manner where often the first step in troubleshooting is having the users log off and log on (which obviously creates an event ID). We would like to plot the number of logons/logoffs (via event IDs) per 10-minute period and look for anomalies (periods of high logons/logoffs relative to normal, or relative to the number of users in the environment). The first step for us is simply plotting the data. Any ideas on the best way to approach this problem? My initial thought is simply to write a PowerShell script to search for the event IDs over the 10 minutes and return the number, then apply this to each XenApp server in LogicMonitor, but maybe there is a better approach? I also don't know the best approach to aggregate by customer or even factor in the number of users; we may need to export to Excel to handle some of that. Ideas welcomed. (Solved)
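The bucketing step of the plan above (count events per 10-minute window) looks like this, sketched in Python with event timestamps as input; a PowerShell collector script would do the same with `Get-WinEvent` filtered on the logon/logoff event IDs and a 10-minute `-After` window:

```python
from collections import Counter
from datetime import datetime

def events_per_10min(event_times):
    """Bucket event timestamps into 10-minute windows and count them.

    Returns a Counter keyed by the start of each 10-minute window.
    """
    buckets = Counter()
    for t in event_times:
        key = t.replace(minute=(t.minute // 10) * 10, second=0, microsecond=0)
        buckets[key] += 1
    return buckets
```

With a per-poll count emitted as a datapoint, LM can plot it directly and dynamic thresholds (or a simple static one per environment size) can flag the anomalies.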