Can I monitor a JSON file? Example included.
Hi, We have a script that runs and creates output like the file attached. We need to be able to parse this file, look at the “replication” and “counts_match” fields, and alert if we don’t find certain criteria. Can LM do that? I think that LM can only access files directly if they are on a collector, so we’d make sure this file ends up there. Thanks. I guess I can’t attach a file, so here’s what it looks like:

```json
{
  "replication": [
    { "db_name": "db1 ", "replication": "running ", "local_count": "12054251", "remote_count": "8951389", "counts_match": "false" },
    { "db_name": "db2 ", "replication": "running ", "local_count": "0", "remote_count": "0", "counts_match": "true" },
    { "db_name": "db3 ", "replication": "running ", "local_count": "0", "remote_count": "0", "counts_match": "true" },
    { "db_name": "db4 ", "replication": "running ", "local_count": "97", "remote_count": "97", "counts_match": "true" },
    { "db_name": "db5 ", "replication": "running ", "local_count": "0", "remote_count": "0", "counts_match": "true" }
  ]
}
```
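A minimal sketch of one way to do this, assuming the file lands on the collector and a batch scripted (Groovy) datasource reads it; the file path is hypothetical:

```groovy
import groovy.json.JsonSlurper

// Hypothetical path -- adjust to wherever the script drops the file on the collector.
def file = new File('C:/scripts/replication_status.json')
def json = new JsonSlurper().parseText(file.text)

json.replication.each { db ->
    def name = db.db_name.trim()
    // Convert the string fields into numbers LM can threshold on: 1 = good, 0 = bad.
    def running     = db.replication.trim() == 'running' ? 1 : 0
    def countsMatch = db.counts_match.trim() == 'true'   ? 1 : 0
    println "${name}.replication_running=${running}"
    println "${name}.counts_match=${countsMatch}"
}
return 0
```

Datapoints keyed off replication_running and counts_match could then alert with a threshold of != 1.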
implement better data serialization for active discovery results

A few months ago, after being told that SNMP_Network_Interfaces was the new preferred method for network interface data collection (despite it excluding SVI interfaces and using weird backward naming for the property to include all interfaces -- interface.filtering = true), we moved ahead with implementation. We found soon after that the module was corrupting interface descriptions, and a ticket raised with support resulted in the “too bad, sucks to be you” sort of response I often receive. The corruption involves stripping all hash signs from interface descriptions. This may sound harmless, but for years we have told our clients to prepend a # to an interface description to exclude it from alerting, which our automation detects and handles. The reason this happens is that the folks who came up with LM thought it would be a cool idea to use # as a field separator, and this was embraced and extended by later developers. We recommended a solution (which was rejected): specify a transform string so the # becomes something else known that we can match, without breaking monitoring for thousands of interfaces. My request here is to work out a method to transition from the current field separator mechanism to an actual data serialization method like JSON.
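For context, scripted active discovery output is currently line-based with hash-delimited fields, which is why a # inside a description is a problem; the JSON form below is purely illustrative of the request, not an existing LM format:

```groovy
// Current discovery output is hash-delimited (roughly wildvalue##wildalias##description),
// so a description that itself contains # collides with the separator:
println 'Gi0/1##Gi0/1###do-not-alert client uplink'

// Illustrative JSON alternative (not an existing LM format): every field stays
// unambiguous no matter what characters it contains.
println groovy.json.JsonOutput.toJson([
    wildvalue  : 'Gi0/1',
    wildalias  : 'Gi0/1',
    description: '#do-not-alert client uplink'
])
```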
JSON Path capability in a Webpage DataSource

I think the answer to this is gonna be “You need to script it, dummy”, but figured I’d check anyway... I'm working on a new DataSource that pulls and interprets JSON data from a pseudo-custom system via HTTP. The system has a status page that lists the status of various components using JSON elements with this general format: ParameterName&ParameterType

My initial idea was to use the Webpage collector, since it supports JSON/BSON parsing. The issue I’m running into is that the values of most of these JSON elements are strings (i.e. “true”/”false”). I set up a DataPoint that can extract that value by putting in a JSON Path like so:

$.[##WILDVALUE##].[ParameterName&ParameterType]

...and I can see that I’m getting the “true”/”false” values back when I do a Poll Now. But, as we know, LM won’t deal with strings natively. The workaround I came up with was to get the length of the string, since “true” and “false” are different lengths. According to sources online, JSON Path should support a calculation of string length. I've also verified that I can do this by pasting my data and my JSON Path expression:

$.[ComponentName].[ParameterName&ParameterType].length

...into https://jsonpath.com/ In this parser, the .length function works as expected and returns the length of the JSON value. However, in LogicMonitor, I'm just getting this (example failure):

NAN after processing with json (postProcessParam: $.[ComponentName].[ParameterName&ParameterType].length, reference=, useValue=body)

Anyone know if there is a way to make this JSON Path length function work?
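If the .length trick can't be made to work, a scripted datapoint can sidestep the string problem entirely by converting "true"/"false" to 1/0. A rough Groovy sketch, where the status-page URL is a placeholder for the real endpoint:

```groovy
import groovy.json.JsonSlurper

// Placeholder URL; ##WILDVALUE## is substituted with the component name before
// the script runs.
def url  = 'http://##HOSTNAME##/status'
def json = new JsonSlurper().parseText(new URL(url).text)

def value = json['##WILDVALUE##']['ParameterName&ParameterType']

// Emit a real number instead of measuring string length: 1 for "true", 0 otherwise.
println(value.toString().toLowerCase() == 'true' ? 1 : 0)
return 0
```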
Can a website check look for multiple occurrences of a JSON item?

Hi, Normally I have a website check that looks through JSON and picks out a pair like Status = Complete. However, I now have a JSON result that includes multiple statuses in one file. I need to be able to parse through all the occurrences of “state” and verify that all of them are either “running” or “completed”. Is there any way to do this through the LM GUI? I was told I might have to use a Groovy script, but I don’t know what that is or how it works (not a programmer). If anyone can guide me towards a solution, that’d be great. Thanks!
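For reference, the Groovy for this can be quite short. A sketch, assuming the JSON is fetched over HTTP and that "state" is the field to check (both the URL and field name are placeholders):

```groovy
import groovy.json.JsonSlurper

// Placeholder URL; the real endpoint would come from the device or a property.
def json = new JsonSlurper().parseText(new URL('http://##HOSTNAME##/api/jobs').text)

// Collect every "state" value, wherever it appears in the document.
def states = []
def walk
walk = { node ->
    if (node instanceof Map) {
        if (node.state != null) { states << node.state.toString().toLowerCase() }
        node.values().each { walk(it) }
    } else if (node instanceof List) {
        node.each { walk(it) }
    }
}
walk(json)

// 1 = every state is "running" or "completed"; 0 = something else (or no states at all).
def ok = !states.isEmpty() && states.every { it in ['running', 'completed'] }
println(ok ? 1 : 0)
return 0
```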
Three hosts, one service check?

Hi, I have a datasource that uses a regex to pull values out of some JSON and graphs numerous values. This runs on multiple hosts, and the host goes critical if a particular value hits 0. I don't want the individual servers to go critical; I'd like a service/website check to go critical if two out of three sites return the zero value in their JSON response. I can write a datasource to check the JSON from each site, but then I have to apply it to a host, or multiple hosts, and that defeats the point, as I don't care if a single one goes away. Is it possible to create a website check that takes in the JSON, extracts the data required, then passes it on to be used in another step that checks the next site? Can I alert on values within the response? Is this even the right way to do it? It would be great if a datasource could be run from a collector group rather than tied to a specific host!
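One pattern that may fit (a sketch, not an LM-prescribed approach) is to apply a single scripted datasource to one "synthetic" resource, often the collector itself, and have that script poll all three sites, so no individual server alerts. The URLs and the field name (queue_depth) below are placeholders:

```groovy
import groovy.json.JsonSlurper

// Placeholder URLs and field name -- substitute the real status endpoints.
def sites = [
    'https://site-a.example.com/status',
    'https://site-b.example.com/status',
    'https://site-c.example.com/status'
]

def slurper = new JsonSlurper()
def zeroCount = 0
sites.each { url ->
    try {
        def json = slurper.parseText(new URL(url).text)
        if ((json.queue_depth as Long) == 0) { zeroCount++ }
    } catch (Exception e) {
        // Count an unreachable site the same as a zero response.
        zeroCount++
    }
}

// A datapoint threshold of >= 2 then means "two of three sites are at zero".
println "sites_at_zero=${zeroCount}"
return 0
```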
view integration strings

Hello, We are using custom HTTP integrations towards ServiceDesk Plus from ManageEngine. When creating these, we need to know what information we are sending to our ticketing system, so that we can see what we need to change to make it work. What we need is some tool to view the complete request we send to ServiceDesk Plus; that would be a great help for us. When we test our integrations, we only get the answer from the receiving system. Is it possible to have that kind of solution?
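In the absence of a built-in viewer, one workaround (my assumption, not an LM feature) is to temporarily point a copy of the integration at a small listener that just prints whatever request it receives. A Groovy sketch of such a listener using the JDK's built-in HTTP server; the port and path are arbitrary:

```groovy
import com.sun.net.httpserver.HttpServer

// Tiny capture endpoint: prints the method, URL, headers, and body of anything sent to it.
def server = HttpServer.create(new InetSocketAddress(8080), 0)
server.createContext('/capture') { exchange ->
    println "${exchange.requestMethod} ${exchange.requestURI}"
    exchange.requestHeaders.each { name, values -> println "${name}: ${values.join(', ')}" }
    println ''
    println exchange.requestBody.text
    exchange.sendResponseHeaders(200, -1)   // 200 OK, no response body
    exchange.close()
}
server.start()
println 'Listening on http://localhost:8080/capture ...'
```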
JSONPath Expression Filter support for Web Service Checks

Support confirmed that JSONPath filter expressions (expressions that contain ?()) are not currently supported in Web Service checks. This functionality is supported by device DataSources, so I can't imagine it would be too difficult to implement for Services.
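For anyone unfamiliar with the syntax, a filter expression of the kind being rejected looks like $.items[?(@.status == 'failed')] (field names hypothetical). Until it is supported in Web Service checks, the same selection can be done in a scripted step, for example with Groovy's JsonSlurper:

```groovy
import groovy.json.JsonSlurper

// Hypothetical payload; "items" and "status" stand in for the real field names.
def body = '{"items":[{"name":"a","status":"ok"},{"name":"b","status":"failed"}]}'

// Equivalent of the JSONPath filter $.items[?(@.status == 'failed')]
def failed = new JsonSlurper().parseText(body).items.findAll { it.status == 'failed' }
println failed.size()   // prints 1
```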
Normal Datapoints: Allow JSON responses to dynamically populate Name and Post Processor values.

While working on optimizing PowerShell scripts for LogicMonitor, we found that Active Discovery was great for some applications. However, when it came to PowerShell invoking commands (running scripts on servers), we found that Active Discovery has the potential to generate too many connections to servers. The answer we arrived at was doing everything in one script and returning it all in a JSON response. This worked significantly better than dynamic Active Discovery, but had one drawback: the data points had to be manually entered.

My suggestion is that LogicMonitor modify datapoints to allow references to the JSON response. Meaning, we would set up one DataPoint whose Name field indicates the JSON path to an array containing all of the instances, and whose Post Processor points at the corresponding JSON path for the value of each instance.

JSON example:

```json
[
  { "Title": "Name of an Instance", "Value": 1 },
  { "Title": "Name of another Instance", "Value": 2000 }
]
```

The DataPoint would look something like this:

Name        | Metric Type | Raw Metric | Post Processor | Alert Threshold
json(Title) | gauge       | output     | json(Value)    | != 1

Results would create instances on a graph just as if you had typed them out normally:

"Name of an Instance": 1
"Name of another Instance": 2000

I believe this would be more efficient and would still allow us to be dynamic.

Thanks, Jason Wainwright
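Until something along these lines exists, one workable pattern (a sketch assuming a batch scripted datasource, not the requested feature) is to have the script itself flatten the JSON into instance.datapoint=value lines:

```groovy
import groovy.json.JsonSlurper

// The JSON array from the example above, inlined so the sketch is self-contained.
def body = '''[
  { "Title": "Name of an Instance",      "Value": 1 },
  { "Title": "Name of another Instance", "Value": 2000 }
]'''

new JsonSlurper().parseText(body).each { item ->
    // Instance names generally should not contain spaces; this simple replacement
    // is just for the sake of the sketch.
    def instance = item.Title.replace(' ', '_')
    println "${instance}.Value=${item.Value}"
}
return 0
```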
PowerShell results - working with JSON

Hi everyone, I'm trying to get some results off a SimpliVity system using PowerShell and JSON. I have the PowerShell working as I would expect, but I can't get the results to show up in LogicMonitor. In collector debug, I see this when running the script:

>!posh
returns 0
output:

```json
[
  {
    "id": "53466cdf-ee82-435e-8619-334b3a3e7583",
    "name": "Fakename",
    "type": "OMNISTACK",
    "members": [
      "4230cce1-f672-e708-0ed6-3310d6db8222",
      "4230e3c6-4cf6-7228-fc86-cbce8cfa4af7",
      "564dcac8-b774-c644-cb22-e63acfd07cb9"
    ],
    "arbiter_connected": true,
    "arbiter_address": "10.1.1.6",
    "stored_compressed_data": 731283120128,
    "compression_ratio": "1.8 : 1",
    "deduplication_ratio": "300.9 : 1",
    "free_space": 14779864866201,
    "capacity_savings": 387454100800512,
    "efficiency_ratio": "530.8 : 1",
    "local_backup_capacity": 385825262157824,
    "used_logical_capacity": 388185382895616,
    "remote_backup_capacity": 0,
    "allocated_capacity": 15511146961305,
    "stored_virtual_machine_data": 2360120737792,
    "stored_uncompressed_data": 1290225192960,
    "used_capacity": 731282095104
  }
]
```

I've attached an example of what I've tried for output. Nothing is showing up.
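A common cause of "nothing shows up" is that the script prints raw JSON while the collector expects key=value lines (or a bare number) that datapoints can match. A sketch of that final translation step in Groovy, with an arbitrarily chosen field list; the same idea can be applied at the end of the existing PowerShell instead:

```groovy
import groovy.json.JsonSlurper

// Trimmed copy of the script output above; in practice this would be the string
// returned by the REST call (or PowerShell step).
def output = '''[ { "name": "Fakename", "free_space": 14779864866201,
                    "used_capacity": 731282095104, "allocated_capacity": 15511146961305 } ]'''

def cluster = new JsonSlurper().parseText(output)[0]

// Print one datapoint per line in key=value form so the collector can pick them up.
['free_space', 'used_capacity', 'allocated_capacity'].each { k ->
    println "${k}=${cluster[k]}"
}
```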