Pickup LogEvents Service Logging Agent
Enables end-to-end tracking by asynchronously fetching your custom-coded LogEvents for integration platforms and solutions where Nodinite cannot extract logged data automatically
The Nodinite Pickup LogEvents Service Logging Agent asynchronously fetches your custom-coded LogEvents from many different sources (intermediate storage), which means less code and more reliable logging in your custom-built solutions. This logging pattern typically uses custom code in various system integration solutions and enables true cross-platform, end-to-end logging.
When built-in tracking does not exist in your message broker/solution, your mission is to produce a JSON-formatted Log Event and place it on highly available intermediate storage (the source).
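A minimal sketch of producing such a JSON-formatted Log Event, here in Python. The field names (`LogDateTime`, `MessageType`, `Payload`) are illustrative assumptions, not the exact Nodinite Log Event schema; consult the Log Event format documentation for the real field names.

```python
import json
from datetime import datetime, timezone

def build_log_event(message_type, payload):
    """Build a minimal JSON Log Event. Field names are illustrative only."""
    return json.dumps({
        "LogDateTime": datetime.now(timezone.utc).isoformat(),  # when the event occurred
        "MessageType": message_type,   # hypothetical: logical message type name
        "Payload": payload,            # hypothetical: business payload/context
    })

# Example: a hypothetical order event, ready to be placed on intermediate storage
event = build_log_event("OrderCreated", {"orderId": 42})
```

The resulting string is what you write to the intermediate storage (a folder, a queue, a database row) for the Pickup Service to collect.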
Some real-life examples:
- Mule - custom connector
- IBM Sterling - custom code
- IBM Cloud - custom code, logging for example to PostgreSQL database instance(s)
- Java-based solutions
- Azure Functions using Serilog
- C#/.NET platform
The Pickup LogEvents Service Logging Agent is not a Logging Agent with logic like the other Log Agents; it reads Log Events from a source, such as a disk or a queue, and then sends them to the Nodinite Log API (RESTful).
Internally, the logging is performed using an HTTP/HTTPS POST of a Log Event to `api/LogEvent/LogEvent`. For high-performance on-premise solutions, there is also an option to bypass the Log API and write the Log Events directly to the active online Log Database using many threads.
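A sketch of what that POST looks like, using only the Python standard library. The endpoint path `api/LogEvent/LogEvent` comes from the text above; the base URL is an assumed placeholder, and sending is shown but commented out so the example stays self-contained.

```python
import json
import urllib.request

def build_log_event_request(base_url, event):
    """Build the HTTP POST request for the Nodinite Log API endpoint."""
    url = f"{base_url.rstrip('/')}/api/LogEvent/LogEvent"
    data = json.dumps(event).encode("utf-8")
    return urllib.request.Request(
        url,
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical base URL; replace with your Log API address
req = build_log_event_request("https://nodinite.example.com", {"Message": "hello"})
# urllib.request.urlopen(req)  # actual send omitted; the Pickup Service does this for you
```

Note that with the Pickup pattern, this POST is performed by the agent, not by your solution, which is exactly what removes the error handling burden described below.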
The Pickup Service fetches Log Events from the following sources:
| Source | Description | Recommended Monitoring Agent | External Link | Configuration |
|---|---|---|---|---|
| Disk / Folder | Fetch Log Events from file folders and SMB enabled shares | File Monitoring Agent | | Configuration |
| ActiveMQ | Fetch Log Events from ActiveMQ queues | Message Queuing Agent | Apache NMS ActiveMQ | Configuration |
| MSMQ | Fetch Log Events from Microsoft MSMQ | Message Queuing Agent | | Configuration |
| Azure Service Bus | Fetch Log Events from Azure Service Bus | Message Queuing Agent | Azure Service Bus | Configuration |
| PostgreSQL | Fetch Log Events from PostgreSQL database instances | Database Monitoring Agent | PostgreSQL | Configuration |
| Event Hub | Fetch Log Events from Event Hub | N/A | Event Hub | Configuration |
| AnypointMQ | Fetch Log Events from the MuleSoft CloudHub AnypointMQ platform | Message Queuing Agent | AnypointMQ | Configuration |
Missing a source? Please contact our support at email@example.com and we will build it for you.
Logging synchronously, directly from your solution to Nodinite, would mean that your solution has to deal with error handling to cope with occasions when Nodinite is unavailable for various reasons, such as:
- Network errors
- Windows Servers being unavailable pending or during a reboot (restarts, maintenance windows, security patches)
- Nodinite itself being updated
- Full database disks, typically those used for the Log Databases
- Security changes that stop services from working (accidental changes or mistakes)
Also, from an overload perspective, this approach puts less stress on Nodinite itself, since the data is fetched at a controlled pace while Nodinite is online, available, and in a healthy state.
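To illustrate the decoupling, here is a sketch of writing a Log Event to a pickup folder (the Disk / Folder source). The write succeeds even while Nodinite is offline, so none of the error handling above is needed in your solution; the folder path and file naming are assumptions for illustration, not a required convention.

```python
import json
import os
import tempfile
import uuid

def drop_log_event(pickup_folder, event):
    """Write a Log Event to a pickup folder for the Pickup Service to fetch later.

    This succeeds even while Nodinite is unavailable, so the caller
    needs no retry logic for network errors, reboots, or upgrades.
    """
    os.makedirs(pickup_folder, exist_ok=True)
    path = os.path.join(pickup_folder, f"{uuid.uuid4()}.json")
    tmp = path + ".tmp"
    with open(tmp, "w", encoding="utf-8") as f:
        json.dump(event, f)
    os.replace(tmp, path)  # atomic rename so the agent never reads a partial file
    return path

# Illustrative pickup folder under the temp directory
folder = os.path.join(tempfile.gettempdir(), "nodinite-pickup")
p = drop_log_event(folder, {"Message": "order received"})
```

Writing to a temporary name and renaming into place is a common pattern for file-based pickup folders, since it prevents the agent from fetching half-written files.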
Read more about the differences between synchronous and asynchronous messaging in this Wikipedia article.