In this series of posts, I run through the process of aggregating logs with Wildfly, Filebeat, ElasticSearch and Kibana. In Log Aggregation - ElasticSearch & Kibana I went through creating the ElasticSearch domain, but now we need some logs to play with. For this post, I set up a simple Wildfly instance to produce those logs: I provisioned a basic EC2 instance, installed Wildfly, and deployed a very simple Java application to it.
In this series of posts, I run through the process of aggregating logs with Wildfly, Filebeat, ElasticSearch and Kibana. In this post, I set up an ElasticSearch domain, which also comes with Kibana. AWS provides ElasticSearch and Kibana as a managed service, known as Amazon Elasticsearch Service, which takes care of the infrastructure (replacing failed nodes, etc.), removing a significant chunk of the operational complexity.
When transforming a payload from one form to another, it’s often necessary to map various fields. An example of this may be as simple as mapping from a country code to the full name of said country, i.e. AUS to Australia. There are several ways to achieve this, some more flexible than others. To use the example of country code mapping, here’s the input data that we’re required to map/transform:
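As an illustration of the basic idea (a minimal sketch using hypothetical sample codes, not the post's actual input data), the simplest approach is a static lookup map in Java, falling back to the raw code when no mapping exists:

```java
import java.util.HashMap;
import java.util.Map;

public class CountryMapper {
    // Hypothetical lookup table; in practice this would be built from the input data
    private static final Map<String, String> COUNTRY_NAMES = new HashMap<>();
    static {
        COUNTRY_NAMES.put("AUS", "Australia");
        COUNTRY_NAMES.put("NZL", "New Zealand");
        COUNTRY_NAMES.put("USA", "United States");
    }

    public static String toCountryName(String code) {
        // Fall back to the raw code if we have no mapping for it
        return COUNTRY_NAMES.getOrDefault(code, code);
    }

    public static void main(String[] args) {
        System.out.println(toCountryName("AUS")); // Australia
    }
}
```

This is the least flexible option (the mapping is hard-coded), which is exactly the trade-off the post goes on to explore.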
This post is the next in the series after Cloudifying MoinMoin and Scheduling an ASG; here we update the DNS record automatically when an instance is provisioned. Now that we’ve got everything up and running on the schedule that we want, and being automatically replaced upon failure, we need to ensure our domain alias A record, i.e. wiki.easyas.info, gets updated with the IP of each new instance as it is provisioned.