24 May 2019

Splunk Heavy Forwarder Deployment (Splunk HEC)


Splunk Technical Add-ons (TAs)

Technical add-ons such as the Splunk AWS TA and the Microsoft Cloud Services TA have traditionally been how Splunk gathers data from the cloud. However, as companies move towards scalable and highly available cloud architectures, these TAs do not hold up so well.

The official Splunk documentation lists a couple of reasons why these TAs cannot be deployed by a deployment server:

  • The deployment server is supported for deploying unconfigured add-ons only.
  • Using a deployment server to deploy the configured add-on to multiple forwarders acting as data collectors causes duplication of data.
  • The add-on uses the credential vault to secure your credentials, and this credential management solution is incompatible with the deployment server.

These same reasons also explain why these TAs are not an effective solution if you want high availability or scalability.

The TA must live on one box or you will likely be duplicating your data. This immediately leaves vertical scaling as your only option.

It also limits high availability for several reasons:

  • The app must live on one server.
  • The inputs have “checkpoints” that record what data has already been sent. These checkpoints need to be migrated to a new server if the old one fails or needs replacing, otherwise data will be duplicated.
  • Many of the TAs can’t be deployed by a deployment server because they use credential management or other custom methods. Since the app cannot be deployed and installed automatically, recovering a failed collector requires manual work, which again limits high availability.

Splunk HTTP Event Collectors

Recently, Splunk architectures have been moving towards a more robust and scalable method of getting data into Splunk: the HTTP Event Collector (HEC).

HECs use an HTTP event listener endpoint with token authentication to collect data, which means the configuration on the Splunk side is minimal: you only need to create one input and its token. You can configure a default index and sourcetype for the token, but these can also be overridden per event in the HTTP request payload.
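As a minimal sketch of what that looks like from a client, here is a Python example posting two events to HEC. The hostname and token are hypothetical placeholders; the /services/collector/event path and the "Authorization: Splunk <token>" header are the standard HEC conventions. The second event overrides the token's default index and sourcetype in its own metadata.

```python
import json
import requests

# Hypothetical HEC endpoint and token, for illustration only.
HEC_URL = "https://splunk-hec.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

headers = {"Authorization": f"Splunk {HEC_TOKEN}"}

# Two events: the first uses the token's default index and sourcetype,
# the second overrides both in its per-event metadata.
events = [
    {"event": {"message": "user login", "user": "alice"}},
    {
        "event": {"message": "payment processed", "amount": 42.50},
        "index": "payments",
        "sourcetype": "payments:json",
    },
]

# HEC accepts multiple JSON event objects concatenated in a single POST body.
body = "".join(json.dumps(e) for e in events)
response = requests.post(HEC_URL, headers=headers, data=body, timeout=5)
response.raise_for_status()
print(response.json())  # e.g. {"text": "Success", "code": 0}
```

The only Splunk-side setup this assumes is a single HEC input with that token enabled.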

To send data into a HEC endpoint you only need a way of sending HTTP events to it. The prime example in AWS is Kinesis Data Firehose, which can subscribe to log streams and deliver them directly to the HEC. This is great because your data is now being sent by a cloud-managed service that scales automatically.
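As a rough sketch of that pattern (not a definitive implementation), the snippet below creates a Firehose delivery stream with a Splunk HEC destination using boto3. Every name, ARN, URL and token is a placeholder, and subscribing your log groups to the stream is a separate step; Firehose also typically expects indexer acknowledgement to be enabled on the HEC token.

```python
import boto3

firehose = boto3.client("firehose")

# All names, ARNs and the token below are hypothetical placeholders.
firehose.create_delivery_stream(
    DeliveryStreamName="logs-to-splunk-hec",
    DeliveryStreamType="DirectPut",
    SplunkDestinationConfiguration={
        # The load-balanced HEC endpoint; 8088 is Splunk's default HEC port.
        "HECEndpoint": "https://splunk-hec.example.com:8088",
        "HECEndpointType": "Raw",
        "HECToken": "00000000-0000-0000-0000-000000000000",
        # Firehose backs up events it cannot deliver to S3.
        "S3BackupMode": "FailedEventsOnly",
        "S3Configuration": {
            "RoleARN": "arn:aws:iam::123456789012:role/firehose-splunk-backup",
            "BucketARN": "arn:aws:s3:::firehose-splunk-backup",
        },
    },
)
```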

So what makes the HEC endpoint so great?

  • The minimal config makes servers incredibly easy to set up
  • Easy-to-set-up servers are a great fit for an autoscaling group of virtual machines or containers, and an autoscaling group gives you scalability and high availability
  • You can use a load balancer to present one single endpoint (see the health check sketch after this list)
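On the load balancer point, HEC exposes a health endpoint (/services/collector/health) that makes a natural target for load balancer or autoscaling health checks. A minimal sketch, assuming a hypothetical node hostname and the default HEC port:

```python
import requests

# Hypothetical HEC node; /services/collector/health is HEC's built-in
# health endpoint and typically requires no authentication token.
HEALTH_URL = "https://splunk-hec-node.example.com:8088/services/collector/health"

def hec_is_healthy(url: str = HEALTH_URL) -> bool:
    """Return True if the HEC node responds healthy (HTTP 200)."""
    try:
        return requests.get(url, timeout=2).status_code == 200
    except requests.RequestException:
        return False

if __name__ == "__main__":
    print("healthy" if hec_is_healthy() else "unhealthy")
```

In practice you would point the load balancer's health check at that same path rather than polling it from a script.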

Therefore, with this single Splunk endpoint you can collect all of your cloud data, or indeed all of your data, knowing that your solution is scalable, highly available and, where possible, cloud managed.

Find out more from one of our consultants

 
