

Introduction to Oracle Regions, Availability Domains and Fault Domains

Written By askMLabs on Tuesday, August 11, 2020 | 4:10 PM

This video helps you understand the following:

  • Oracle Regions 
  • Availability Domains 
  • Fault Domains 

It also covers high availability across fault domains and availability domains.
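As a quick, hedged illustration that is not taken from the video, the OCI CLI can list the availability domains in a region and the fault domains within one of them; the compartment OCID and availability domain name below are placeholders.

# List the availability domains visible to your tenancy
oci iam availability-domain list --compartment-id ocid1.tenancy.oc1..exampleuniqueID

# List the fault domains inside one availability domain
oci iam fault-domain list \
    --compartment-id ocid1.tenancy.oc1..exampleuniqueID \
    --availability-domain-name "Uocm:PHX-AD-1"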






Oracle Cloud Services Availability Link:

https://www.oracle.com/cloud/data-regions.html

Documentation is available here:

https://docs.cloud.oracle.com/en-us/iaas/Content/General/Concepts/regions.htm


If you have any questions, please let me know in the comments. I will respond as soon as I can.

Thanks
SRI.


Introduction to the OCI Network Gateways - Oracle Cloud Networking

Written By askMLabs on Tuesday, August 4, 2020 | 11:13 AM

This is a short video giving you a basic idea of the different gateways available in Oracle Cloud networking. The following gateways are explained in the video (a brief CLI sketch follows the list):

  • Internet Gateway
  • NAT Gateway
  • DRG ( Dynamic Routing Gateway )
  • Service Gateway
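As a rough, hedged illustration that is not part of the video, this is approximately how an Internet Gateway and a NAT Gateway can be created with the OCI CLI; the compartment and VCN OCIDs are placeholders.

# Create an Internet Gateway for public subnets
oci network internet-gateway create \
    --compartment-id ocid1.compartment.oc1..exampleuniqueID \
    --vcn-id ocid1.vcn.oc1.phx.exampleuniqueID \
    --is-enabled true \
    --display-name askm-igw

# Create a NAT Gateway for outbound-only access from private subnets
oci network nat-gateway create \
    --compartment-id ocid1.compartment.oc1..exampleuniqueID \
    --vcn-id ocid1.vcn.oc1.phx.exampleuniqueID \
    --display-name askm-natgw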


If you have any questions, please let me know in the comments. I will respond as soon as I can.


Thanks
SRI

Control the index data retention in Splunk - Purge and Claim disk space

Written By askMLabs on Sunday, August 2, 2020 | 12:03 PM

This article is about index data retention in Splunk. Splunk is a tool used for searching, monitoring, and analyzing machine-generated big data via a web-style interface.

If you keep indexing data, all of the indexed data is stored in the index and disk usage keeps growing. At some point you need to think about data retention to save disk space. The default retention is "188697600" seconds (approximately 6 years). How long to keep historical data depends on the use case; if you need 6 years of history, you can keep the default setting and estimate the disk space requirements accordingly.

In our specific case, I don't need 6 years' worth of indexed data. Because I know the specific use case for our data, I can decide its retention. Let's assume I have to set the retention of the indexed data to 30 days.

30 days = 30 * 24 * 60 * 60 seconds = 2592000 seconds

The configuration file where we can set a default retention that applies to all indexes in Splunk is /opt/splunk/etc/system/default/indexes.conf, and the configuration parameter that controls the retention period is "frozenTimePeriodInSecs".


The following steps should help you set this parameter.

Check the disk space (my Splunk indexes use /opt/splunk for storing indexed data):


 [root@askmlabs-splunk01 ~]# df -h /opt/splunk
 Filesystem            Size  Used Avail Use% Mounted on
 /dev/mapper/data-data
                       493G  416G   52G  90% /opt/splunk
 [root@askmlabs-splunk01 ~]# 



Modify the parameter frozenTimePeriodInSecs in the file /opt/splunk/etc/system/default/indexes.conf.

NOTE: This parameter is specified in multiple places in the indexes.conf file. You need to modify it under the section named "index specific defaults".
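A minimal sketch of making the change, assuming you keep a backup copy first (the diff below compares the modified file against that backup):

cd /opt/splunk/etc/system/default
cp -p indexes.conf indexes.conf_bak    # keep a backup of the original file
vi indexes.conf                        # under "index specific defaults", set:
                                       #   frozenTimePeriodInSecs = 2592000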

[root@askmlabs-splunk01 default]# diff indexes.conf indexes.conf_bak
42c42
< frozenTimePeriodInSecs = 2592000
---
> frozenTimePeriodInSecs = 188697600
[root@askmlabs-splunk01 default]#


Now restart the Splunk instance for the new value to take effect:

[root@askmlabs-splunk01 default]# /opt/splunk/bin/splunk restart

Check whether the disk usage has dropped after changing the retention:

 [root@askmlabs-splunk01 default]# df -h /opt/splunk
 Filesystem            Size  Used Avail Use% Mounted on
 /dev/mapper/data-data
                       493G  224G  244G  48% /opt/splunk
 [root@askmlabs-splunk01 default]#


Conclusion :
Splunk indexed data retention can be controlled using the parameter  frozenTimePeriodInSecs in the configuration file /opt/splunk/etc/system/default/indexes.conf. 

Hope this information helps you. Please post your questions in the comments section.


Thanks
SRI



5 steps to use logstash plugin for New Relic Logs

Written By askMLabs on Saturday, August 1, 2020 | 1:03 PM

In this article, we will see 5 easy steps to install the Logstash plugin for New Relic Logs. You don't need to go through the complete documentation for Logstash and New Relic; just follow these simple steps and your job is done.

Let's say our requirement is to forward logs from a server to New Relic Logs. New Relic Logs is a log monitoring solution that lets you strip away the complexities of cumbersome on-prem log management with a cost-effective, cloud-based service. If you would like to read more about New Relic Logs, see the New Relic documentation.

New Relic Logs provides several ways to bring logs into New Relic; one of them is a log forwarding plugin.

We are going to use the log forwarding option in this article. New Relic supports many log forwarders; let us take Logstash and go over the 5 easy steps to install it and forward logs to New Relic Logs:
  1. Install Java
  2. Install Logstash
  3. Install New Relic Plugin for Logstash
  4. Configure New Relic Plugin
  5. Validate the installation
1. Install Java:
One of the prerequisites for Logstash is having a correct version of Java installed; Logstash requires Java 8, Java 11, or Java 14. I downloaded Java 14 from the Oracle site and installed it using the following command.
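The exact command was not preserved in this post; a minimal sketch, assuming the Oracle JDK 14 RPM was downloaded (the filename is a placeholder):

rpm -ivh jdk-14.0.2_linux-x64_bin.rpm    # install the downloaded Oracle JDK RPM
java -version                            # confirm the installed Java version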



NOTE: If you install Java to a custom location, you need to link java to that new location, as below.
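The original snippet is also missing; a hedged sketch using a symbolic link, where the JDK path is a placeholder for wherever the custom installation lives:

ln -sf /u001/app/jdk-14.0.2/bin/java /usr/bin/java    # point /usr/bin/java at the custom JDK
java -version                                         # verify the link works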

2. Install Logstash:

There are various methods to install Logstash; we used the RPM method for this setup, installing Logstash from the Elastic yum repository as outlined below.
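The original commands are missing from this post; a hedged sketch based on the standard Elastic RPM/YUM installation (the 7.x repository version is an assumption):

# Import the Elastic GPG signing key
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

# Create the Elastic yum repository definition
cat > /etc/yum.repos.d/logstash.repo <<'EOF'
[logstash-7.x]
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF

# Install Logstash from the repository
yum install -y logstash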




3. Install New Relic Plugin for Logstash : 

Use the following steps to install the New Relic plugin for Logstash: check whether the plugin is already installed (from the directory /usr/share/logstash), install it, and then validate again that it is installed, as sketched below.
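This sketch assumes the plugin name logstash-output-newrelic, as published by New Relic:

cd /usr/share/logstash

# Check whether the New Relic output plugin is already installed
bin/logstash-plugin list | grep -i newrelic

# Install the plugin
bin/logstash-plugin install logstash-output-newrelic

# Validate again that the plugin is now listed
bin/logstash-plugin list | grep -i newrelic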


4. Configure New Relic Plugin:

At this point, we have Logstash installed, and the New Relic plugin for Logs is also installed. Now let's create a pipeline configuration; for the initial validation it will simply read a logfile and write its contents to standard output.
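The original configuration was not preserved in this post; below is a hedged sketch of such a file-input/stdout-output pipeline. It writes the configuration to the config file path used later in this post, and the logfile named in the path setting is a hypothetical example.

cat > /u001/app/ag_software/askm.log <<'EOF'
input {
  file {
    # Hypothetical logfile to forward; point this at whichever file you want to test with
    path => "/u001/app/ag_software/test_messages.log"
    start_position => "beginning"
  }
}
output {
  # For the initial validation, print every event to standard output
  stdout { codec => rubydebug }
}
EOF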


The above configuration simply sends the logs to standard output: the content of the file specified in the path setting will be printed as events on standard output.

5. Validate the installation:

We have installed Logstash and the New Relic plugin and created a test configuration, so we can now check that the pipeline works. As mentioned in the section above, we are sending the content of the logfile to standard output for validation. Once the validation is done, we can change the configuration to direct logs to New Relic Logs.

NOTE: At this point Logstash is not running. When you install Logstash using the RPM method it does not start automatically, but an OS service is created during the installation. See the "Validating with New Relic Logs" section below for how to start Logstash as a service.

For the validation, start Logstash using the configuration file that we have created, "/u001/app/ag_software/askm.log":

/usr/share/logstash/bin/logstash -f  /u001/app/ag_software/askm.log

NOTE: The above command will not return to the prompt; it keeps running in the foreground and prints incoming events.

Now we can populate the logfile with some content. Execute the command below from a different terminal.
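The original command is missing from this post; a hedged sketch, assuming the hypothetical logfile path used in the configuration sketch above:

echo "test message from askmlabs $(date)" >> /u001/app/ag_software/test_messages.log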


Logstash should then print the new log line as an event on its standard output; if you see it, the configuration is successful.


Validating with New Relic Logs : 

We have successfully installed Logstash and the New Relic plugin, and validated the pipeline by sending logs to standard output. We have not yet tested sending logs to New Relic Logs. To send logs to New Relic Logs (see the sketch after this list):
  1. Create the configuration file in location /etc/logstash/logstash.conf 
  2. Start the logstash service which is created as part of installation.
  3. Send some text messages to the logfile configured for the path setting in that configuration.
  4. Validate the logs in New Relic. 
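A minimal sketch of those steps, assuming the license_key setting of the logstash-output-newrelic plugin; the license key and logfile path are placeholders, and the config file location follows the list above:

# 1. Create the pipeline configuration
cat > /etc/logstash/logstash.conf <<'EOF'
input {
  file {
    path => "/u001/app/ag_software/test_messages.log"
    start_position => "beginning"
  }
}
output {
  newrelic {
    license_key => "YOUR_NEW_RELIC_LICENSE_KEY"
  }
}
EOF

# 2. Start the Logstash service created by the RPM installation
systemctl start logstash

# 3. Send some text messages to the logfile configured above
echo "hello from askmlabs $(date)" >> /u001/app/ag_software/test_messages.log

# 4. Then search for these messages in the New Relic Logs UI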




Conclusion : 

We have successfully installed Logstash and the New Relic plugin and configured them to send logs to New Relic Logs. The complete process is explained in 5 simple steps.

Hope you enjoyed reading this article. 

Thanks
SRI




 