

Jun 19 / 2015 · Linux

Restrict access by IP on specific page with HAProxy

Most of the time, access rules allow only one (or a few) IPs to reach certain pages or services. But sometimes you need the opposite: to block a single IP on a specific URL (for example, when a hacker is attacking one of your webservices).

You can set up that restriction easily with HAProxy using the following rules:

acl network_restricted src IPADDR
acl restricted_page path_reg REGEX
block if restricted_page network_restricted
  • IPADDR: the IP address you want to block
  • REGEX: a regular expression matching the pages you want to restrict (for example: /mypage/v[1-9]{1}/webservice)
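
As a quick sanity check, grep -E can stand in for HAProxy's path_reg matching (both use extended regular expressions); the sample paths below are made-up examples:

```shell
# grep -E approximates HAProxy's path_reg matching.
# Only the v1 path matches: "v10" fails because the regex requires a
# single digit 1-9 followed directly by "/".
printf '/mypage/v1/webservice\n/mypage/v10/webservice\n/other\n' \
    | grep -E '/mypage/v[1-9]{1}/webservice'
# -> /mypage/v1/webservice
```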

Find the full official documentation (especially on ACLs) here: http://cbonte.github.io/haproxy-dconv/configuration-1.6.html#7
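
In context, the rules could sit in a frontend section like this (a sketch; the frontend name, bind address, IP and regex are placeholder examples):

```
frontend www
    bind *:80
    # Block one attacking IP on the matching pages only
    acl network_restricted src 203.0.113.42
    acl restricted_page path_reg /mypage/v[1-9]{1}/webservice
    block if restricted_page network_restricted
    default_backend app
```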

May 27 / 2015 · Linux

Rsyslog stops working after logrotate

You are using rsyslog to ship your logs to a centralized server, but as soon as logrotate runs on the server, no logs are sent anymore?
I will explain here why this can happen and how to fix it properly.

In logrotate, when a rotation happens the old file is renamed and a new file is created. But some processes do not notice the rename and keep their file descriptor on the old file (whatever its name/extension now is). To avoid that problem, you can use the copytruncate option in logrotate, so the file is truncated in place and the writing process keeps writing to the same file.

With copytruncate, logrotate works fine for the writing process, but rsyslog (which is reading the file, much like a tail -f command) does not understand that the file has been truncated: its saved read offset is now past the end of the file, so it stays stuck and stops sending logs…
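
This behavior can be reproduced with plain shell tools (a minimal sketch; the temporary file stands in for the log file, and the descriptor stands in for rsyslog's reader):

```shell
# Why a reader goes silent after copytruncate: its saved offset ends up
# past the end of the truncated file, so every read returns EOF.
log=$(mktemp)
printf 'one\ntwo\n' > "$log"            # 8 bytes of "old" log data
exec 3< "$log"                          # open a reader descriptor, like rsyslog does
dd bs=1 count=8 <&3 > /dev/null 2>&1    # reader has consumed all 8 bytes
: > "$log"                              # copytruncate: empty the file in place
printf 'new\n' >> "$log"                # 4 new bytes, below the reader's offset
out=$(cat <&3)                          # reader sees nothing (offset 8 > size 4)
[ -z "$out" ] && echo "reader is stuck"
exec 3<&-
rm -f "$log"
```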

To fix that, you have to add a lastaction script to the logrotate configuration that renews the rsyslog spool (state) files after rotation.

As an example, here is what a logrotate script could look like to work correctly in such a case:

/var/log/myapp/*
{
    copytruncate
    compress
    daily
    rotate 7
    notifempty
    missingok
    lastaction
            service rsyslog stop
            rm /var/spool/rsyslog/MyApp-*
            service rsyslog start
    endscript
}

The “rm /var/spool/rsyslog/MyApp-*” line has to be adapted depending on the InputFileStateFile name you used in your rsyslog configuration.

May 15 / 2015 · Linux

NPM – Warning “root” does not have permission to access the dev dir

If you try to install packages with npm and you get an error like “Warning “root” does not have permission to access the dev dir“, it means npm is trying to compile some native libraries as the wrong user, which leaves it unable to access certain directories.

To fix this, add the --unsafe-perm flag to the command (in this example, I’m installing ‘sails‘):

sudo npm install --unsafe-perm --verbose -g sails

It should do the trick 😉

Apr 27 / 2015 · Linux

Using strace with multiple PIDs

For debugging purposes, it’s sometimes necessary to trace multiple PIDs at the same time with the strace tool.

I will take a simple example: PHP-FPM. PHP-FPM creates several worker processes depending on its needs, and when debugging you can’t easily know what each process is doing. To get the results for all the PIDs created by php-fpm at once, you can use the following command:

strace -tt -T $(pidof 'php-fpm: pool www' | sed 's/[0-9][0-9]*/-p &/g')

In this command, you can see:

  • “-tt” option: displays a more precise timestamp on each line (with microseconds)
  • “-T” option: shows the time spent in each call
  • “pidof ‘php-fpm: pool www’”: retrieves all the PIDs of processes named “php-fpm: pool www” (adapt it to your process name)

Thanks to this command, you will get the strace result for all your PHP-FPM processes (you can filter them later thanks to PID displayed at the beginning of each line).
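
Recent strace versions also accept a comma-separated PID list, so the same command can be written with pgrep (a sketch; assumes a reasonably modern strace and a pgrep pattern matching your pool name):

```shell
# strace -tt -T -p "$(pgrep -d, -f 'php-fpm: pool www')"
# pgrep -d, joins the matching PIDs with commas; the same joining can be
# illustrated with printf and paste:
printf '123\n456\n' | paste -s -d, -
# -> 123,456
```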

Apr 16 / 2015 · Linux

Generating core dumps for PHP-FPM

When you see errors from PHP-FPM like “signal 11 (core dumped)” in your logs, you may need to generate core dumps to understand what’s happening.

Install packages

You first need to install the packages that let you generate and analyze dumps:

apt-get install gdb php5-dbg

System core updates

You will then need to update some sysctl parameters; these commands require root access.
You can of course change the directory where the core dumps are written depending on your configuration (here /opt/core/ is used):

mkdir -p /opt/core
echo '/opt/core/core-%e.%p' > /proc/sys/kernel/core_pattern
echo 0 > /proc/sys/kernel/core_uses_pid
ulimit -c unlimited
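
These settings do not survive a reboot; to make them permanent, the equivalent entries could go in a sysctl file (a sketch, the file name is an example):

```
# /etc/sysctl.d/60-coredump.conf (example filename)
kernel.core_pattern = /opt/core/core-%e.%p
kernel.core_uses_pid = 0
```

Note that ulimit -c is a per-process limit, not a sysctl; for PHP-FPM it is handled by the rlimit_core setting described later.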

You can use several patterns for naming your core dumps files:

%% a single % character
%p PID of dumped process
%u (numeric) real UID of dumped process
%g (numeric) real GID of dumped process
%s number of signal causing dump
%t time of dump, expressed as seconds since the Epoch, 1970-01-01 00:00:00 +0000 (UTC)
%h hostname (same as nodename returned by uname(2))
%e executable filename (without path prefix)
%E pathname of executable, with slashes ('/') replaced by exclamation marks ('!').
%c core file size soft resource limit of crashing process (since Linux 2.6.24)

Update PHP-FPM config

Once the system configuration is done, you will need to update the php-fpm configuration as well.
Edit the file /etc/php5/fpm/php-fpm.conf (or a specific pool configuration file under the pool.d directory) and uncomment the following line:

rlimit_core = unlimited

Once done, restart php5-fpm service:

/etc/init.d/php5-fpm restart

Your core dumps will now be written to the folder you configured earlier, as soon as a new crash occurs.

Check your core dumps

Check your PHP-FPM logs; if you see something like:

[01-Jan-2015 05:30:15] WARNING: [pool www] child 547934 exited on signal 11 (SIGSEGV - core dumped) after 410.674135 seconds from start

Go to the folder you chose for storing core dumps and you will see your core dump files:

# ls -l /opt/core/*
-rw------- 1 www-data www-data 124512498 Jan  1 05:30 /opt/core/core-php5-fpm.547934

Analyze a core dump

Once a core dump has been generated, you will need to analyze the file to see why it was produced.
For that, you can use the standard debugger, gdb.

Use this command line with the new file just generated to launch a debug shell and start analysis:

gdb /usr/sbin/php5-fpm /opt/core/core-php5-fpm.547934

Once the shell is launched you can use different commands to analyze the dump, such as:

  • backtrace: displays the backtrace of the core dump
    (gdb) bt
  • backtrace full: displays the full, detailed backtrace (including local variables)
    (gdb) bt full

You will then get the backtrace of the code that generated this core dump and be able to debug the application easily!
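
When many dumps pile up, the extraction can be scripted with gdb’s batch mode (a sketch; the command file path and the core file name are examples):

```shell
# Write a gdb command file, then run it non-interactively with -batch.
cat > /tmp/gdb-bt.cmds <<'EOF'
bt full
quit
EOF
# gdb -batch -x /tmp/gdb-bt.cmds /usr/sbin/php5-fpm /opt/core/core-php5-fpm.547934
cat /tmp/gdb-bt.cmds
```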

WARNING: core dumps can be quite big and can eat up disk space very quickly if many of them are generated.

Mar 13 / 2015 · Linux

Send your logs to a remote server using rsyslog (with SSL support)

Rsyslog allows you to forward logs from a file on a server to a remote server (or online service) that collects and stores all logs.

Assumed server configuration

We will assume here that you already have an rsyslog server running and listening on port 514 (TCP or UDP depending on your needs).
This server is reachable at the following hostname (used later in the configuration): rsyslog.mycompany.com

Packages Installation

First of all, you have to install the rsyslog and rsyslog-gnutls packages (the latter provides SSL support):

apt-get install rsyslog rsyslog-gnutls

Also ensure that no firewall rule on your infrastructure blocks the rsyslog traffic.

Rsyslog configuration

To use SSL communication, you first need to retrieve your server certificate. Let’s call it apimyrsyslog.crt.
Put it in the /etc/ssl/ folder and set its permissions to 0644 so it can be read by anyone but not modified.

Once this is done, create a /etc/rsyslog.d/49-myrsyslog.conf file with a configuration like this:

# Config for enabling file forwarding
$ModLoad imfile
$InputFilePollInterval 10
$PrivDropToGroup adm
$WorkDirectory /var/spool/rsyslog

# Input log file
$InputFileName LOG_FILEPATH
$InputFileTag APP_NAME
$InputFileStateFile APP_NAME
$InputFileSeverity notice
$InputRunFileMonitor
# Template and TLS configuration
$template LogFormat,"<%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% - - - %msg%\n"
$DefaultNetstreamDriverCAFile /etc/ssl/apimyrsyslog.crt
$ActionSendStreamDriver gtls
$ActionSendStreamDriverMode 1
$ActionSendStreamDriverAuthMode x509/name

Then, the log action configuration depends on your rsyslog version:

  • If rsyslog version >=7.0
# Log action configuration
if $programname == 'APP_NAME' then {
    @@rsyslog.mycompany.com:514;LogFormat
    stop
}

The “@@” prefix means TCP. If you prefer UDP, use a single “@” instead.

  • Else if rsyslog version <7.0
# Log action configuration
if $programname == 'APP_NAME' then @@rsyslog.mycompany.com:514;LogFormat
& ~

The “& ~” line discards the message once it has been forwarded (the pre-v7 equivalent of “stop”).

Adapt the configuration by replacing these placeholders with the correct values:

  • APP_NAME: unique name of log application (e.g. NGinx-Access)
  • LOG_FILEPATH: full path for log file (e.g. /var/log/nginx/access.log)
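
For instance, with the example values above the input section of the configuration would read:

```
$InputFileName /var/log/nginx/access.log
$InputFileTag NGinx-Access
$InputFileStateFile NGinx-Access
```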

Rsyslog restart

Finally, you have to restart your rsyslog service so the logs can start to be sent.

service rsyslog restart

Within a few minutes you should see that a connection has been established (you can check with the netstat command), and the logs should start showing up on the server side (which handles them according to how you, or the online service, configured it).
