Archive for the ‘Linux’ Category

A very restricted shell

I was recently asked to set up access for someone so that he could connect from the internet to a machine running on our company network. Securing the machine on the company network was easy enough, but I needed to create a route through to it.
The approach I took was to build a CentOS 6 machine running sshd, accessible via a NAT rule on our outside firewall, to act as a gateway to our network. This meant that the computer the user wanted to connect to wasn’t directly accessible from the internet, which made me happier.

The user had to forward a local port to the required port on the computer he wanted to access on our network. In other words, conventional ssh tunnelling and port forwarding. As a further security restriction, on the gateway computer I used the local iptables firewall to restrict not just the inbound traffic, but the outbound as well, only opening the ports that were needed.
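
From the user’s end this is just standard OpenSSH local port forwarding, and on the gateway the firewall only lets that one destination out. The host names, ports and rule below are invented for illustration, not copied from the real setup:

# On the user's machine: forward local port 2222, via the gateway, to
# port 22 on the internal target (all names and ports are examples only)
ssh -L 2222:internal-host.example.com:22 <username>@gateway.example.com

# Then, in a second terminal on the user's machine
ssh -p 2222 <username>@localhost

# On the gateway, with a default-deny policy in both directions, an outbound
# rule of roughly this shape (plus the usual established/related rule) is the
# only traffic allowed out towards the internal network
iptables -A OUTPUT -p tcp -d internal-host.example.com --dport 22 -j ACCEPT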

I also decided I wanted to restrict the shell that the user was given on the computer he connected to via ssh. Bash’s restricted mode (bash -r) seemed reasonable, but even that allowed more than was needed. My solution was to write a very simple C program that acted as the most restricted shell I could think of, i.e. it understands one command only – “exit”.

File: smallsh.c

#include <stdio.h>
#include <string.h>

/* Very simple command shell. Only supports one command "exit". */
/* Output is inelegant if the user types more than 60 chars as command */

/* Version 1.0 14/10/2014 */

int main(void)
{
  const char cmdEXIT[] = "exit\n";   /* what fgets returns when the user types "exit" */
  char userCmd[60];

  printf ("Type exit when done\n$ ");
  fflush (stdout);                   /* make sure the prompt appears before blocking on input */
  while (fgets(userCmd, sizeof userCmd, stdin)) {
    if ( ! strncmp( cmdEXIT, userCmd, 5 ) ) {
      return 0;
    } else {
      printf ("I only understand 'exit'\n$ ");
      fflush (stdout);
    }
  }
  return 0;                          /* EOF on stdin also ends the session */
}

When this was compiled, I moved the result (smallsh) to /usr/local/bin. The program is simple, and if the user types more than 60 characters in a single command at the input prompt the output doesn’t look elegant, but I, and he, can live with that!
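
For completeness, building and installing it comes down to something like this:

gcc -o smallsh smallsh.c
cp smallsh /usr/local/bin/
chmod 755 /usr/local/bin/smallsh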

The suitable user account was created on the gateway machine with:

useradd -m -d /home/<username> -s /usr/local/bin/smallsh <username>

The sshd_config file then needed the following settings changed from the default and the sshd service needed restarting.

PermitTunnel yes
AllowUsers <username>
PermitRootLogin no

(OK, I’ll accept that “PermitRootLogin no” is not needed, but as far as I’m concerned it is).

Installing IBM Data Studio Version 3.1.1 with Firefox 10 and above on Linux

The recent decision by the developers of Firefox to increase the major version number at an extraordinary rate has, I’m sure, caused problems for lots of developers trying to test for compatible versions. At my employer I know the developers now simply test for old versions rather than new ones.

IBM seem to have fallen foul of the changes too. I’ve just tried to install IBM Data Studio Version 3.1.1 on my desktop which runs a fairly standard CentOS 5.8. CentOS 5.8 comes with Firefox 10.0.4 at present. This is reasonably up to date so I didn’t imagine there would be issues so long as I got past any RHEL vs. CentOS tests in the installer. In fact, the whole install process broke very early. After much digging through the nested installation scripts, I found that the installer knew I was running Firefox but the version test reads:

supportedFirefoxVersion()
{
case "$*" in
*Firefox\ [1-9].*) return 0;;
*Firefox/[1-9].*) return 0;;
*Firefox*) return 1;;
*rv:1.[7-9]*) return 0;;
*rv:[2-9].*) return 0;;
*rv:*) return 1;;
Mozilla*\ 1.[7-9]*) return 0;;
Mozilla*\ [2-9].[0-9]*) return 0;;
*) return 1;;
esac
}

The problem, of course, is that the glob [1-9]. only matches a single-digit major version followed by a dot, so the test passes for Firefox 1 through 9 but “Firefox 10.0.4” falls through to the *Firefox*) return 1 branch. I couldn’t believe that Version 1 was OK but Version 10 wasn’t!

If you hit this issue, expand the installer zip file and look for a file called <installerdir>/launchpad/browser.sh and edit the above subroutine. I simply changed it to allow any Firefox version:

supportedFirefoxVersion()
{
case "$*" in
*Firefox\ [1-9].*) return 0;;
*Firefox/[1-9].*) return 0;;
*Firefox*) return 0;; #### this line was changed
*rv:1.[7-9]*) return 0;;
*rv:[2-9].*) return 0;;
*rv:*) return 1;;
Mozilla*\ 1.[7-9]*) return 0;;
Mozilla*\ [2-9].[0-9]*) return 0;;
*) return 1;;
esac
}

The installation then runs through successfully.
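
As an aside, if you would rather keep some sort of version check than accept every Firefox, adding patterns for two-digit major versions ahead of the catch-all should do the job just as well. This is only a sketch I haven’t run back through the installer:

supportedFirefoxVersion()
{
case "$*" in
*Firefox\ [1-9].*) return 0;;
*Firefox\ [1-9][0-9].*) return 0;; #### added: two-digit versions, e.g. Firefox 10.0.4
*Firefox/[1-9].*) return 0;;
*Firefox/[1-9][0-9].*) return 0;;  #### added: two-digit versions
*Firefox*) return 1;;
*rv:1.[7-9]*) return 0;;
*rv:[2-9].*) return 0;;
*rv:*) return 1;;
Mozilla*\ 1.[7-9]*) return 0;;
Mozilla*\ [2-9].[0-9]*) return 0;;
*) return 1;;
esac
}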

Loss of network interfaces after applying CentOS kernel 2.6.18-308.4.1 on ESXi 4.1 Update 2

One of our hosting companies requires us to keep up to date with RHEL 5 kernels on a server they support. As this is a production machine, when a new kernel is available I start by applying it to development machines and migrate the upgrade through to production over a period of 3-4 weeks.

The first stage of testing is to apply the kernel to various VMs hosted on ESXi and a couple of physical servers, most of which run CentOS 5. The most recent kernel, 2.6.18-308.4.1, seems to work OK, but on all of the ESXi-hosted VMs there is an issue with the virtual network interfaces. In all cases I use the VMXNET 3 adapter type. The hosts are running ESXi 4.1 Update 2 with a number of HP-provided bundles.

After the updates are applied and the machine rebooted, only the local loopback interface starts up. I’ve seen various suggestions as to the cause but can’t comment on those. My initial fix was to re-install the VMware Tools, which did the trick. Having got that far, on a different guest I tried just re-running vmware-config-tools.pl, which solves the problem immediately (just restart the network service afterwards). The fix appears to persist across a reboot.
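
For reference, the recovery on an affected guest amounts to no more than this, run as root (the -d switch just accepts the default answers to the configuration questions):

vmware-config-tools.pl -d     # re-run the VMware Tools configuration, taking the defaults
service network restart       # bring the interfaces back up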

Don’t ignore log files until things break….

I look after a number of services that continuously generate log files. Much of the content is difficult to make sense of, and the size of the files makes them impossible to review in detail every day. Of course, for the most part they get ignored until something goes wrong. Then you just hope there’s something in there to give you a clue as to what the problem is.

As an example, I am the DBA for several DB2 instances running on RHEL 5. These generate large diagnostics logs (db2diag.log) that occasionally contain records I really need to know about. There’s a much shorter DB2 notification log, but I prefer to track the lower-level messages. The problem is seeing the wood for the trees, i.e. finding the unusual messages I need to worry about amongst those I can safely ignore.

There is another, more insidious problem. Some messages indicate events that are OK from time to time, but a sudden surge in their frequency indicates something is wrong. Simply identifying such messages as ‘noise’ and then excluding them (we all love “grep -v” for this) might be a very bad thing to do.

DB2 comes with a reasonably useful command (db2diag) that allows filtering of the diagnostics log, but there were two issues with it for me. Firstly, on RHEL 5 the version supplied with DB2 V9.7 had for a long time lost the ability to accept piped input; I see this has been fixed as of Fix Pack 5. Secondly, I wanted a way of filtering and counting all of the standard messages I got, leaving the unusual ones to be shown in full.
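
For comparison, db2diag’s own severity filtering is along these lines; I am quoting the options from memory, so check db2diag -h on your fix pack:

# show only Severe and Error records from the instance's current db2diag.log
db2diag -level Severe,Error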

My solution was to write my own log file parser. The process of writing the parser proved to be very worthwhile. I learnt a great deal about the content of the messages and the different components that can generate them. I had studied the basics of the diagnostics log for a DB2 certification exam, but there’s nothing like working with the file for really getting a better understanding of it. My parser is nowhere near as complete as the db2diag command but it handles the messages I commonly get and simply reports in full any messages it doesn’t recognise.

In practice, the parser gets called every day under cron for each DB2 instance. The process is as follows (a very rough sketch of the filtering idea appears after the list):

  • For messages the parser recognises, count the number of times each one occurs.
  • Report in full any message the parser doesn’t recognise.
  • Produce a summary report of recognised message counts at the end.
  • The whole report is emailed to me.
  • The parsed log file is then archived. A new log file gets created automatically and old ones are later deleted by a separate clean-up task.
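
To give a flavour of the approach, the fragment below counts a few known record types and prints anything it doesn’t recognise, up to the cap. It is only a sketch: the real parser knows far more message types, copes with multi-line records and handles the archiving, and the path and patterns here are made up for illustration.

#!/bin/sh
# Illustrative only - not the production script
LOG=/home/db2inst1/sqllib/db2dump/db2diag.log   # assumed instance diag path
MAXUNKNOWN=50                                   # cap on unknown messages reported in full

awk -v max="$MAXUNKNOWN" '
  /LEVEL: Info/            { known["0 - Info"]++;  next }
  /LEVEL: Event/           { known["1 - Event"]++; next }
  /Utility phase complete/ { known["2 - Utility phase complete"]++; next }
  {
    if (++unknown <= max) print         # unrecognised: show it in full
  }
  END {
    print "Filtered Message Analysis"
    print "-------------------------"
    for (m in known) printf "%-40s %8d\n", m, known[m]
    printf "Messages unfiltered: %d\n", unknown + 0
  }
' "$LOG"

Under cron the output simply gets piped to mail, along the lines of this crontab entry (address and script name invented):

0 3 * * * /usr/local/bin/db2diag-report.sh | mail -s "db2diag summary" dba@example.com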

Most days there are no unknown messages, so the report is very simple. In case things go badly wrong I have a cap of 50 unknown messages before the report stops writing them out; I don’t want a 100MB email with 10 million occurrences of the same message, and 50 is enough to tell me something has broken!

What isn’t in my parser, and probably should be, is the ability to indicate that a particular message has appeared an unusual number of times. In truth, I’m so familiar with my own reports that I have a good idea of what to expect. However, if a number of people are supporting your system then this would probably be a good addition.

The parser means that, in effect, I read the whole of the DB2 diagnostics logs from several instances every day, and I do it in a matter of a few seconds. The emails containing the reports get saved and take up very little disk space. When unusual messages are generated they are very obvious, and I can decide whether I need to do something about them. They can be an early warning of something that is going to become a very big problem later.

A typical report looks like this (“Unfiltered” messages are ones shown in full):

Message processing completed
============================
Message timestamps range from 2012-03-27-03.00.02 to 2012-03-28-03.00.01
Messages found:        4327
Messages unfiltered:      0
Messages filtered:     4327

Filtered Message Analysis
-------------------------
Message Type                            Occurred
0 - Info                                     152
1 - Event                                   3432
2 - Health Monitor, Runstats                   3
2 - Load starting                            124
2 - Utilities Note                           120
2 - Utility phase complete                   496

It’s not just DB2 that I apply this process to. The ESXi hosts I look after are configured to send their syslogs to remote Linux servers, and a similar script parses these. In the case of ESXi, not only do I look for unusual messages, but I also get to see that regular jobs have run, e.g. auto-backup.sh (every hour) and tmpwatch (every day).