Unmounting Samba shares after network disconnect

Recently I had some issues with my network switch, and because of that my home server, which has some shares mounted from a NAS, started to act up. I could not use the mounted shares, the load skyrocketed because of the IO wait, and I couldn’t unmount the share. After a moment of tinkering it turned out that I needed to add the -l flag to the umount command. So, to unmount a network share when it’s not accessible, you can use this command:

umount -l /path/to/share

This flag makes umount work in lazy mode: the filesystem is detached immediately, and all the cleanup is done later, when it becomes possible.

You can also use the following command to unmount all CIFS/Samba shares.

umount -a -t cifs -l
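
If you want to see what is keeping the share busy before reaching for the lazy unmount, the sketch below may help; /path/to/share is the same example path as above, and the forced unmount is only a best-effort attempt that does not always succeed for CIFS when the server is unreachable.

fuser -vm /path/to/share    # list processes holding files open on the stuck mount point
umount -f /path/to/share    # try a forced unmount before falling back to -l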

“Unknown job” errors when installing vmware tools in Ubuntu

Recently I tried to install the new VMware Tools in Ubuntu 14.04.1 after upgrading VMware Workstation to version 10, and the build process crashed with errors like “Unknown job: vmware-tools” or “Unknown job: vmware-tools-thinprint”.

After some debugging, I found out that the problem was caused by running the build process with the sudo -s -E command, which gave me administrator rights but kept some environment variables that messed up the initscript-related commands (service, status and others from the upstart package) on which the build process relies. The solution is to build vmware-tools in a clean environment as the root user. To do that, gain superuser rights with the sudo su - command, and then build vmware-tools as usual.
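
A minimal sketch of the clean-environment build, assuming the tools were unpacked to /tmp/vmware-tools-distrib (adjust the path to wherever you extracted the VMware Tools tarball):

sudo su -                      # clean root environment, no inherited variables
cd /tmp/vmware-tools-distrib   # example path, use your own extraction directory
./vmware-install.pl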

The tip above applies to the general “Unknown job” problem with starting/restarting services, not only to the vmware-tools build process.

Apache sites not working after upgrade to Ubuntu 13.10

With the new Ubuntu 13.10, Apache has a new naming scheme for site configuration files. Because of this, your virtual servers might stop working and show the default site instead – the “It works!” page, or whatever you have configured as the default for your web server.

To fix the problem, you have to delete all the old links in /etc/apache2/sites-enabled (maybe keeping 000-default.conf if you like), rename all the site configuration files residing in /etc/apache2/sites-available so that they have the .conf suffix, and then enable them again, either by manually creating symbolic links in sites-enabled or by using the a2ensite tool.
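
For a single site the manual route could look like this; mysite is a placeholder name:

cd /etc/apache2/sites-available
mv mysite mysite.conf
rm -f ../sites-enabled/mysite    # drop the stale symlink, if any
a2ensite mysite                  # creates sites-enabled/mysite.conf
service apache2 reload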

You can also use this simple shell script:

for i in `ls -1 /etc/apache2/sites-available | grep -v -e '\.dpkg' -e '\.conf$'`
do
   # remove the stale symlink, add the .conf suffix, re-enable the site
   rm -f /etc/apache2/sites-enabled/$i
   mv /etc/apache2/sites-available/$i /etc/apache2/sites-available/$i.conf
   a2ensite $i
done

That should do the trick. Note that a2ensite accepts only the site name, without the .conf suffix.

[redmine] “Email delivery is not configured” error

If you get this error on Redmine’s email notification configuration page:

Email delivery is not configured, and notifications are disabled. Configure your SMTP server in config/email.yml and restart the application to enable them.

and you did configure the config/email.yml file, you did restart the application, and this message is still showing in the administration panel, then you can try placing the email.yml file in a different directory. For me, it helped when I placed it in /etc/redmine/default/, where Debian keeps the YAML configuration files for Redmine installation instances. If you run multiple instances on one host, you may have to change the default subdirectory name to the instance name.
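
On a Debian-packaged Redmine the fix can be as simple as the two lines below; the paths follow the Debian layout described above, and the restart command depends on how you serve Redmine, so treat both as examples.

cp config/email.yml /etc/redmine/default/email.yml
service apache2 restart    # or restart whatever actually serves your Redmine instance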
I’m writing this post because, while the solution is trivial, I’ve noticed that many questions about this error on Redmine’s forum remain unanswered.

[Linux] PHP not working in userdir (public_html)

Today I wanted to give my users the possibility to test their PHP scripts, but without all the fuss of creating a virtual host for each one of them. My first and obvious choice was userdir – the user creates a public_html directory in his home dir, puts files there, and those files are accessible via the http://servername/~username/ URL. To enable this behavior you only have to enable the userdir module (a2enmod userdir) and remember to set correct permissions on the home directory (chmod +x $HOME) and on public_html (chmod 755 $HOME/public_html). I did this, and everything was working fine except PHP scripts – the browser wanted to download them instead of displaying the processed content. It turned out that Apache in Debian has PHP disabled for userdirs by default. To enable scripting in this directory, open the file /etc/apache2/mods-enabled/php5.conf and find this piece of code:

    <IfModule mod_userdir.c>
        <Directory /home/*/public_html>
            php_admin_value engine Off
        </Directory>
    </IfModule>

and disable it, either by deleting it or by commenting it out (precede each line with a # sign). You can also change the php_admin_value engine setting to On, but if you do that, you will be unable to turn off the PHP engine in .htaccess files.
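
After commenting it out, that fragment of /etc/apache2/mods-enabled/php5.conf looks like this (reload Apache afterwards, for example with service apache2 reload):

#    <IfModule mod_userdir.c>
#        <Directory /home/*/public_html>
#            php_admin_value engine Off
#        </Directory>
#    </IfModule>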

[linux] Upgrading Debian Lenny to Testing halts on udev package

A few days ago I wanted to create a VirtualBox image of Debian Squeeze (the current testing release). I already had a Debian Lenny (stable) image, so the whole process seemed relatively easy – dist-upgrade Lenny, switch to “testing” apt sources, then dist-upgrade again. It appeared easy, but in fact there were some bumps on the road. The dist-upgrade broke on the udev package. From what I understood, udev didn’t want to be upgraded while running with the old kernel, and the kernel wouldn’t be upgraded because of some unmet dependencies. At first it seemed that the upgrade process had broken the dependency system and the easiest way out was to install the system from scratch. But the udev package has an “emergency rope”. When you pull it, you promise udev that before the next reboot you will upgrade the kernel – and udev will believe you and install. To use that rope, create an empty file /etc/udev/kernel-upgrade and then manually install the udev package, which already resides in the /var/cache/apt/archives directory. After this you can continue the dist-upgrade, using the -f switch. So the whole set of commands to upgrade the distribution is:

apt-get update
apt-get dist-upgrade

# switch APT sources from lenny to testing
cat > /etc/apt/sources.list << ENDF
deb http://ftp.us.debian.org/debian/ testing main
deb-src http://ftp.us.debian.org/debian/ testing main

deb http://security.debian.org/ testing/updates main contrib
deb-src http://security.debian.org/ testing/updates main contrib
ENDF

apt-get update
apt-get dist-upgrade

# promise udev a kernel upgrade before the next reboot, then install it by hand
# (the exact .deb name depends on the udev version sitting in your cache)
touch /etc/udev/kernel-upgrade
dpkg -i /var/cache/apt/archives/udev_151-3_i386.deb

# finish the interrupted upgrade
apt-get dist-upgrade -f

After that the upgrade should finish. But be warned – after switching from Lenny to Squeeze I experienced some problems, like the kernel not booting (I ran the previous version and regenerated the initrd – that, or something else, helped) and problems with the VirtualBox guest additions (I had to install the kernel sources so the guest additions could build themselves, then reinstall the virtualbox-ose-dkms package, and then reinstall the guest additions from the CD image attached to VirtualBox).

The upgraded system needs a few touches, but that’s not unusual. Don’t worry about “udevd: SYSFS{}= will be removed in a future udev version, please use ATTR{}= to match the event device” and similar messages while booting – they will be silenced in future versions of the affected packages. You can fix them yourself by replacing SYSFS with ATTR in the /etc/udev/rules.d/* files, but other similar warnings will appear. You can also run the upgrade-from-grub-legacy script, if you chose to chainload the new GRUB from the old one during the upgrade. There is also a significant amount of files in /var/cache/apt/archives, which can be safely deleted after a successful upgrade (in my case it was 2.5 GB of data).
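
If you do want to silence the SYSFS warnings yourself and reclaim the cache space, something along these lines should do; back up the rules files first, since the sed substitution below is my own quick fix rather than anything shipped by Debian.

sed -i.bak 's/SYSFS{/ATTR{/g' /etc/udev/rules.d/*.rules    # keeps .bak copies of the originals
apt-get clean                                              # drop the packages cached during the upgrade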

Phew, now everything works and I have a proper VirtualBox image of Debian Squeeze.

[linux] Efficiently copy whole directory trees

Recently I upgraded my A/V storage disk from 1TB to 2TB (the sizes of modern HDDs are amazing…), and I had to transfer the data from one disk to the other somehow. The easiest way was of course to simply use cp to copy the whole directory structure. But as the USB interface is slow enough as it is, I wanted to employ any possible method to speed things up.

From my old days as a sysadmin I remembered that copying, whether local or over the network, was faster if the data was copied in chunks – especially when dealing with a lot of small files.

So, to kill two birds with one stone, I employed… tar. Yeah, it’s a very versatile tool – some time soon I’ll describe how to use it as part of an encrypted network backup solution. But here, I used only two instances of tar. You have to remember that if you specify an absolute path to the directory you want to transfer, tar will record all elements of the path, only stripping the leading “/”. So there are two ways: either record the whole path and strip it when “decompressing”, or not include it at all.

Let’s assume that we’re copying data from /media/external0 to /media/external1.

First option:

cd /media/external1
tar cf - /media/external0 | tar x -f - --strip-components=2

Second option:

cd /media/external0
# using "." instead of "*" also picks up hidden files in the top-level directory
tar cf - . | tar x -C /media/external1 -f -

The first tar command “compresses” the data (either from the given directory or from the current one) and sends it to its standard output, which is piped into the second tar’s input, which in turn unpacks it into the target directory.
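
A possible variant for copies done as root, sketched under the assumption of GNU tar: extracting as root already preserves owners and permissions, and --numeric-owner keeps numeric UIDs/GIDs, which helps when the user databases on the two systems differ.

cd /media/external0
tar cpf - . | tar xpf - --numeric-owner -C /media/external1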

To see the progress, you can put the cpipe utility in the middle, like this:

tar cf - /media/external0 | cpipe -vt | tar x -f - --strip-components=2

One note on the performance of this method: I didn’t do any benchmarks, but it shouldn’t be worse than a plain cp, and tar should come out ahead if the OS is not doing any smart pre-buffering. I also decided to write this entry as an introduction to a future one about over-the-network backup using tar.

Operating system on a memory card

Windows XP on a netbook works fine for me, but as a person with a Linux background it seemed natural to try some distribution on my Wind. As I wanted to try it out first, I decided to start with a live distribution. I don’t have an external CD drive, but on a previous occasion I managed to cope with bootable USB drives, so it was a natural (and only) choice. I was just about to re-format the pendrive I had used to boot Linux from, but then I remembered that I have a couple of loose 2GB microSD cards I bought just because they were cheap, and a USB card reader. The idea of having an operating system on such a tiny memory card was very appealing!
Continue reading Operating system on a memory card

SVN problems in Debian Squeeze/testing

If you get error messages like “svn: OPTIONS of 'http://svn.example.com/svn/module': could not connect to server (http://svn.example.com)” in Debian Squeeze/testing, you probably have the same problem as I did. It appears that a broken neon library was recently transferred to the Debian testing repository, which broke the Subversion client functionality. There are a few options.
Continue reading SVN problems in Debian Squeeze/testing

Private FTP server

I wanted to share a few files from my home server, so I decided to run an FTP server on that box. I chose not to use SFTP/SCP because I don’t like the way progress reporting is handled – progress bars are updated only after a fairly large chunk of data has been transferred, and in the meantime the dialogs appear frozen. FTP is quite robust, and there is plenty of client software. I myself use the built-in FTP feature of Total Commander.
Continue reading Private FTP server