
Posts

Showing posts from 2015

PowerShell Script for Switching Between Multiple Windows

Windows PowerShell has strong scripting capabilities. I have a separate computer with a big LCD screen on which I regularly watch some web-based monitoring applications, so I need those application windows to switch between one another on a timed basis. I wrote this simple PowerShell script to achieve that. You can change it according to your needs.

Feeding Active Print Jobs to Graphite

If you run a CUPS server, it's pretty likely that lots of print jobs pass through it. In my case there is more than one CUPS server running behind a load balancer, so tracking their active jobs lets you check whether the load is spread across the servers smoothly.
The schema definition for the Whisper files is expressed in storage-schemas.conf:
[print_server_stats]
pattern = ^print_stats.*
retentions = 1m:7d,30m:2y
Two retention policies are defined: one for the short term (samples are stored once every minute for seven days) and one for the long term (samples are stored once every thirty minutes for two years).
The following bash script is used on every CUPS print server to send the active job count to Graphite:
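A minimal sketch of such a feeder script, assuming lpstat for counting jobs and nc for the Graphite connection (the host variable, the metric prefix, and the one-sample-per-run structure are my assumptions, not the original script):

```shell
#!/bin/bash
# feed_graphite.sh (sketch) -- send the current active CUPS job count
# to Graphite once per invocation. GRAPHITE_HOST is an assumption;
# leave it empty for a dry run that just prints the sample.
GRAPHITE_HOST="${GRAPHITE_HOST:-}"
GRAPHITE_PORT="${GRAPHITE_PORT:-2003}"

# lpstat -o prints one line per queued job, so wc -l counts active jobs
count=$(lpstat -o 2>/dev/null | wc -l)

# Graphite plaintext protocol: "<metric.path> <value> <epoch-seconds>";
# the metric path matches the ^print_stats.* pattern in storage-schemas.conf
line="print_stats.$(hostname -s).active_jobs $count $(date +%s)"

if [ -n "$GRAPHITE_HOST" ]; then
    echo "$line" | nc -w 3 "$GRAPHITE_HOST" "$GRAPHITE_PORT"
else
    echo "$line"    # dry run: print the sample instead of sending it
fi
```

For one sample per minute (matching the 1m:7d retention), wrap it in a loop with `sleep 60` or drive it from cron.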


Once you have created the script, it can be started as a background job from a shell terminal:
# ./feed_graphite.sh >/dev/null 2>&1 &

Linux Process Status Codes

On a Linux system, every process has a state, shown in the 'STAT' column of the 'ps' command's output. 'ps' displays an uppercase letter for the process state.
Here are the different values of the state specifier:
D    uninterruptible sleep (usually IO)
R    running or runnable (on run queue)
S    interruptible sleep (waiting for an event to complete)
T    stopped, either by a job control signal or because it is being traced
W    paging (not valid since the 2.6.xx kernels)
X    dead (should never be seen)
Z    defunct ("zombie") process, terminated but not reaped by its parent
For illustration, an example output of a 'ps' command:
$ ps -eo state,pid,user,cmd
S      1  root      /sbin/init
S   5274  root      smbd -F
D   4668  postgres  postgres: wal writer process
S   7282  root      nmbd -D
S   7349  root      /usr/sbin/winbindd -F
R  11676  postfix   cleanup -z -t unix -u
S  25354  _gra…
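The same state specifier gives a quick tally of how many processes are in each state:

```shell
# Count processes per state; "state=" (with the trailing =) suppresses
# the header line, so only the one-letter codes are emitted
ps -eo state= | sort | uniq -c | sort -rn
```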

Using ssh-agent for Unattended Batch jobs with Ssh Key Passphrase

In some cases you need to make ssh connections to other servers in order to run shell commands on them remotely. But when these commands run from a cron job, the password interaction becomes a problem. Using an ssh key pair with an empty passphrase is an option, but it is not recommended. There is another option that automates the passphrase interaction.
ssh-agent provides storage for unencrypted keys, because the most secure place to hold a decrypted key is in program memory.
I am going to explain how to run a batch/cron shell script integrated with ssh-agent:
There are two servers, server1 and server2.
On server1, ssh key pair is created.
# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase): <your passphrase here>
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
........
On server2, copy the content of the id_rsa.pub file from server1 and append it to /roo…

Linux find command (exec vs xargs)

As a matter of fact, I detest having to learn more than one method to get a job done when it comes to shell scripting. But most of the time, sysadmins should pick whichever method meets their needs best.
find has the -exec option to perform actions on the files that are found. It is a common way of deleting unnecessary files without xargs.
$ find . -name "*.tmp" -type f -exec rm -f {} \;
In the above example, "{}" is safely substituted for every file name, even one with a space in it. But the "rm" command is executed once for every single file that is found. With tons of files to be removed, a lot of forked processes take place.
How about using xargs:
$ find . -name "*.tmp" -type f -print0 | xargs -0 -r rm -f
With xargs, "rm" is executed once for a large batch of files, decreasing the overhead of forking. The "-print0" option makes file names with spaces safe to pass (names are delimited by NUL characters, which xargs reads with "-0"). The xargs "-r" option is for not running the command if stdin is …
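There is also a middle ground worth knowing: modern find lets you terminate -exec with + instead of \;, which batches file names into as few command invocations as possible, much like xargs, while staying safe for names with spaces:

```shell
# Scratch directory with two .tmp files (one name contains a space)
demo=$(mktemp -d)
touch "$demo/a.tmp" "$demo/b c.tmp" "$demo/keep.txt"

# "-exec rm -f {} +" runs rm once with as many names as fit per invocation
find "$demo" -name "*.tmp" -type f -exec rm -f {} +

ls "$demo"    # only keep.txt is left
```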

Randomly Generating User Passwords Using Ansible

First, I would like to note that I have recently started using Ansible for configuration management. One of the things I need in my server environment is to implement a user password changing policy. Since the targets are numerous, I have to use a randomly generated password for each host. Because passwords are sensitive, they should be generated in encrypted form. The Ansible documentation recommends using the Python passlib library and SHA-512 hashing here.
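A task along these lines can do it. This is a sketch, not the original post's playbook: the user name, file path and length are my assumptions, and the password lookup's encrypt=sha512_crypt option requires passlib on the control machine.

```yaml
# Hypothetical task: generate (and store under credentials/) one random
# password per host, hashed with SHA-512 via passlib
- name: set a random password for the deploy user
  user:
    name: deploy
    password: "{{ lookup('password', 'credentials/' + inventory_hostname + '/deploy chars=ascii_letters,digits length=16 encrypt=sha512_crypt') }}"
```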

Ansible requires python-simplejson when Python version is 2.4

Ansible requires the python-simplejson package on hosts where the Python version is 2.4, because the json module only entered the standard library in Python 2.6. Otherwise you will see:

192.168.1.21 | FAILED >> {
    "failed": true,
    "msg": "Error: ansible requires a json module, none found!OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: mux_client_request_session: master session id: 2\r\nShared connection to 192.168.1.21 closed.\r\n",
    "parsed": false
}


On Red Hat 5, the following package should be installed:

# rpm -ivh python-simplejson-2.0.9-8.el5.x86_64.rpm


Then from ansible:

# ansible server11 -m ping
192.168.1.21 | success >> {
    "changed": false,
    "ping": "pong"
}


Adding Bulk New Contacts to Microsoft Active Directory

Sometimes it is a pain for sysadmins to add bulk objects to Windows AD. In this example I have provided a Visual Basic script which reads information about some mail-enabled contacts from a tab-separated text file, then creates them in Active Directory.

Every line of the text file includes:
Contact Name
First Name
Surname
Description
Office
Phone Number
E-Mail
City
Title
Department
Company

Graphite carbon-cache IOError with too many open files

When the carbon-cache daemon is running, clients may see errors such as connections refused by the daemon. A common reason for this is a small limit on the number of open file descriptors.
In the /var/log/carbon/console.log file, there may be exceptions like:
exceptions.IOError: [Errno 24] Too many open files: '/var/lib/graphite/whisper/systems/<host_name>/<metric_name>.wsp'
The number of files the carbon-cache daemon can open should be increased. Many Linux systems set the file descriptor limit to a maximum of 1024 by default. A value of 16384 may be good enough, depending on how many clients connect to the carbon-cache daemon simultaneously.
On Linux, the ulimit builtin (or the pam_limits configuration) sets per-process limits, while sysctl sets system-wide limits such as fs.file-max.
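With pam_limits, for example, the limit can be raised for the daemon's user in /etc/security/limits.conf (the user name carbon is an assumption; match it to your installation):

```
# /etc/security/limits.conf -- raise the open-file limit for carbon-cache
carbon  soft  nofile  16384
carbon  hard  nofile  16384
```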

Listing group membership of a user or members of a group in Linux

lid is a handy command-line tool for listing either the group memberships of a user or the users a group contains.
If you invoke lid without any options, it lists the groups containing the invoking user.
# lid
No user name specified, using root.
root(gid=0)
bin(gid=1)
daemon(gid=2)
sys(gid=3)
adm(gid=4)
disk(gid=6)
wheel(gid=10)
By default, lid lists the groups containing the given user name.
# lid mysql
mysql(gid=27)
With the -g option, lid lists the users in a group.
# lid -g users
games(uid=12)
If you don't want user or group IDs displayed, use the -n (--onlynames) option.
# lid -g -n bin
bin
root
daemon
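On systems without lid, roughly the same information is available from standard tools (id from coreutils and getent from glibc):

```shell
id -nG root                      # groups the user root belongs to
getent group bin | cut -d: -f4   # members listed in the bin group entry
```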

Linux whatis command and definitions of some commands in /bin and /sbin directories

The Linux whatis command searches the whatis database for complete words.
Here are some command descriptions from the whatis database:
addpart (8) - simple wrapper around the add partition ioctl
agetty (8) - alternative Linux getty
arch (1) - print machine hardware name (same as uname -m)
arp (7) - Linux ARP kernel module
arp (8) - manipulate the system ARP cache
arping (8) - send ARP REQUEST to a neighbour host
audispd (8) - an event multiplexor
auditctl (8) - a utility to assist controlling the kernel's audit system
auditd (8) - The Linux Audit daemon

Strict IPTables Rules for postgresql server (Configured to make streaming replication)

Here is an iptables rules script for a PostgreSQL server which is configured as a master or a standby for streaming replication.
#!/bin/sh
# IP address of this server
SERVER_IP=$(/sbin/ifconfig -a | awk '/(cast)/ { print $2 }' | cut -d':' -f2 | head -1)

DNS_SERVER=<write IP address of the dns server>
SSH_CLIENT=<write the IP address from where you make ssh connections>
PGE_SERVER=<write IP address of the other postgresql server>

# Flush iptables rules
iptables -F
iptables -X

# Set default filter policy
iptables -P INPUT DROP
iptables -P OUTPUT DROP
iptables -P FORWARD DROP

# Allow traffic on loopback adapter
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT

# Allow incoming ssh only
iptables -A INPUT -p tcp -s $SSH_CLIENT -d $SERVER_IP --sport 513:65535 --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp -s $SERVER_IP -d $SSH_CLIENT --sport 22 --dport 513:65535 -m state --state ESTABLISHED -j ACCEPT

# Allow …

Preserving Linux Shell History Even If Working with Multiple Terminals

If you continuously run shell commands on more than one Linux terminal, you probably want every shell (mostly bash) prompt to remember commands from all terminals. You can do so by adding the following environment variables and options to your .bashrc file.

# This is for ignoring duplicate entries
export HISTCONTROL=ignoredups:erasedups
# This is for large history
export HISTSIZE=102400
# This is for a big history file
export HISTFILESIZE=100000
# This is for appending commands to history file
shopt -s histappend
# This is for saving and reloading the history after each command is run
export PROMPT_COMMAND="history -a; history -c; history -r; $PROMPT_COMMAND"

Preserving links in Linux

Linux commands like tar and cp have options that control whether symbolic links are followed. When you tar up directories that contain multiple links to big files, you can end up with unnecessary copies of the same data.
In the case of cp, when a symbolic link is encountered, the data in the file the link targets is copied if the -L (dereference) option is used. But with the -d (no-dereference) option, cp copies the link itself.
Look at the following example:
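A minimal sketch of the cp behaviour described above (the scratch directory and file names are arbitrary):

```shell
demo=$(mktemp -d)
echo "payload" > "$demo/data.txt"
ln -s data.txt "$demo/link"            # relative symlink to data.txt

cp -d "$demo/link" "$demo/link_copy"   # -d: copy the link itself
cp -L "$demo/link" "$demo/file_copy"   # -L: copy the file it points to

ls -l "$demo"    # link_copy is a symlink, file_copy is a regular file
```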

Setting Up a Workgroup Directory in Linux

The following procedure may be useful for creating a workgroup folder for a team of people.
The requirements:
The workgroup name is HR and its members are cbing, mgeller and rgreen.
The folder is /data/hr.
Only the creators of files in the /data/hr folder should be able to delete them.
Members shouldn't worry about file ownership, and all members of the group need full access to the files.
Non-members should not have access to any of the files.
The following will match the requirements written above:
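A sketch of the usual setgid-plus-sticky recipe for these requirements (creating the group and memberships needs root, so those commands are shown as comments; the permission bits are the substance):

```shell
# As root, create the group and add the members:
#   groupadd hr
#   usermod -aG hr cbing; usermod -aG hr mgeller; usermod -aG hr rgreen
# Then create the folder and hand it to the group:
#   mkdir -p /data/hr && chgrp hr /data/hr

demo=$(mktemp -d)      # stands in for /data/hr below

# 3770 = setgid (2xxx: new files inherit the directory's group)
#      + sticky (1xxx: only a file's owner may delete it)
#      + rwx for owner and group, no access for others
chmod 3770 "$demo"
stat -c '%a' "$demo"   # → 3770
```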

Extracting an HTML Page Contents with Python's BeautifulSoup4

BeautifulSoup's get_text method can be used for stripping HTML tags and extracting a page's text content.
The html_content.py file is like:
# -*- coding: utf-8 -*-
import sys
import os
from bs4 import BeautifulSoup
import requests
if sys.stdout.encoding is None:
    os.putenv("PYTHONIOENCODING", 'UTF-8')
    os.execv(sys.executable, ['python']+sys.argv)
url = sys.argv[1]
page_content = requests.get(url)
text = BeautifulSoup(page_content.text).get_text()
print text
This Python code can be run with a command line argument like:
# python html_content.py http://kadirsert.blogspot.com