
Tuesday, August 26, 2008

Linux Authentication Using OpenLDAP, Part One


Introduction

This is the first of two articles discussing a number of issues with LDAP authentication on Linux. In this installment, I will give an overview of LDAP and cover installing and configuring OpenLDAP, migrating existing account data to OpenLDAP, and setting up LDAP queries. In this series, I will focus on Red Hat Linux 7.1 (with some comments about earlier releases); however, many of the same principles apply to Debian and other Linux distributions.

Authentication, PAM, and NSS

Authentication is the process wherein a user logging on to a Linux system has their credentials checked before being allowed access. Usually, this means that a user needs to provide a login name and a password. Many different programs provide authentication, each using a different method. For example, the basic Unix login program provides a simple text interface for a user to enter a user ID and password. Graphical login systems such as XDM (or GDM or KDM) provide a different interface. Programs such as SSH can authenticate users based on things like RSA or DSA keys as well as passwords. There are many different authentication suites or protocols available on Linux today. Like all traditional Unix systems, Linux is capable of authenticating users against entries in the /etc/passwd and /etc/shadow files, but it also supports such authentication schemes as Kerberos, RADIUS, and LDAP, which stands for Lightweight Directory Access Protocol.

PAM (which stands for Pluggable Authentication Modules) is a set of libraries provided with most modern Linux distributions, and it is installed by default in Red Hat Linux. The PAM libraries provide a consistent interface to an authentication protocol. An application can use the PAM libraries to allow the use of any authentication protocol within that application, so that if the system administrator wants to change from, for example, /etc/passwd authentication to LDAP, the application does not have to be re-written or recompiled. PAM requires a PAM module for each authentication system, and many such modules are available.

Unfortunately, PAM only provides part of the information needed to keep track of users on a Linux system. In addition to being able to check that a user has entered the correct password, a Linux system needs other information, such as the user's numeric user ID, their home directory, default shell, etc. This information, which would normally be stored in the /etc/passwd file, can be determined through a system interface known as NSS, or Name Service Switch.

Only some authentication schemes provide enough information to be useful to NSS. For example, Kerberos only stores user authentication information, not details such as a home directory or default shell. Therefore, there is no NSS module for Kerberos.

What is LDAP?

LDAP, or Lightweight Directory Access Protocol, is a network protocol that is used for accessing information in an object-oriented database. LDAP includes features that make it useful to both PAM and NSS, as it can authenticate users, as well as providing user information such as home directory names and default shells to NSS.

An LDAP Server or Directory Server (sometimes called a DS for short) is a server that can send and receive information in the LDAP protocol. Typically, an LDAP server will be a piece of software that listens on the standard LDAP ports (389 and sometimes 636) for connections, and responds to LDAP queries and requests. To draw an analogy with databases, LDAP is the equivalent of SQL, and an LDAP server is like a database server such as Oracle or MySQL.

LDAP servers are particularly useful for storing information about people. This is because of the object-oriented nature of LDAP. Unlike a relational database, an object in an LDAP directory can contain an arbitrary number of attributes, and each attribute can have an arbitrary number of values. This is useful for many reasons. For example, a database row containing a column for a phone number would allow a single entry in that phone number column for each row in the database table. A person, however, may have more than one phone number, and so LDAP allows multiple phone numbers to be stored in the same person object. Note the slightly different terminology here: we say an LDAP 'directory' as opposed to a 'database', we call entries in the directory 'objects' instead of 'rows', and we call field values of an object 'attributes' instead of 'columns'.

Why Use LDAP?

There are a number of reasons why we might use LDAP:

  • LDAP allows us to centralize the information about users, passwords, home directories, etc, in a single place on a network. If we were using /etc/passwd files, for example, we would have to make sure that all passwd files were kept in sync across the network, which would be an absolute nightmare on a large network with users changing passwords regularly.
  • LDAP offers encrypted transactions. Most LDAP servers offer encrypted connections using SSL (either using Start TLS on port 389 or LDAPS on port 636), which is more secure than some mechanisms by which plain text passwords are sent over the network. An LDAP directory is also useful for other purposes. For example, it can quickly and easily be used as a company's staff e-mail and contacts directory.
  • It is possible to use LDAP in a tree structured manner, unlike the /etc/passwd or NIS tables which basically store users in a flat structure. With a large number of users it makes sense to divide them into organizational units so that they can be found and managed more easily. In the long term, this makes managing an LDAP directory less onerous than managing /etc/passwd files or an NIS/NIS+ database.

OpenLDAP

OpenLDAP is an open source implementation of an LDAP directory server. OpenLDAP is installed by default with Red Hat 7.1 or later, and is available on Red Hat versions from 6.2 onwards. Note that earlier releases of Red Hat used release 1 of the OpenLDAP product. Although this is still considered a stable release by the OpenLDAP team, for a number of security reasons, I would advise against using it. For example, it does not support SSL or schema checking. Your OpenLDAP version should be 2.0.7-3 or later.

Installing OpenLDAP

As usual, you can install OpenLDAP from the source code by obtaining the source files from the OpenLDAP web site and following the compilation instructions. My preference, however, is to install the OpenLDAP packages from the RPM files as follows. Note that you will need to install both the server and client packages if you want to set up an OpenLDAP server. First, put your Red Hat CD-ROM into your CD-ROM drive and use the following sequence of commands:

1. mount /dev/cdrom /mnt/cdrom
2. cd /mnt/cdrom/RedHat/RPMS
3. rpm -Uhv openldap-2.0.7-14.i386.rpm openldap-servers-2.0.7-14.i386.rpm openldap-clients-2.0.7-14.i386.rpm
4. umount /mnt/cdrom

It's possible that the packages are already installed on your system (to verify this, you can run rpm -q openldap). It is also possible that you may hit one or more dependency problems when installing the above RPMs. In particular, the OpenLDAP packages require the OpenSSL package and at least the krb5-libs package.

LDAP, PAM, and NSS libraries

Using LDAP will almost certainly require you to install the PAM libraries for LDAP. In Red Hat 6.2 and later, these are packaged in with the nss_ldap package (since the pam_ldap libraries are not much use without the nss_ldap libraries and vice-versa). These are normally installed by default - to test this you can run rpm -q nss_ldap. If the nss_ldap package is not installed, you can install it using RPM as follows:

 
 mount /dev/cdrom /mnt/cdrom
 cd /mnt/cdrom/RedHat/RPMS
 rpm -Uhv nss_ldap*.rpm
 umount /mnt/cdrom

If you need to obtain the source code for the pam_ldap and nss_ldap libraries, it is available from PADL.com.

Configuring OpenLDAP

Configuration of OpenLDAP is done through the /etc/openldap/slapd.conf file. There is a manual page describing the contents of the slapd.conf file (see man slapd.conf) as well as an excellent administration guide on the OpenLDAP web site. As a starting point, you might like to use the following simple configuration file:

#
# See slapd.conf(5) for details on configuration options.
# This file should NOT be world readable.
#
include         /etc/openldap/schema/core.schema
include         /etc/openldap/schema/cosine.schema
include         /etc/openldap/schema/inetorgperson.schema
include         /etc/openldap/schema/nis.schema
include         /etc/openldap/schema/rfc822-MailMember.schema
include         /etc/openldap/schema/autofs.schema
include         /etc/openldap/schema/kerberosobject.schema

#######################################################################
# ldbm database definitions
#######################################################################

database        ldbm
suffix          "o=MyCompany,c=AU"
rootdn          "uid=root,ou=People,o=MyCompany,c=AU"
rootpw          secret
directory       /var/lib/ldap
# Indices to maintain
index   objectClass,uid,uidNumber,gidNumber     eq
index   cn,mail,surname,givenname               eq,subinitial

#
# ACLs
#

access to dn=".*,ou=People,o=MyCompany,c=AU"
  attr=userPassword
by self write
by dn="uid=root,ou=People,o=MyCompany,c=AU" write
by * auth

access to dn=".*,o=MyCompany,c=AU"
by self write
by dn="uid=root,ou=People,o=MyCompany,c=AU" write
by * read

access to dn=".*,o=MyCompany,c=AU"
by * read

defaultaccess read

One thing should be noted about the configuration file above: replace "o=MyCompany,c=AU" throughout the file with a Base DN that represents your organization. Note that I prefer to use the X.500-style specification shown above, but you could use the DNS-style specification, which is "dc=mycompany,dc=com,dc=au" or similar. For example, if your company was called "farnarkle.com" you could use "dc=farnarkle,dc=com", or you could use "o=farnarkle,c=US". Remember this Base DN; it will be important later.

I have elected to include some elementary Access Control in the file. The standard slapd.conf file included with Red Hat Linux does not include ACLs, but they are mandatory for real use. You may want to expand on the above ACLs (see the slapd.conf manual or the administrator's guide.)

I have included a default root password - 'secret'. This is a bad idea once you have data in your LDAP directory. We will deal with this later.

I have not included TLS certificates, keys, or other information. I would consider this to be a security issue on a network, because without these the server will operate entirely in plain text mode. This will be covered later.

Once you have a working slapd.conf file, you should be able to start your server. This is easy enough to do, you can just run the following command:

/etc/rc.d/init.d/ldap start

Provided that the slapd.conf file is correct, you should be able to use pstree to see a running slapd process. If the slapd.conf file is incorrect, look for error messages (running slapd with a debug level, such as slapd -d 1, might help), fix up any problems you see, and try again.
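If you want a further sanity check, the following commands are one way to confirm that slapd is running and listening on the standard LDAP port (the exact output will vary by system):

 # ps aux | grep slapd
 # netstat -an | grep 389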

Migrating to OpenLDAP

Once you have your LDAP server started, you will have an empty directory. The first thing you need to do is to populate it with data from your existing authentication database.

Using the supplied LDAP tools

OpenLDAP provides a suite of tools to migrate data from your existing NIS or /etc/passwd database into LDAP. If you currently run another authentication scheme such as Kerberos or S/Key, and you are migrating to LDAP, then I'm afraid you are on your own.

In Red Hat Linux 7.1, the migration tools are in /usr/share/openldap/migration/. In Red Hat 6.2 and earlier they were in /usr/lib/openldap/migration/. In either case, open a shell window, change to that directory, and get to work. First, edit the migrate_common.ph file. Around line 72 you will see a couple of lines like this:

  $DEFAULT_MAIL_DOMAIN = "babel.com.au";
  $DEFAULT_BASE = "o=Babel,c=AU";

You will need to edit these two lines, providing your default mail domain and the Base DN that you defined earlier in the slapd.conf file. Next, it's a simple matter of running the migration tools. This can be done using a simple command, assuming that you are migrating from /etc/passwd files to LDAP:

migrate_all_online.sh

(Make sure that your LDAP server is running before using the above command.) This will ask you for the root DN and password (enter the password secret that we defined in the slapd.conf file), and will start to populate your LDAP directory.
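If you prefer to inspect the data before it is loaded, the migration kit also includes per-map scripts that write LDIF to standard output, which you can then load with ldapadd. A rough sketch, assuming the standard PADL migrate_passwd.pl script is present in the same directory and using the root DN and password from our slapd.conf:

 ./migrate_passwd.pl /etc/passwd > passwd.ldif
 ldapadd -x -D "uid=root,ou=People,o=MyCompany,c=AU" -w secret -f passwd.ldif

Reviewing the generated LDIF first makes it easy to weed out system accounts that you may not want in the directory.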

Setting up LDAP queries

Having data in your LDAP directory is all very well and good, but at some stage you are going to want to query that data. There are a standard set of command line-based LDAP query and management tools provided with OpenLDAP. These include ldapadd, ldapmodify, and ldapsearch. Each of these tools has a man page, and you would do well to read these man pages in detail.

The standard configuration file for these tools is /etc/openldap/ldap.conf. The format of this file is fairly simple; on a single system it need only contain the following two lines:

  BASE o=MyCompany,c=AU
  URI ldap://127.0.0.1

Remember to substitute the Base DN that you defined in the slapd.conf file for the o=MyCompany,c=AU entry shown above.

On a network, you may have to substitute the IP address of your LDAP server instead of 127.0.0.1 shown above. For those of you who understand LDAP concepts a little better: OpenLDAP doesn't (yet) support SLP or DNS RR based location, so you have to be fairly precise about the location of the server -- either an IP address, a host name from /etc/hosts, or something that can be found in DNS.

Once you have done that, you should be able to perform a simple search. You could start by looking for your root user by using the following simple command:

  ldapsearch -x 'uid=root'

You should see an entry fairly similar to this one:

version: 2

#
# filter: uid=root
# requesting: ALL
#

# root,People,MyCompany,AU
dn: uid=root,ou=People,o=MyCompany,c=AU
uid: root
cn: root
sn: root
mail: root@mycompany.com.au
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
objectClass: account
objectClass: posixAccount
objectClass: top
objectClass: kerberosSecurityObject
objectClass: shadowAccount
shadowMax: 99999
shadowWarning: 7
krbName: root@MYCOMPANY.COM.AU
loginShell: /bin/bash
uidNumber: 0
gidNumber: 0
homeDirectory: /root
gecos: root

# search result
search: 2
result: 0 Success

# numResponses: 2
# numEntries: 1

Now that you have come this far, stop and smile. You have managed to get LDAP working, which is sometimes not an easy task!

This brings us to the end of the first installment of this two-part series. In the next article, we will continue the discussion of OpenLDAP and Linux, covering subjects such as: Setting up PAM and NSS for LDAP, LDAP Tools, making OpenLDAP more secure and generating SSL keys for OpenLDAP.

Wednesday, August 20, 2008

Crash Course in Linux File Commands

Although GUI desktops such as KDE and GNOME help users take advantage of Linux features without functional knowledge of the command-line interface, more power and flexibility are often required. Moreover, a basic familiarity with these commands is still essential to properly automate certain functions in a shell script.

This article is a "crash course" in Linux file commands for those who are either new to the operating system or simply in need of a refresher. It includes a brief overview of the more useful commands as well as guidance regarding their most powerful applications. Combined with a little experimentation, the information included here should lead to an easy mastery of these essential commands. (Note: When a kernel tweaked with Oracle Cluster File System (OCFS2) is involved, some of these commands may behave somewhat differently. In that case, Oracle provides an OCFS2 toolset that can be a better alternative for file command purposes.)

Note that all the included examples were tested on SUSE Linux 8.0 Professional. While there is no reason to believe they will not work on other systems, you should check your documentation for possible variations if problems arise.

Background Concepts

Before delving into specifics, let's review some basics.

Files and Commands

Everything is treated as a file in the Linux/UNIX operating system: hardware devices, including the keyboard and the terminal, directories, the commands themselves and, of course, files. This curious convention is, in fact, the basis for the power and flexibility of Linux/UNIX.

Most commands, with few variations, take the form:

command [option] [source file(s)] [target file]

Getting Help

Among the most useful commands, especially for those learning Linux, are those that provide help. Two important sources of information in Linux are the on-line reference manuals, or man pages, and the whatis facility. You can access an unfamiliar command's man page description with the whatis command.

$ whatis echo

To learn more about that command use:

$ man  echo

If you do not know the command needed for a specific task, you can generate possibilities using man -k, also known as apropos, and a topic. For example:

$ man -k files

One useful but often-overlooked command provides information about using man itself:

$ man man

You can scroll through any man page using the SPACEBAR; the UP ARROW will scroll backward through the file. To quit, press q.

Classes of Users

Remember the adage "All animals are equal but some animals are more equal than others?" In the Linux world, the root user rules.

A user can switch to the root account from another login with the su (substitute user, often read as "superuser") command. To perform tasks such as adding a new user, printer, or file system, log in as root or change to superuser with the su command and the root password. System files, including those that control the initialization process, belong to root. While they may be available for regular users to read, for the sake of your system security, the right to edit them should be reserved for root.

The BASH shell

Other shells are available, but BASH, the Bourne Again Shell, is the Linux default. It incorporates features of its namesake, the Bourne shell, and those of the Korn, C and TCSH shells.

The BASH built-in command history remembers the last 500 commands entered by default. They can be viewed by entering history at the command prompt. A specific command is retrieved by pressing the UP ARROW or DOWN ARROW at the command prompt, or by entering its number in the history list preceded by "!", such as:

$ !49

You can also execute a command by its offset from the highest entry in the history list: $ !-3 would execute event number 51, if there were 53 events in the history list.

Like other shells in the UNIX/Linux world, BASH uses special environment variables to facilitate system administration. Some examples are:

  • HOME, the user's home directory
  • PATH, the search path Linux uses to search for executable images of commands you enter
  • HISTSIZE, the number of history events saved by your system

In addition to these reserved keywords, you can define your own environment variables. Oracle, for example, uses ORACLE_HOME, among other variables, that must be set in your environment for an Oracle installation to complete successfully.

Variables can be set temporarily at the prompt:

$ HISTSIZE=100

Or they can be set permanently, either system-wide in /etc/profile (which requires root privileges) or locally in .profile.

The value of an environment variable can be viewed with the echo command using a $ to access the value.

$ echo $HOME
/home/bluher

All current environment variables can be viewed with env.

Regular Expressions and Wildcards

Many Linux commands use the wildcards * and ? to match any number of characters or any single character respectively; regular pattern-matching expressions utilize a period (.) to match any single character except "new line." Both use square brackets ([ ]) to match sets of characters in addition to *. The *, however, has a similar but different meaning in each case: Although it will match one or more characters in the shell, it matches zero or more instances of the preceding character in a regular expression. Some commands, like egrep and awk, use a wider set of special characters for pattern matching.
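As a quick illustration of the difference (the file names here are hypothetical):

 $ ls test*              # shell glob: '*' matches any run of characters, so test, test.sh and test2.out all match
 $ grep 'tes*t' *.out    # regex: 's*' means zero or more 's' characters, so "tet", "test" and "tesst" all match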

File Manipulation Commands

Anatomy of a File Listing

The ls command, used to view lists of files in any directory to which a user has execute permission, has many interesting options. For example:

$ ls -liah *
22684 -rw-r--r--    1 bluher   users         952 Dec 28 18:43 .profile
19942 -rw-r--r--    1 scalish  users          30 Jan  3 20:00 test2.out
925 -rwxr-xr-x    1 scalish  users         378 Sep  2  2002 test.sh

The listing above shows 8 columns:

  • The first column indicates the inode of the file, because we used the -i option. The remaining columns are normally displayed with the -l option.
  • The second column shows the file type and file access permissions.
  • The third shows the number of links, including directories.
  • The fourth and fifth columns show the owner and the group owner of the files. Here, the owner "bluher" belongs to the group "users".
  • The sixth column displays the file size with the units displayed, rather than the default number of bytes, because we used the -h option.
  • The seventh column shows the date, which looks like three columns consisting of the month, day and year or time of day.
  • The eighth column has the filenames. Use of -a in the option list causes the list of hidden files, like .profile, to be included in the listing.

Working with Files

Files and directories can be moved (mv), copied (cp) or removed (rm). Judicious use of the -i option to get confirmation is usually a good idea.

$ cp -i ls.out ls2.out
cp: overwrite `ls2.out'?

The mv command allows the -b option, which makes a backup copy before moving files. Both rm and cp accept the powerful, but dangerous, -r option, which operates recursively on a directory and its files.

$ rm -ir Test
rm: descend into directory `Test'? y

Directories can be created with mkdir and removed with rmdir. However, because a directory containing files cannot be removed with rmdir, it is frequently more convenient to use rm with the -r option.

All files have ownership and protections for security reasons. The file access permissions, or filemode, comprise the same 10 characters described previously:

  • The first character indicates the type of file. The most common are - for a file, d for a directory, and l for a link.
  • The next nine characters are access permissions for three classes of users: the file owner (characters 2-4), the user's group (5-7) and others (8-10), where r signifies read permission, w means write permission, and x designates execute permission on a file. A dash, -, found in any of these nine positions indicates that action is prohibited by that class of user.

Access permissions can be set with character symbols or binary masks, using the chmod command. To use the binary masks, convert the character representation of the three groups of permissions into binary and then into octal format:

User class:                Owner   Group   Others
character representation:  rwx     r-x     r--
binary representation:     111     101     100
octal representation:      7       5       4

To give write permission to the group, you could use:

chmod g+w test.sh or chmod 774 test.sh

Default file permissions are set with the umask command, either systemwide in /etc/profile or locally in the .profile file. The umask value is subtracted from 777 for new directories and from 666 for new regular files to obtain the default permissions:

$ umask 022

This would result in default permissions of 755 for new directories and 644 for new regular files created by the user.
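For example (the owner, group, and date shown are illustrative):

 $ umask 022
 $ touch newfile
 $ ls -l newfile
 -rw-r--r--    1 bluher   users           0 Jan  3 20:05 newfile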

A file's ownership can be changed with chown:

$ chown bluher ls.out

Here, bluher is the new file owner. Similarly, group membership would be changed as follows:

$ chgrp devgrp ls.out

Here, devgrp is the new group.

One piece of information that ls does not provide is which files are text, and which are binary. To find this information, you can use the file * command.

Renaming Files

Two popular ways to give a file more than one name are with links and the alias command. Alias can be used to rename a longer command to something more convenient such as:

$ alias ll='ls -l'
$ ll

Notice the use of single quotes so that BASH passes the term on to alias instead of evaluating it itself. Alias can also be used as an abbreviation for lengthy pathnames:

$ alias jdev9i=/jdev9i/jdev/bin/jdev

For more information on alias and its counter-command unalias, check the man page for BASH, under the subsection "SHELL BUILTIN COMMANDS". In the last example, an environment variable could have been defined to accomplish the same result.

$ export JDEV_HOME=/jdev9i/jdev/bin/jdev
$ echo $JDEV_HOME
/jdev9i/jdev/bin/jdev
$ $JDEV_HOME

Links allow several filenames to refer to a single source file using the following format:

ln [-s] fileyouwanttolinkto newname

The ln command alone creates a hard link to a file, while using the -s option creates a symbolic link. Briefly, a hard link is almost indistinguishable from the original file, except that the inodes of the two files will be the same. Symbolic links are easier to distinguish because they appear in a long file listing with a -> indicating the source file and an l for the filetype.
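A short example makes the difference visible (the inode numbers and dates are illustrative):

 $ ln test.sh test_hard        # hard link: shares the inode of test.sh
 $ ln -s test.sh test_soft     # symbolic link: its own inode, points at test.sh
 $ ls -li test.sh test_hard test_soft
   925 -rwxr-xr-x    2 scalish  users   378 Sep  2  2002 test.sh
   925 -rwxr-xr-x    2 scalish  users   378 Sep  2  2002 test_hard
 19967 lrwxrwxrwx    1 scalish  users     7 Jan  3 20:10 test_soft -> test.sh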

Looking In and For Files

File Filters

Commands used to read and perform operations on file contents are sometimes referred to as filters. The sed and awk commands, already discussed at length in previous OTN articles, are two examples of filters that will not be discussed here.

Commands such as cat, more, and less let you view the contents of a text file from the command line, without having to invoke an editor. Cat is short for "concatenate" and will print the file contents to standard output (the screen) by default. One of the most interesting options available with cat is the -n option, which prints the file contents with numbered output lines.

$ cat -n test.out
  1  This is a test.

As cat outputs all lines in a file at once, you may prefer to use more and less because they both output file contents one screen at a time. Less is an enhanced version of more that allows key commands from the vi text editor to enhance file viewing. For example, d scrolls forward and b scrolls backward N lines (if N is specified before d or b.) The value entered for N becomes the default for subsequent d commands. The man page utility uses less to display manual contents.

Redirection and Pipes

Redirection allows command output to be "redirected" to a file other than standard output, or, input. The standard symbol for redirection, >, creates a new file. The >> symbol appends output to an existing file:

$ more test2.out
Another test.
$ cat test.out >> test2.out
$ cat test2.out
Another test.
This is a test.
Standard input can also be redirected, using the < symbol, so that a command reads from a file instead of the keyboard:

$ cat < test2.out

Error messages are redirected and appended with 2> and 2>> using the format:

$ command 2> name_of_error_file

To avoid unintentionally overwriting an existing file, use the BASH built-in command set:

$ set -o noclobber

This feature can be overridden by placing the >| symbol between your command and the output file. To turn it off, use +o in place of -o.
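A quick demonstration of noclobber in action:

 $ set -o noclobber
 $ cat test.out > test2.out
 bash: test2.out: cannot overwrite existing file
 $ cat test.out >| test2.out   # >| overrides noclobber
 $ set +o noclobber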

Redirection works between a command, or file, and a file. One term of the redirection statement must be a file.

Pipes use the | symbol and work between commands. For instance, you could send the output of a command directly to the printer with:

$ ls -l * | lpr

A command in the history list can be found quickly with:

$ history | grep cat

More Filters

Grep, fgrep and egrep all print lines matching a pattern. All three commands search files for a specified pattern, which is helpful if you can't remember the name of a needed file. The basic format is:

grep [options] PATTERN [FILE...]

$ grep -r 'Subject' nsmail

CTRL-C will terminate output of the above or any other command.

Perhaps the most useful option with grep is -s. If you search through system files as anything other than root, error messages will be generated for every file to which you do not have access permission. This command suppresses those messages.

Fgrep, also invoked as grep -F, looks only for fixed strings, rather than the regular expressions that grep accepts. Egrep, also invoked as grep -E, accepts patterns containing a wider selection of special characters, such as |, which signifies the conditional OR operator.

$ egrep 'Subject|mailto' *

Finding Files

The GNU version of the find command is powerful, flexible and more forgiving than classic versions found on UNIX systems. It is useful for tasks involving a directory structure, including finding and executing commands on files. The basic format of the find command is:

$ find startdirectory options matchcriteria [actionoptions] 

If you know the name of a file, or even part of the name, but not the directory it is in, you can do this:

$ find . -name 'test*'
./test
./jdevhome/mywork/EmpWS/EmpBC4J/test

Unlike classic UNIX systems, the -print action at the end is not required in Linux, as it is assumed if no other action option is designated. A dot ( . ) in the startdirectory position causes find to begin a search in your working directory. A double dot, .., begins a search in the parent directory. You can start a search in any directory.

Note that you can use wildcards as part of the search criteria as long as you enclose the whole term in single quotes.

$ find . -name 'test*' -print
./test.out
./test2.out

To produce a list of files with the .out extension:

$ find /home -name '*.out'

Remember, however, that you will probably get numerous "Permission denied" error messages unless you run the command as superuser.

One of the most powerful search tools is the -exec action used with grep:

$ find . -name '*.html' -exec grep 'mailto:foo@yahoo.com' {} \;

This command says to look for an HTML file (*.html) and execute (-exec) the grep command on the current file ({}). When using the -exec action, a semicolon (;) is required, as it is for a few other actions when using find. The backslash (\) and quotes are needed to ensure that BASH passes these terms through so they are interpreted by the command rather than the shell.

Now in Command

There are many more useful commands available in Linux, and powerful ways to utilize them, than can be covered here. Moreover, there is often more than one way to accomplish many tasks.

We have looked at only some of the most commonly used and instructive Linux file commands. A mastery of these basic but critical tools should move your Linux education to the fast track. With the man pages at your fingertips, and a willingness to experiment, you now have enough information to begin exploring the power of Linux file operations.

In my next article, I'll provide a similar explanation of Linux system commands.

Squid (Proxy Server Software)

We all know that using a proxy is one way for LAN users to connect to the Internet. But do you know how to turn your own PC into a proxy server? There are many proxy server programs for Windows, such as WinGate and SyGate. Today, however, I will introduce a Linux program named Squid. You can find it in most Linux distributions.

First, a little background. A proxy server is based on the TCP/IP protocol and listens on a particular port, such as 3128. A computer that runs proxy server software is called a proxy server. If other computers want to connect to the Internet through the proxy server, they need to know the proxy server's IP address and proxy port (such as 3128), which are used to configure client software such as IE and ICQ.

The main functions of a proxy server are:

  • The proxy server can cache the website content that its clients visit, which speeds up subsequent visits.
  • The proxy server can give you access to otherwise forbidden sites. For example, if the LAN administrator blocks your access to my-proxy.com, you can still visit it through a proxy.
  • The proxy server can control the access of its clients. I will say more about this below.

Maybe you know another Linux tool, IPchains, which can also be used for access control. The problem is that IPchains doesn't support DNS parsing: you have to list all the IP addresses of the websites you want to control. It's different with Squid; you can simply forbid access to any domain whose suffix is .tw or .cn, while the DNS resolution is handled by the ISP.

Now I will give you an example. We use a PC with two network cards as our proxy server. The first card (eth0) connects to the local area network (LAN) and the second one (eth1) connects to the Internet. We use Red Hat Linux 8.0 and Squid (which comes with the OS).

Just like other Linux software, Squid works according to its config file. Its default config file is /etc/squid/squid.conf. It is more than ten pages long and contains the configuration documentation; however, we will only use a small part of it. I list the most important options below. Most of them are self-explanatory.

  http_port 3128

  #the port that the proxy server monitors

  cache_dir /var/cache/squid 100 16 32

  #cache dir size(MB), the number of first level subdir, the number of second level subdir

  cache_access_log /var/log/squid/access.log

  cache_log /var/log/squid/cache.log

  acl all src 0.0.0.0/0.0.0.0

  acl head src 192.168.0.2/255.255.255.255 192.168.0.3/255.255.255.255

  acl normal src 192.168.0.21-192.168.0.99/255.255.255.255

  acl denysite dstdomain tw cn

  acl denyip dst 61.136.135.04/255.255.255.255

  acl dnsport port 53

  http_access allow head

  http_access deny denysite

  http_access deny denyip

  http_access allow normal

  http_access deny dnsport

We can know from the config file that:

  • Squid will monitor the port 3128
  • The cache dir is /var/cache/squid and its size is 100MB
  • The users 192.168.0.2 and 192.168.0.3 can access all the websites
  • The users 192.168.0.21-192.168.0.99 can't visit the website whose domain suffix is .tw or .cn
  • The users 192.168.0.21-192.168.0.99 can not visit the website whose IP is 61.136.135.4
  • Other users cannot connect to servers on port 53

We can see that the config file uses the keyword "acl" to define user groups and destination groups, and uses "http_access" to control the access of those groups. There are different keywords after "acl", such as "src", "dst", "proto", "port" and "dstdomain".

Notice that the rules are evaluated from the top down. The judgement (allow or deny) is made by the first "http_access" rule that matches; Squid does not go through all the rules. So it's useless to add "http_access deny head" after "http_access allow head".

If a user is not included in any of the acl groups, the default access control is the reverse of the last "http_access" rule. For example, the user 192.168.0.5 is allowed to use the Internet even though it is not defined in any group. So you had best add "http_access deny all" at the end of the config file.
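Once the config file is ready, restart Squid and test it from a client machine. A minimal check, assuming the proxy server's LAN address is 192.168.0.1 (substitute your own):

 # service squid restart

Then, on a client:

 $ export http_proxy=http://192.168.0.1:3128/
 $ wget http://www.example.org/

Watching /var/log/squid/access.log while you test will show whether requests are being allowed or denied by your acl rules.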

Thursday, August 14, 2008

Iptables Basics

I'm sure many of you have been wondering how to use iptables to set up a basic firewall. I was wondering the same thing for a long time until I recently figured it out. I'll try to explain the basics to at least get you started.

First you need to know how the firewall treats packets leaving, entering, or passing through your computer. Basically there is a chain for each of these. Any packet entering your computer goes through the INPUT chain. Any packet that your computer sends out to the network goes through the OUTPUT chain. Any packet that your computer picks up on one network and sends to another goes through the FORWARD chain. The chains are half of the logic behind iptables themselves.

Now the way that iptables works is that you set up certain rules in each of these chains that decide what happens to packets of data that pass through them. For instance, if your computer was to send out a packet to www.yahoo.com to request an HTML page, it would first pass through the OUTPUT chain. The kernel would look through the rules in the chain and see if any of them match. The first one that matches will decide the outcome of that packet. If none of the rules match, then the policy of the whole chain will be the final decision maker. Then whatever reply Yahoo! sent back would pass through the INPUT chain. It's no more complicated than that.

Now that we have the basics out of the way, we can start working on putting all this to practical use. There are a lot of different letters to memorize when using iptables and you'll probably have to peek at the man page often to remind yourself of a certain one. Now let's start with manipulation of certain IP addresses. Suppose you wanted to block all packets coming from 200.200.200.1. First of all, -s is used to specify a source IP or DNS name. So from that, to refer to traffic coming from this address, we'd use this:

iptables -s 200.200.200.1

But that doesn't tell iptables what to do with the packets. The -j option is used to specify what happens to the packet. The most common three targets are ACCEPT, REJECT (the iptables equivalent of what ipchains called DENY), and DROP. Now you can probably figure out what ACCEPT does and it's not what we want. REJECT sends a message back that this computer isn't accepting connections. DROP just totally ignores the packet. If we're really suspicious about this certain IP address, we'd probably prefer DROP over REJECT. So here is the command with the result:

iptables -s 200.200.200.1 -j DROP

But the computer still won't understand this. There's one more thing we need to add and that's which chain it goes on. You use -A for this. It just appends the rule to the end of whichever chain you specify. Since we want to keep the computer from talking to us, we'd put it on INPUT. So here's the entire command:

iptables -A INPUT -s 200.200.200.1 -j DROP

This single command would ignore everything coming from 200.200.200.1 (with exceptions, but we'll get into that later). The order of the options doesn't matter; the -j DROP could go before -s 200.200.200.1. I just like to put the outcome part at the end of the command. Ok, we're now capable of ignoring a certain computer on a network. If you wanted to keep your computer from talking to it, you'd simply change INPUT to OUTPUT and change the -s to -d for destination. Now that's not too hard, is it?
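For example, the outbound version of the same rule would look like this:

 iptables -A OUTPUT -d 200.200.200.1 -j DROP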

So what if we only wanted to ignore telnet requests from this computer? Well that's not very hard either. You might know that port 23 is for telnet, but you can just use the word telnet if you like. There are at least 3 protocols that can be specified: TCP, UDP, and ICMP. Telnet, like most services, runs on TCP so we're going with it. The -p option specifies the protocol. But TCP doesn't tell it everything; telnet is only a specific protocol used on the larger protocol of TCP. After we specify that the protocol is TCP, we can use --destination-port to denote the port that they're trying to contact us on. Make sure you don't get source and destination ports mixed up. Remember, the client can run on any port, it's the server that will be running the service on port 23. Any time you want to block out a certain service, you'll use --destination-port. The opposite is --source-port in case you need it. So let's put this all together. This should be the command that accomplishes what we want:


iptables -A INPUT -s 200.200.200.1 -p tcp --destination-port telnet -j DROP
 
And there you go. If you wanted to specify a range of IP's, you could use 200.200.200.0/24. This would specify any IP that matched 200.200.200.*. Now it's time to fry some bigger fish. Let's say that, like me, you have a local area network and then you have a connection to the internet. We're going to also say that the LAN is eth0 while the internet connection is called ppp0. Now suppose we wanted to allow telnet to run as a service to computers on the LAN but not on the insecure internet. Well there is an easy way to do this. We can use -i for the input interface and -o for the output interface. You could always block it on the OUTPUT chain, but we'd rather block it on the INPUT so that the telnet daemon never even sees the request. Therefore we'll use -i. This should set up just the rule:

iptables -A INPUT -p tcp --destination-port telnet -i ppp0 -j DROP
 
So this should close off the port to anyone on the internet yet kept it open to the LAN. Now before we go on to more intense stuff, I'd like to briefly explain other ways to manipulate rules. The -A option appends a rule to the end of the list, meaning any matching rule before it will have say before this one does. If we wanted to put a rule before the end of the chain, we use -I for insert. This will put the rule in a numerical location in the chain. For example, if we wanted to put it at the top of the INPUT chain, we'd use "-I INPUT 1" along with the rest of the command. Just change the 1 to whatever place you want it to be in. Now let's say we wanted to replace whatever rule was already in that location. Just use -R to replace a rule. It has the same syntax as -I and works the same way except that it deletes the rule at that position rather than bumping everything down. And finally, if you just want to delete a rule, use -D. This also has a similar syntax but you can either use a number for the rule or type out all the options that you would if you created the rule. The number method is usually the optimal choice. There are two more simple options to learn though. -L lists all the rules set so far. This is obviously helpful when you forget where you're at. AND -F flushes a certain chain. (It removes all of the rules on the chain.) If you don't specify a chain, it will basically flush everything.
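Here are those management options in practice, using the earlier rule as an example:

 iptables -I INPUT 1 -s 200.200.200.1 -j DROP    # insert at position 1, the top of the chain
 iptables -R INPUT 1 -s 200.200.200.2 -j DROP    # replace the rule at position 1
 iptables -L INPUT --line-numbers                # list the INPUT chain with rule numbers
 iptables -D INPUT 1                             # delete rule number 1
 iptables -F INPUT                               # flush (empty) the INPUT chain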

Well let's get a bit more advanced. We know that these packets use a certain protocol, and if that protocol is TCP, then it also uses a certain port. Now you might be compelled to just close all ports to incoming traffic, but remember, after your computer talks to another computer, that computer must talk back. If you close all of your incoming ports, you'll essentially render your connection useless. And for most non-service programs, you can't predict which port they're going to be communicating on. But there's still a way. Whenever two computers are talking over a TCP connection, that connection must first be initialized. This is the job of a SYN packet. A SYN packet simply tells the other computer that it's ready to talk. Now only the computer requesting the service sends a SYN packet. So if you only block incoming SYN packets, it stops other computers from opening services on your computer but doesn't stop you from communicating with them. It roughly makes your computer ignore anything that it didn't speak to first. It's mean but it gets the job done. Well the option for this is --syn after you've specified the TCP protocol. So to make a rule that would block all incoming connections on only the internet:

iptables -A INPUT -i ppp0 -p tcp --syn -j DROP

That's a likely rule that you'll be using unless you have a web service running. If you want to leave one port open, for example 80 (HTTP), there's a simple way to do this too. As with many programming languages, an exclamation mark means not. For instance, if you wanted to block all SYN packets on all ports except 80, I believe it would look something like this:


iptables -A INPUT -i ppp0 -p tcp --syn --destination-port ! 80 -j DROP
 
It's somewhat complicated but it's not so hard to comprehend. There's one last thing I'd like to cover and that's changing the policy for a chain. The chains INPUT and OUTPUT are usually set to ACCEPT by default, and FORWARD is commonly set to DROP. Well if you want to use this computer as a router, you would probably want to set the FORWARD policy to ACCEPT. How do we do this you ask? Well it's really very simple. All you have to do is use the -P option. Just follow it by the chain name and the new policy and you have it made. To change the FORWARD chain to an ACCEPT policy, we'd do this:

iptables -P FORWARD ACCEPT

Nothing to it, huh? This is really just the basics of iptables. It should help you set up a limited firewall but there's still a lot more that I couldn't talk about. You can look at the man page "man iptables" to learn more of the options (or refresh your memory when you forget). You can find more advanced documents if you want to learn some of the more advanced features of iptables. At the time of this writing, iptables documents are somewhat rare because the technology is new but they should be springing up soon. Good luck.

Wednesday, August 13, 2008

AJAX: Is your application secure enough?

Introduction

We see it all around us, recently. Web applications get niftier by the day by utilising the various new techniques recently introduced in a few web-browsers, like I.E. and Firefox. One of those new techniques involves using Javascript. More specifically, the XmlHttpRequest-class, or object.

Webmail applications use it to quickly update the list of messages in your Inbox, while other applications use the technology to suggest various search-queries in real-time. All this without reloading the main, sometimes image- and banner- ridden, page. (That said, it will most probably be used by some of those ads as well.)

Before we go into possible weaknesses and things to keep in mind when implementing an AJAX enabled application, first a brief description of how this technology works.

The Basics

Asynchronous Javascript and XML, dubbed AJAX, basically works like this. Let me illustrate with an example: an email application. You are looking at your Inbox and want to delete a message. Normally, in plain HTML applications, the POST or GET request would perform the action and then re-locate to the Inbox, effectively reloading it.

With the XmlHttpRequest-object, however, this request can be done while the main page is still being shown.

In the background a call is made which performs the actual action on the server, and optionally responds with new data. (Note that this request can only be made to the web-site that the script is hosted on: it would leave massive DoS possibilities if I could create an HTML page that, using Javascript, could request thousands of concurrent web-pages from a web-site. You can guess what would happen if a lot of people visited that page.)

The Question

Some web-enabled applications, such as for email, do have pretty destructive functionality that could possibly be abused. The question is — will the average AJAX-enabled web-application be able to tell the difference between a real and a faked XmlHttpRequest?

Do you know if your recently developed AJAX-enabled or enhanced application is able to do this? And if so — does it do this adequately?

Do you even check referrers or some trivial token such as the user-agent? Chances are you do not even know. Chances are that other people, by now, do.

To be sure that the system you have implemented — or one you are interested in using — is properly secured, thus trustworthy, one has to ’sniff around’.

Incidentally, the first time I discovered such a thing was in a lame preview function for a lame ringtone-site. Basically, the XmlHttpRequest URI’s ‘len’ parameter specified the length of the preview to generate and it seemed like it was loading the original file. Entering this URI in a browser (well, actually, ‘curl‘), specifying a very large value, one could easily grab all the files.

This is a fatal mistake: implementing an AJAX interface that accepts GET requests. GET requests are the easiest to fake. More on this later.

The question is — can we perform an action while somebody is logged in somewhere else. It is basically XSS/CSS (Cross Site Scripting) but then again, it isn’t.

My Prediction

Some popular applications I checked are hardened in such a way that they use some form of random sequence numbering: the server tells it, encoded, what the application should use as a sequence number when sending the next command. This is mostly obscured by Javascript and a pain in the ass to dissect — but not impossible.

And as you may have already noted: if there is improper authentication on the location called by the XmlHttpRequest-object, this leaves an opening for malicious use. This is exactly where we can expect weaknesses and holes to arise. There should be proper authentication in place. At all times.

As all these systems are built by men, chances are this isn’t done properly.

HTTP traffic analysis

Analysing HTTP traffic with tools like ethereal (yeah, I like GUIs, so sue me) surely comes in handy to figure out whether applications you use are actually safe from exploitation. This application allows one to easily filter and follow TCP streams so one can properly analyse what is happening there.

If you want to investigate your own application, the use of a sniffer isn’t even necessary but I would suggest you let a colleague that hasn’t implemented it, play around with your app and a sniffer in an attempt to ‘break’ through it.

Cookies

Cookies are our friend when it comes to exploiting, I mean researching any vulnerabilities in AJAX implementations.

If the XmlHttp-interface is merely protected by cookies, exploiting this is all the easier: the moment you get the browser to make a request to that website, your browser is happily sending any cookies along with it.

Back to my earlier remark about GET requests being a pretty lame implementation: from a developer's point of view, I can imagine one temporarily accepts GET requests to be able to easily debug stuff without having to constantly enter irritating HTTP data using telnet. But when you are done with it, you really should disable it immediately!

I could shove a GET request hidden in an image link. Sure the browser doesn’t understand the returned data which might not even be an image. But my browser does happily send any authenticating cookies, and the web-application on the other end will have performed some operation.
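To see how little is needed, here is roughly what such a forged request looks like from the command line. The host, path, parameter and cookie are all hypothetical:

 $ curl -b 'SESSIONID=victims-session-cookie' \
        'http://webmail.example.com/ajax/deleteMessage?id=1234'

If the server accepts this purely on the strength of the cookie, any page that can get the victim's browser to fetch that URL (an image tag, for instance) can trigger the same operation.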

Using GET is a major mistake to make. POST is a lot better, as it is harder to fake. The XmlHttpRequest can easily do a POST. But I cannot get a script, for instance one I could have embedded in this article, to do a POST request to another website because of the earlier noted restriction: you can only make requests to the same web-site the web-application is on.

One can modify one's own browser to make requests to other websites, but it would be hard to get the browser on somebody else's machine to do this.

Or would it?

If proper authentication, or rather credential verification, still sucks, I can still set up a web-site that does the exact POST method that the AJAX interface expects. That will be accepted and the operation will be performed. Incidentally I have found a popular site that, so far, does not seem to have proper checks in place. More on that one in another article.

Merely using cookies is again a bad idea.

One should also check the User-Agent and possibly a Referrer (the XmlHttpRequest nicely allows one to send any additional headers so you could just put some other token in the Referrer-field). Sure these can still be faked — but it may fend off some investigating skiddiots.

Sequence Numbering, kinda…

A possible way of securing one’s application is using some form of ’sequence-numbering’-like scheme.

Roughly, this boils down to this.

One should let the page, or some included Javascript generated on the server side, contain a token; the client performs some operation on that token, and the result is used in every consecutive request to the webserver. The webserver should not allow any request with another 'sequence number', so to speak.

The servers’ ‘challenge-string‘ should be as random as possible in order to make it non-predictable: if one could guess what the next sequence number will be, it is again wide open for abuse.

There are probably other ways of hardening interfaces like this, but they all basically come down to getting some fixed information from the webserver as far away from the end-user's reach as possible.

You can make this scheme as complex as you want, but it can be implemented very simply as well.

For instance, when I, as a logged-in user of a web-enabled email application, get assigned a Session-ID and such, the page that my browser receives includes a variable iSeq which contains a non-predictable number. When I click "Delete This Message", this number is transmitted with the rest of the parameters. The server can then respond with new data and, hidden in a cookie or another HTTP header field, pass the next sequence number, which is the only one the web-server will accept in the following request.

As far as I know, this seems the only way of securing it. It can still be abused if spyware sniffs HTTP communications, which spyware has recently started doing.

Javascript Insertion

On a side note I wanted to throw in a remark on Javascript Insertion. This is an old security violation and not really restricted to AJAX, and not an attack on AJAX. Rather, it is an attack utilising the XmlHttpRequest object for malice.

If I were able to insert Javascript into the web-application I am currently looking at in my other browser window, I would be able to easily delete any post the site allows me to delete. Now that doesn't seem all that destructive as it only affects that user? Wrong: any user visiting will have their own posts deleted. Ouch.

Javascript insertion has been a nasty one for years and it still is when people throw their home-brew stuff into production.

On a weak implemented forum or web-journal, one could even post new messages — including the Javascript so that any visitor — with the proper permission — would re-post the message keeping the flood of spam coming.

These technologies keep developing, and lazy website developers do not update their websites to keep up with the changes.

The recent ‘AJAX enhancements’ that some sites got recently might have been improperly implemented. This year might be a good time to check all those old web-applications for any possible Javascript insertion tricks.

If the cookies getting caught didn't bother you, the sudden deletion of random items and/or public embarrassment might be enough to entice you to verify your code.

Tuesday, August 12, 2008

How To Use Proxy Server To Access Internet at Shell Prompt With http_proxy Variable

Q. I'm behind a squid proxy server. How do I access internet via proxy server when I use wget, lynx and other utilities from a shell prompt?

A. Linux / UNIX has an environment variable called http_proxy. It allows you to connect a text-based session / application via the proxy server. All you need are the proxy server's IP address and port. This variable is used by almost all utilities, such as elinks, lynx, wget, curl and others.

Set the http_proxy shell variable

Type one of the following commands to set the proxy server:
$ export http_proxy=http://server-ip:port/
$ export http_proxy=http://127.0.0.1:3128/
$ export http_proxy=http://proxy-server.mycorp.com:3128/


How do I setup proxy variable for all users?
To setup the proxy environment variable as a global variable:

  • Open the /etc/profile file: # vi /etc/profile
  • Add the following line: export http_proxy=http://proxy-server.mycorp.com:3128/
  • Save and close the file.


How do I use password protected proxy server? 
You can simply use wget as follows:
  • $ wget --proxy-user=USERNAME --proxy-password=PASSWORD http://path.to.domain.com/some.html
  • Lynx has following syntax: $ lynx -pauth=USER:PASSWORD http://domain.com/path/html.file
  • Curl has following syntax: $ curl --proxy-user user:password http://url.com/
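Alternatively, many of these utilities (wget and curl among them) will also honor credentials embedded directly in the http_proxy variable, which saves passing them on every command line:

 $ export http_proxy=http://USERNAME:PASSWORD@proxy-server.mycorp.com:3128/

Keep in mind that the password is then visible in your environment and shell history.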

How to: Extract files from ISO CD images in Linux

Under many situations you may need to get a single file or many files from a Linux ISO image. You can mount ISO images via the loop device using the mount command. First, log in as the root user.

Extract file(s) under Linux

Let us assume that your ISO image is named disk1.iso.

Step #1: Create a directory /mnt/iso:

# mkdir /mnt/iso

Step #2: Mount the ISO image on /mnt/iso:

# mount -o loop disk1.iso /mnt/iso

Step #3: Extract a file. Now you can easily copy a file called file.txt from the ISO disk image to the /tmp directory:

# cd /mnt/iso
# cp file.txt /tmp

Step #4: Copy foo.rpm from the ISO disk image:

# cd /mnt/iso/RedHat/RPMS
# cp foo.rpm /tmp

Extract file(s) under Windows XP or Vista

Windows does not have the built-in capability that Linux provides to extract files from an ISO. Luckily, many third-party programs exist; my favorite is WinImage (http://www.winimage.com/). Download the trial version (I'm sure you will want to register this tiny utility later):

1) Install the WinImage software
2) Double-click on the Linux ISO file
3) Select the desired file and hit CTRL+X (or select Extract from the Image menu)

For more information read the man pages: man cp, man mv, man rpm, man mount, man mkdir
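When you have finished copying files, unmount the image and, if you like, remove the mount point:

 # umount /mnt/iso
 # rmdir /mnt/iso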

Friday, August 8, 2008

SSH Public key based authentication - Howto

This howto covers generating and using ssh keys for automated:

a) Logins

b) Backups

c) Running commands from the shell, and so on

Task: Generating ssh keys

1) Log on to your workstation (for example, log on to the workstation called admin.fbsd.nixcraft.org as the user vivek). Please refer to the following sample setup; you will be logged in on the FreeBSD workstation.

2) Create the Cryptographic Key on FreeBSD workstation, enter:

$ ssh-keygen -t rsa

Assign a passphrase (press the [enter] key twice if you don't want a passphrase). This creates two files in the ~/.ssh directory:

  • ~/.ssh/id_rsa : identification (private) key
  • ~/.ssh/id_rsa.pub : public key

3) Use scp to copy id_rsa.pub (the public key) to the rh9linux.nixcraft.org server as the authorized_keys2 file; this is known as installing the public key on the server.

$ scp .ssh/id_rsa.pub vivek@rh9linux.nixcraft.org:.ssh/authorized_keys2
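
If the ssh-copy-id helper is available on your workstation, it can install the key for you. Note that it appends to ~/.ssh/authorized_keys (which modern OpenSSH servers also read) rather than creating authorized_keys2, so treat this as an alternative sketch rather than an exact replacement for the scp step:

$ ssh-copy-id -i ~/.ssh/id_rsa.pub vivek@rh9linux.nixcraft.org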

4) From FreeBSD workstation login to server:

$ ssh rh9linux.nixcraft.org

5) Changing the pass-phrase on workstation (if needed):

$ ssh-keygen -p

6) Use ssh-agent to avoid continual passphrase typing. At the FreeBSD workstation, type:

$ ssh-agent $BASH
$ ssh-add

Type your pass-phrase

From here on, connecting to the server will not ask for the passphrase. The above two commands can be added to ~/.bash_profile so that the agent is set up as soon as you log in to the workstation (see the sketch below).
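
A minimal sketch of what that ~/.bash_profile addition could look like; the guard around SSH_AUTH_SOCK is my own assumption, added to avoid starting a second agent if one is already running:

# start ssh-agent and load the default key once per login
if [ -z "$SSH_AUTH_SOCK" ]; then
    eval "$(ssh-agent -s)"
    ssh-add
fi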

7) Deleting the keys held by ssh-agent

a) To delete all keys

$ ssh-add -D 

b) To delete a specific key

$ ssh-add -d key

c) To list keys

$ ssh-add -l

Thursday, August 7, 2008

Lshw (Hardware Lister)

Lshw (Hardware Lister) is a small tool that provides detailed information on the hardware configuration of a machine. It can report the exact memory configuration, firmware version, mainboard configuration, CPU version and speed, cache configuration, bus speed, and so on, on DMI-capable x86 or EFI (IA-64) systems and on some PowerPC machines. Information can be output in plain text, XML or HTML. lshw currently supports DMI (x86 and EFI only), OpenFirmware device tree (PowerPC only), PCI/AGP, ISA PnP (x86), CPUID (x86), IDE/ATA/ATAPI, PCMCIA (only tested on x86), USB and SCSI. You can download the package from the LSHW homepage, or you can install it directly with yum: yum install lshw
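
A few typical invocations, run as root (these options are documented in the lshw man page; the exact output depends on your system):

# lshw -short
# lshw -class memory
# lshw -html > hardware.html

The -short option prints a condensed device tree, -class limits the report to one device class, and -html writes the full report as an HTML page.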

Wednesday, August 6, 2008

C# on Linux (mcs to compile and mono to execute the exe)

  • Write the following program in a vi editor and save it in a file named HelloInteractive.cs.
using System;

class InteractiveWelcome
{
    public static void Main()
    {
        Console.Write("What is your name?: ");
        Console.Write("Hello, {0}! ", Console.ReadLine());
        Console.WriteLine("Welcome to the C# Station Tutorial!");
        Console.ReadLine();
    }
}
  • Compile the above program using mcs (see the commands below)
  • Run the resulting executable with mono
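
Assuming the Mono toolchain is installed, the compile and run steps would look roughly like this (mcs writes HelloInteractive.exe next to the source file by default):

$ mcs HelloInteractive.cs
$ mono HelloInteractive.exe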

Tuesday, August 5, 2008

Tags so that the browser doesn't load a web page from cache

<HTML>
<HEAD>
<TITLE>---</TITLE>
<META HTTP-EQUIV="Pragma" CONTENT="no-cache">
<META HTTP-EQUIV="Expires" CONTENT="-1">
</HEAD>
<BODY>

Text in the Browser Window

</BODY>
<HEAD>
<META HTTP-EQUIV="Pragma" CONTENT="no-cache">
<META HTTP-EQUIV="Expires" CONTENT="-1">
</HEAD>
</HTML>
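
These meta tags are only honored by some browsers; caching headers sent by the web server itself (such as Cache-Control, Pragma and Expires) are generally more reliable. To see which headers a server actually returns, you could make a HEAD request with curl; the URL below is just a placeholder:

$ curl -sI http://www.example.com/page.html | grep -i -E 'cache-control|pragma|expires'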


Howto Linux / UNIX setup SSH with DSA public key authentication (password less login)

Q. How do you set up SSH with DSA public key authentication? I have a Linux laptop called tom and a remote Linux server called jerry. How do I set up DSA-based authentication so I don't have to type a password?

A. DSA public key authentication can only be established on a per-system / per-user basis, i.e. it is not system-wide. You will be setting up ssh with DSA public key authentication for SSH version 2 on two machines:

Machine #1: your laptop called tom
Machine #2: your remote server called jerry

Command to type on your laptop/desktop (local computer)

First login to local computer called tom and type the following command.

Step #1: Generate DSA Key Pair

Use the ssh-keygen command as follows:
$ ssh-keygen -t dsa
Output:

Enter file in which to save the key (/home/vivek/.ssh/id_dsa):  Press [Enter] key
Enter passphrase (empty for no passphrase): myPassword
Enter same passphrase again: myPassword
Your identification has been saved in /home/vivek/.ssh/id_dsa.
Your public key has been saved in /home/vivek/.ssh/id_dsa.pub.
The key fingerprint is:
04:be:15:ca:1d:0a:1e:e2:a7:e5:de:98:4f:b1:a6:01 vivek@vivek-desktop

Caution:
a) Please enter a passphrase different from your account password and confirm the same.
b) The public key is written to /home/you/.ssh/id_dsa.pub.
c) The private key is written to /home/you/.ssh/id_dsa.
d) It is important you never-ever give out your private key.

Step #2: Set directory permission

Next, make sure you have the correct permissions on the .ssh directory:
$ cd
$ chmod 755 .ssh

Step #3: Copy public key

Now copy the file ~/.ssh/id_dsa.pub from Machine #1 (tom) to the remote server jerry as ~/.ssh/authorized_keys:
$ scp ~/.ssh/id_dsa.pub user@jerry:.ssh/authorized_keys

Command to type on your remote server called jerry

Log in to your remote server and make sure the permissions are set correctly:
$ chmod 600 ~/.ssh/authorized_keys

Task: How do I login from client to server with DSA key?

Use scp or ssh as follows from your local computer:
$ ssh user@jerry
$ ssh user@remote-server.com
$ scp file user@jerry:/tmp

You will still be asked for the passphrase of the DSA key file each time you connect to the remote server jerry, unless you did not enter a passphrase when generating the DSA key pair, or you use ssh-agent as described below.

Task: How do I log in from client to server with the DSA key but without typing a passphrase, i.e. a password-less login?

Type the following commands at the shell prompt:
$ exec /usr/bin/ssh-agent $SHELL
$ ssh-add
Output:

Enter passphrase for /home/vivek/.ssh/id_dsa: myPassword
Identity added: /home/vivek/.ssh/id_dsa (/home/vivek/.ssh/id_dsa)

Type your passphrase once. Now you should not be prompted for a password whenever you use the ssh, scp, or sftp command.

If you are using a GUI such as GNOME, use the command:
$ ssh-askpass
OR
$ /usr/lib/openssh/gnome-ssh-askpass

To save your passphrase during your GNOME session under Debian / Ubuntu, do as follows:
a) Click on System
b) Select Preferences
c) Select Session
d) Click on New
e) Enter "OpenSSH Password Management" in the Name text area
f) Enter /usr/lib/openssh/gnome-ssh-askpass in the Command text area
g) Click on Close to save the changes
h) Log out and then log back into GNOME
After GNOME is started, a dialog box will appear prompting you for your passphrase. Enter the passphrase requested. From this point on, you should not be prompted for a password by ssh, scp, or sftp.

Friday, August 1, 2008

Static and Dynamic Routers

For routing between routers to work efficiently in an internetwork, routers must have knowledge of other network IDs or be configured with a default route. On large internetworks, the routing tables must be maintained so that the traffic always travels along optimal paths. How the routing tables are maintained defines the distinction between static and dynamic routing.

Static Routing

A router with manually configured routing tables is known as a static router. A network administrator, with knowledge of the internetwork topology, manually builds and updates the routing table, programming all routes in the routing table. Static routers can work well for small internetworks but do not scale well to large or dynamically changing internetworks due to their manual administration.

Static routers are not fault tolerant. The lifetime of a manually configured static route is infinite and, therefore, static routers do not sense and recover from downed routers or downed links.

A good example of a static router is a multihomed computer running Windows 2000 (a computer with multiple network interface cards). Creating a static IP router with Windows 2000 is as simple as installing multiple network interface cards, configuring TCP/IP, and enabling IP routing.
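
Since this series is about Linux, it is worth noting that a Linux box can be turned into a simple static router in much the same way. A sketch, where the 192.168.x.x networks and the gateway address are assumptions for illustration:

# echo 1 > /proc/sys/net/ipv4/ip_forward
# route add -net 192.168.2.0 netmask 255.255.255.0 gw 192.168.1.254

The first command enables IP forwarding between the machine's interfaces; the second adds a manual (static) route to a remote network via a neighbouring router at 192.168.1.254.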

Dynamic Routing

A router with dynamically configured routing tables is known as a dynamic router. Dynamic routing consists of routing tables that are built and maintained automatically through an ongoing communication between routers. This communication is facilitated by a routing protocol, a series of periodic or on-demand messages containing routing information that is exchanged between routers. Except for their initial configuration, dynamic routers require little ongoing maintenance, and therefore can scale to larger internetworks.

Dynamic routing is fault tolerant. Dynamic routes learned from other routers have a finite lifetime. If a router or link goes down, the routers sense the change in the internetwork topology through the expiration of the lifetime of the learned route in the routing table. This change can then be propagated to other routers so that all the routers on the internetwork become aware of the new internetwork topology.

The ability to scale and recover from internetwork faults makes dynamic routing the better choice for medium, large, and very large internetworks.

A good example of a dynamic router is a computer with Windows 2000 Server and the Routing and Remote Access Service running the Routing Information Protocol (RIP) and Open Shortest Path First (OSPF) routing protocols for IP and RIP for IPX.

The Two Broad Types Of Networking Equipment

There are two main types of networking equipment; Data Communications Equipment (DCE) which is intended to act as the primary communications path, and Data Terminal Equipment (DTE) which acts as the source or destination of the transmitted data.

Data Terminal Equipment

DTE devices were originally computer terminals located at remote offices or departments that were directly connected to modems. The terminals had no computing power and only functioned as a screen/keyboard combination for data processing.

Nowadays most PCs have their COM and Ethernet ports configured as if they were going to be connected to a modem or other type of purely networking-oriented equipment.

Data Communications Equipment

A DCE is also known as Data Circuit-Terminating Equipment and refers to such equipment as modems and other devices designed primarily to provide network access.

Using Straight-Through/Crossover Cables to Connect DTEs And DCEs

When a DCE is connected to a DTE, you will need a straight-through cable. DCEs connected to DCEs or DTEs connected to DTEs require crossover cables. This terminology is generally used with Ethernet cables.

The terminology can be different for cables used to connect serial ports together. When connecting a PC's COM port (DTE) to a modem (DCE) the straight-through cable is frequently called a modem cable. When connecting two PCs (DTE) together via their COM ports, the crossover cable is often referred to as a null modem cable.

Some manufacturers configure the Ethernet ports of their networking equipment to be either of the DTE or the DCE type, and other manufacturers have designed their equipment to flip automatically between the two types until it gets a good link. As you can see, confusion can arise when selecting a cable. If you fail to get a link light when connecting your Ethernet devices together, try using the other type of cable.

A straight-through Ethernet cable is easy to identify. Hold the connectors side by side, pointing in the same direction with the clips facing away from you. The color of the wire in position #1 on connector #1 should be the same as that of position #1 on connector #2. The same would go for positions #2 through #8, that is, the same color for corresponding wires on each end. A crossover cable has them mixed up. Table 2-3 provides some good rules of thumb.

Table 2-3: Cabling Rules of Thumb

Scenario Likely Cable Type
PC to PC Crossover
Hub to hub Crossover
Switch to switch Crossover
PC to modem Straight-Through
PC to hub Straight-Through
PC to switch Straight-Through

Network Interface Cards

Your network interface card is also frequently called a NIC. Currently, the most common types of NIC used in the home and office are Ethernet and wireless Ethernet cards.

The Meaning of the NIC Link Light

The link light signifies that the NIC has successfully detected a device on the other end of the cable. This indicates that you are using the correct type of cable and that the duplex has been negotiated correctly between the devices at both ends.

Duplex Explained

Full duplex data paths have the capability of allowing the simultaneous sending and receiving of data. Half duplex data paths can transmit in both directions too, but in only one direction at a time.
Full duplex uses separate pairs of wires for transmitting and receiving data so that incoming data flows don't interfere with outgoing data flows.
Half duplex uses the same pairs of wires for transmitting and receiving data. Devices that want to transmit information have to wait their turn until the "coast is clear" at which point they send the data. Error-detection and data-retransmission mechanisms ensure that the data reaches the destination correctly and are specifically designed to remedy data corruption caused when multiple devices start transmitting at the same time.
A good analogy for full duplex communications is the telephone, in which both parties can speak at the same time. Half duplex on the other hand is more like a walkie-talkie in which both parties have to wait until the other is finished before they can speak.
Data transfer speeds will be low and error levels will be high if you have a device at one end of a cable set to full duplex and a device at the other end of the cable set to half duplex.
Most modern network cards can autonegotiate duplex with the device on the other end of the wire. It is for this reason that duplex settings aren't usually a problem for Linux servers.
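
On Linux you can check, and if necessary force, the speed and duplex of a NIC with ethtool. A sketch, assuming the interface is called eth0:

# ethtool eth0
# ethtool -s eth0 speed 100 duplex full autoneg off

The first command shows the negotiated speed and duplex; the second forces 100 Mbit/s full duplex with autonegotiation turned off, which should only be done if the device at the other end of the cable is configured the same way.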

The MAC Address

The media access control (MAC) address can be equated to the serial number of the NIC. Every IP packet is sent out of your NIC wrapped inside an Ethernet frame that uses MAC addresses to direct traffic on your locally attached network.
MAC addresses therefore have significance only on the locally attached network. As the packet hops across the Internet, its source/destination IP address stays the same, but the MAC addresses are reassigned by each router on the way using a process called ARP.

How ARP Maps the MAC Address to Your IP Address

The Address Resolution Protocol (ARP) is used to map MAC addresses to network IP addresses. When a server needs to communicate with another server it does the following steps:
  1. The server first checks its routing table to see which router provides the next hop to the destination network.
  2. If there is a valid router, let's say with an IP address of 192.168.1.1, the server checks its ARP table to see whether it has the MAC address of the router's NIC. You could very loosely view this as the server trying to find the Ethernet serial number of the next hop router on the local network, thereby ensuring that the packet is sent to the correct device.
  3. If there is an ARP entry, the server sends the IP packet to its NIC and tells the NIC to encapsulate the packet in a frame destined for the MAC address of the router.
  4. If there is no ARP entry, the server issues an ARP request asking that router 192.168.1.1 respond with its MAC address so that the delivery can be made. When a reply is received, the packet is sent and the ARP table is subsequently updated with the new MAC address.
  5. As each router in the path receives the packet, it plucks the IP packet out of the Ethernet frame, leaving the MAC information behind. It then inspects the destination IP address in the packet and uses its routing table to determine the IP address of the next router on the path to this destination.
  6. The router then uses the "ARP-ing" process to get the MAC address of this next hop router. It then reencapsulates the packet in an Ethernet frame with the new MAC address and sends the frame to the next hop router. This relaying process continues until the packet reaches the target computer.
  7. If the target server is on the same network as the source server, a similar process occurs. The ARP table is queried. If no entry is available, an ARP request is made asking the target server for its MAC address. Once a reply is received, the packet is sent and the ARP table is subsequently updated with the new MAC address.
  8. The server will not send the data to its intended destination unless it has an entry in its ARP table for the next hop. If it doesn't, the application needing to communicate will issue a timeout or time exceeded error.
  9. As can be expected, the ARP table contains only the MAC addresses of devices on the locally connected network. ARP entries are not permanent and will be erased after a fixed period of time depending on the operating system used.
Chapter 3, "Linux Networking", which covers Linux network topics, shows how to see your ARP table and the MAC addresses of your server's NICs.
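
For a quick look in the meantime, most Linux systems will show the current ARP table with either of the following commands:

$ arp -n
$ ip neigh show

Both list the IP addresses of locally attached devices alongside the MAC addresses that have been learned for them.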

Common ARP Problems When Changing A NIC

You may experience connectivity problems if you change the MAC address assigned to an IP address. This can happen if you swap a bad NIC card in a server, or replace a bad server but have the new one retain the IP address of the old.
Routers typically save learned MAC-to-IP address map entries in a cache and won't refresh them unless a predefined period of time has elapsed. Changing the NIC while retaining the IP address can cause problems, as the router will continue to send frames onto the network with the correct target IP address but the old target MAC address. The server with the new NIC won't respond, as the frame's target MAC doesn't match its own.
This problem can be fixed in one of two ways. The first is to delete all the ARP entries in the router's cache. The second is to log into the server's console and ping its gateway; the router will detect the MAC-to-IP address change and readjust its ARP table.
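
From the server's console, the ping can be as simple as the sketch below. The arping command (from the iputils package, where available) can additionally send a gratuitous ARP announcing the server's own address, so treat the second line as an optional extra; the addresses and interface name are assumptions:

# ping -c 3 192.168.1.1
# arping -U -I eth0 -c 3 192.168.1.10

Either way, once the router sees the new MAC address for that IP address it will update its ARP cache and normal delivery resumes.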