Networking Tool – traceroute

In this post I am going to talk about a tool used in network troubleshooting and analysis.

Traceroute

  1. It shows the entire path that a packet travels.
  2. It names the routers and devices along the packet's path.
  3. It reports the time taken to send data to and get a reply from each device on the path, i.e. the network latency to each hop.

How does it work?

Each IP packet that we send on the internet has a field called TTL (Time to Live). Despite the name, it is not measured in seconds; it is actually a hop count: the maximum number of hops a packet can travel through the internet before it is discarded.

A hop can be a computer, a router, or any other device between the source and the destination.

Without a TTL, a packet could travel endlessly from one router to another, forever searching for its destination. So the TTL value is set by the sender inside the IP packet.

If the destination has not been reached by the time the TTL reaches 0, the router holding the packet drops it and informs the original sender that the TTL value was exceeded and the packet could not be forwarded any further.

Let's say I need to reach the IP address 8.8.8.8, and my default TTL value is 30 hops. That means the packet can travel a maximum of 30 hops towards the destination before it is dropped.

But how do the routers in between determine that the TTL limit has been reached? Each router between the source and destination reduces the TTL value before sending the packet to the next router. So if I have a default TTL value of 30, my first router reduces it to 29 and then forwards the packet to the next router along the path.

The next router makes it 28 and sends it on, and so on. If a router receives a packet with a TTL of 1 (which means no further travel and no forwarding), the packet is discarded. But the router that discards the packet informs the original sender that the TTL value has been exceeded.

Hence, when a router sends an ICMP TTL exceeded message, the original sender learns that router's address.

Traceroute makes use of these TTL exceeded messages to find the routers along your path to the destination (because the exceeded message sent by a router contains its address).

How does traceroute use the "TTL exceeded" message to show us how the packets travel?

A TTL exceeded message is only sent by the router that receives a packet with a TTL of 1; the other routers between you and your receiver will not send one. So how does traceroute find the address of every router/hop in between, given that its main purpose is to identify exactly those hops?

The trick is to exploit this behaviour by purposely sending an IP packet with a TTL value of 1.

Let's try a traceroute to Google's public DNS server, 8.8.8.8, and see what happens.

When I issue the command "traceroute -n 8.8.8.8", my computer makes a UDP packet (the default behaviour, though we can change it to TCP or ICMP). This packet contains the following items:

  1. Source IP Address
  2. Destination IP Address
  3. An unused UDP destination port number, typically in the range 33434 to 33534.

Here is what happens in the background:

  1. A packet is made with my source address, the destination IP address 8.8.8.8, a destination port between 33434 and 33534, and a TTL value of 1.
  2. The packet reaches my gateway server. On receiving it, the gateway reduces the TTL by 1 (1 - 1 = 0), so the TTL becomes zero. The gateway therefore sends back a TTL time exceeded message whose header carries the gateway's own information.
  3. My traceroute program receives that TTL time exceeded message and learns the source address and other details of the first hop, which is my gateway server.
  4. Next, my traceroute program sends the same kind of UDP packet to 8.8.8.8, with a random destination port between 33434 and 33534, but this time with a TTL value of 2, because my gateway router will reduce it by 1 and then forward the packet to the next hop/router.
  5. The next hop after the gateway reduces the TTL value from 1 to 0 and sends back a TTL time exceeded message with its own information in the header.
  6. In this way, my traceroute program keeps sending packets with an ever-increasing TTL value.
  7. When a packet finally reaches the destination (the original receiver), the destination sends back an ICMP destination/port unreachable message, because we are sending the packets to an unused port number. That message tells the traceroute program that it has reached the destination and can stop sending further packets.

In fact, the traceroute program sends 3 packets for each TTL value, each with a different port number, so that it can measure the round-trip time (latency) to that hop several times and show it to us.
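To make this concrete, here is a rough Python sketch of the same idea (not the actual traceroute source): send UDP probes to a high port with an increasing TTL and read back the ICMP replies. It needs root privileges for the raw ICMP socket, and it skips checks a real traceroute performs, such as matching each ICMP reply to its own probe and distinguishing "time exceeded" from "port unreachable" by ICMP type.

import socket
import time

def traceroute(dest_name, max_hops=30, port=33434, timeout=2.0):
    dest_addr = socket.gethostbyname(dest_name)
    for ttl in range(1, max_hops + 1):
        # Raw socket to receive the ICMP "time exceeded" / "port unreachable" replies (needs root).
        recv_sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
        recv_sock.settimeout(timeout)
        # UDP socket for the probe, with the IP TTL set to the current hop count.
        send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        send_sock.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)

        start = time.time()
        send_sock.sendto(b"", (dest_addr, port))
        hop_addr = None
        try:
            data, addr = recv_sock.recvfrom(512)
            hop_addr = addr[0]
            elapsed_ms = (time.time() - start) * 1000
            print("%2d  %s  %.2f ms" % (ttl, hop_addr, elapsed_ms))
        except socket.timeout:
            print("%2d  *" % ttl)
        finally:
            send_sock.close()
            recv_sock.close()

        if hop_addr == dest_addr:
            break  # the reply came from the destination itself, so we are done

traceroute("8.8.8.8")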

So that is how the traceroute program works. I have summarized what I understood from reading about this utility on a few networking-related blogs.


Django Haystack – Searching for foreign key fields

Recently I faced an issue with searching across related models while working with django-haystack and the Whoosh backend.
For example, consider these simple models:
class UserProfile(models.Model):
    city = models.CharField(max_length=100)
    zip_code = models.PositiveIntegerField()

class UserProfessional(models.Model):
    userprofile = models.ForeignKey(UserProfile, related_name = 'user_professionals')
    company = models.CharField(max_length=200)
    bio_exp = models.CharField(max_length=300)

I wanted to search by zip_code and display company and bio_exp in the results. I spent some time researching Haystack's search indexes, and the folks on IRC #haystack helped me a lot in achieving this. Here is the resulting search_indexes.py:

import datetime
from haystack import indexes
from user_accounts.models import UserProfile, UserProfessional

class UserProfessionalIndex(indexes.SearchIndex, indexes.Indexable):
    text = indexes.CharField(document=True, use_template=True)
    userprofile = indexes.CharField(model_attr='userprofile')

    def get_model(self):
        return UserProfessional

In the above index, the text field has the argument use_template=True. This tells Haystack to build each object's search document from the template file below.

Here is the template file created under our search templates directory (note that object here is a UserProfessional instance, so the profile fields are reached through object.userprofile):

{{ object.userprofile.zip_code }}
{{ object.userprofile.city }}
{{ object.bio_exp }}

When we rebuild the index, these foreign key fields are included in the search document, so you can search by zip_code and display the company and bio_exp fields in the results.
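To check the result, you can query the index from a Django shell with Haystack's SearchQuerySet. This is a minimal sketch; the zip code value is made up, and it assumes the index has already been rebuilt:

from haystack.query import SearchQuerySet
from user_accounts.models import UserProfessional

# Search the indexed document text for a zip code (hypothetical value).
results = SearchQuerySet().models(UserProfessional).filter(content='560001')
for result in results:
    # result.object is the UserProfessional instance behind the search hit.
    print("%s - %s" % (result.object.company, result.object.bio_exp))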

Django Text Search with Haystack and Whoosh

Recently, I set up a text-based search engine using django-haystack and Whoosh. Here I would like to share the steps.

Haystack is a reusable app with pluggable backends (much like Django's database layer), so virtually all of the code you write ought to be portable between whichever search engines you choose.

First, I installed Haystack and Whoosh using the Python package manager pip:

sudo pip install Whoosh
sudo pip install django-haystack

Then, in the project's settings.py file, add haystack to the INSTALLED_APPS configuration:

INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.sites',
    # Added.
    'haystack',
    # Then your usual apps...
    'review',
]

Then add the Haystack-specific settings to settings.py:

import os

HAYSTACK_CONNECTIONS = {
    'default': {
        'ENGINE': 'haystack.backends.whoosh_backend.WhooshEngine',
        'PATH': os.path.join(os.path.dirname(__file__), 'whoosh_index'),
    },
}

And my actual models.py looks like this: http://pastebin.com/fSC5Zshq

Now, to build a SearchIndex, all that's necessary is to subclass both indexes.SearchIndex and indexes.Indexable, define the fields you want to store data with, and define a get_model method.

We'll create the following UserProfessionalIndex to correspond to our UserProfessional model. This code generally goes in a search_indexes.py file within the app it applies to, though that is not required; placing it there allows Haystack to pick it up automatically. The UserProfessionalIndex should look like:

import datetime
from haystack import indexes
from user_accounts.models import UserProfile, UserProfessional


class UserProfessionalIndex(indexes.SearchIndex, indexes.Indexable):
    text = indexes.CharField(document=True, use_template=True)

    def get_model(self):
        return UserProfessional

    def index_queryset(self, using=None):
        """Used when the entire index for this model is updated."""
        return self.get_model().objects.all()

Additionally, we’re providing use_template=True on the text field. This allows us to use a data template to build the document the search engine will index. You’ll need to create a new template inside your template directory called search/indexes/review/userprofessional_text.txt and place the following inside:

{{ object.company }}
{{ object.professional_type }}
{{ object.bio_exp }}

Setting Up The Views and Urls

Within your URLconf, put in the following line. This pulls in the default URL configuration for Haystack.

(r'^search/', include('haystack.urls')),
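For context, a minimal urls.py for this setup might look like the sketch below. The exact import style depends on your Django version (this uses the old url()/include() form), and the placeholder comment stands in for whatever patterns your project already has:

from django.conf.urls import include, url

urlpatterns = [
    # ... your existing URL patterns ...
    url(r'^search/', include('haystack.urls')),  # Haystack's default search view
]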

Search Template

Your search template (search/search.html by default) will likely be very simple. The following is enough to get going: http://pastebin.com/pB705iMW

Reindex
Simply run ./manage.py rebuild_index. You’ll get some totals of how many models were processed and placed in the index.

Complete!
Start the server. You can now visit the search section of your site, enter a search query, and receive search results back!

PXE Boot Server in Ubuntu 10.04 desktop

The Preboot eXecution Environment (PXE) is an environment for booting computers over the network interface, without local storage devices (CD-ROMs, USB drives, hard disks). See Wikipedia.

How does it work?

1) When the client box is powered on, the client's BIOS scans for devices and tries to load a boot loader from each device in the boot order sequence. Eventually it loads the PXE code from the network card.

2) The client's PXE firmware broadcasts a request for an IP address. The DHCP server replies with an IP address, the TFTP server's IP, and the name of the pxelinux.0 file (the PXE loader).

3) The PXE client requests pxelinux.0 over TFTP, loads it into RAM, and hands control to it.

pxelinux.0 then fetches its boot configuration from the TFTP server, trying a series of file names from most to least specific (client UUID, MAC address, shortened forms of the hex-encoded IP address) until it falls back to the default file (see the sketch after this list).

4) pxelinux.0 requests the kernel and RAM disk from the TFTP server and loads them into RAM.

5) The kernel loads the remaining parts from TFTP or NFS (Network File System).
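For reference, the lookup order pxelinux.0 uses for its configuration file (per the syslinux documentation) is: the client UUID if one is known, then "01-" followed by the MAC address with dashes, then the IP address as uppercase hex shortened one digit at a time, and finally "default". Here is a small Python sketch that lists the names a client would request from pxelinux.cfg/; the MAC and IP below are made-up examples:

def pxelinux_config_names(mac, ip, uuid=None):
    names = []
    if uuid:
        names.append(uuid.lower())
    # Hardware type 01 (Ethernet) plus the MAC address with dashes.
    names.append("01-" + mac.lower().replace(":", "-"))
    # IPv4 address as uppercase hex, shortened one digit at a time.
    hex_ip = "".join("%02X" % int(octet) for octet in ip.split("."))
    for length in range(len(hex_ip), 0, -1):
        names.append(hex_ip[:length])
    names.append("default")
    return names

# e.g. ['01-00-1a-2b-3c-4d-5e', 'C0A80A32', 'C0A80A3', ..., 'C', 'default']
print(pxelinux_config_names("00:1a:2b:3c:4d:5e", "192.168.10.50"))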

Server Side
These steps were tested on Ubuntu 10.04 LTS running on physical hardware. The target operating system is Parted Magic, which behaves like any other Linux distribution.

1) Install necessary DHCP & TFTP packages

sudo apt-get install dhcp3-server openbsd-inetd lftp tftpd-hpa

2) DHCP Setup

  • Edit /etc/default/dhcp3-server and set the Ethernet interface for the DHCP service:

INTERFACES="eth0"

  • Edit /etc/dhcp3/dhcpd.conf; my DHCP service configuration looks like this:

default-lease-time 600;
max-lease-time 7200;
subnet 192.168.10.0 netmask 255.255.255.0 {
    range 192.168.10.50 192.168.10.100;
    option subnet-mask 255.255.255.0;
    option routers 192.168.10.123;
    option broadcast-address 192.168.10.255;
    filename "pxelinux.0";
    next-server 192.168.10.123;
}

  • Set up a static IP for eth0: 192.168.10.123
  • Start service

sudo /etc/init.d/dhcp3-server restart

  • Check status

netstat -lu

Output

Proto Recv-Q Send-Q Local Address Foreign Address State

udp 0 0 *:bootps *:*

3) Setting up TFTP

  • Edit /etc/inetd.conf

tftp dgram udp wait root /usr/sbin/in.tftpd /usr/sbin/in.tftpd -s /var/lib/tftpboot

  • Enable boot service for inetd

sudo update-inetd --enable BOOT

  • Start service

sudo /etc/init.d/openbsd-inetd restart

sudo /etc/init.d/tftpd-hpa restart

  • Check status

netstat -lu

Output
Proto Recv-Q Send-Q Local Address Foreign Address State
udp 0 0 *:tftp *:*

4) Setting up PXE boot files

  • Download the Parted Magic distribution from here

Unzip pmagic-pxe-4.5.zip. Copy your local /usr/lib/syslinux/pxelinux.0 to /var/lib/tftpboot and arrange the files in the following structure:

/var/lib/tftpboot/
|---- pxelinux.0
|---- pxelinux.cfg/
        |---- default
|---- pmagic/
        |---- bzimage
        |---- initramfs

  • Edit /var/lib/tftpboot/pxelinux.cfg/default and change these paths accordingly: pmagic/bzimage and pmagic/initramfs

DEFAULT partmagic
LABEL partmagic
MENU LABEL PartMagic 4.5
KERNEL pmagic/bzimage
APPEND initrd=pmagic/initramfs edd=off noapic load_ramdisk=1 prompt_ramdisk=0 rw vga=791 sleep=10 loglevel=0 keymap=us livemedia

  • Set correct permissions (the files need to be readable by the TFTP server)

sudo chmod -R a+r /var/lib/tftpboot/

Client Part

Choose network boot using the F9 or F12 key (it depends on your machine) for a one-time network boot, or modify the BIOS configuration permanently to make network booting the first boot preference.

Demo 🙂

OpenStack in a VM – Setting Up DevStack IceHouse on Your Local Network behind a Proxy

I tried setting up DevStack IceHouse on my Ubuntu VM behind HP's proxy. Here are the steps.

1) Created a new Ubuntu 12.04 VM and configured the proxy.

2) git clone https://github.com/openstack-dev/devstack.git

3) If you are behind a proxy, set the proxy in the following places:

  • /etc/apt/apt.conf

Acquire::http::proxy "http://x.x.x.x:xxxx/";
Acquire::ftp::proxy "ftp://x.x.x.x:xxxx";
Acquire::https::proxy "https://x.x.x.x:xxxx";

  • Environment variables – set both the upper-case and lower-case variants

export http_proxy=x.x.x.x:xxxx
export https_proxy=x.x.x.x:xxxx
export HTTP_PROXY=x.x.x.x:xxxx
export HTTPS_PROXY=x.x.x.x:xxxx

  • git proxy

git config --global http.proxy x.x.x.x:xxxx

  • curl proxy

alias curl="curl -x http://x.x.x.x:xxxx"

4) Change the git base URL
The fix is to modify the stackrc file in the devstack folder so that repositories are cloned over https instead of the git protocol. Look for the GIT_BASE line and change it (you can also override GIT_BASE in your local.conf/localrc instead of editing stackrc).

Default setting in stackrc:

GIT_BASE=${GIT_BASE:-git://git.openstack.org}
Modified setting that should bypass git restrictions:

GIT_BASE=${GIT_BASE:-https://git.openstack.org}

5) The following commands will install IceHouse in less than 40 minutes:

cd devstack; ./stack.sh

How SSH Works and How to Use SSH Keys for Password-less Authentication

How does SSH work?

Here is a guide on how SSH works, with a focus on private/public key pairs.
Here are the steps happening behind the scenes:
1) I type ssh 10.0.0.1.
2) 10.0.0.1 sends its public key to me.
3) My computer sees that this key is not present in its trusted (known hosts) list.
4) My computer prompts me whether to add it to the trusted list, and I add it.
5) My computer uses 10.0.0.1's public key to encrypt the username and password, and includes its own public key with this transmission.
6) 10.0.0.1 receives the packet from my computer and uses its own private key to decrypt the information.
7) The server then uses my machine's public key to encrypt the message it sends back.
8) Finally, I get the "Successful login" message.

How to use SSH keys to connect without a password:

By default, ssh asks you for a password. If we use a public/private key pair when SSH-ing to the remote server, we can skip the password.

Here are the steps (this assumes my public key has already been copied into the server's ~/.ssh/authorized_keys):

1) Suppose I am trying to connect to the server 10.2.2.1.
2) 10.2.2.1's SSH server encrypts a short challenge message with my public key and sends it to my machine.
3) My SSH client decrypts the message with its private key and sends the response back to the server.
4) 10.2.2.1's SSH server verifies the reply, and if it matches, it grants access immediately.
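Once the key is in place, anything that speaks SSH can authenticate with it alone, not just the ssh command. Below is a small illustration using the third-party paramiko library; the host, username, and key path are made-up values:

import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()
# Automatically trust unknown host keys (fine for a demo, not for production).
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

# Authenticate with the private key instead of a password.
client.connect("10.2.2.1", username="myuser", key_filename="/home/myuser/.ssh/id_rsa")

stdin, stdout, stderr = client.exec_command("hostname")
print(stdout.read().decode())
client.close()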

For more info: http://docs.slackware.com/howtos:security:sshkeys