Blog Archives

UNIX Flavors (Distributions)

Which operating system is best for web hosting?

It seemed a simple enough topic, or so this web hosting novice thought. So I went through countless sites in search of the answer and came up with a list of Web Hosting Operating Systems to choose from. Whereupon I concluded that there really wasn’t a system that would prove ‘best’ for all. It was, for the most part, simply a matter of needs and, of course, of preference, both from the web host’s and the web master’s points of view.

That takes care of that! Right? Well, not quite. I realized, in the course of my research, that from this list another ‘list’ begged to be made: one that seemed necessary if one were to make an informed choice when it comes to operating systems.

This list, of course, is that of the many Unix ‘flavors’ available in the market. Unless you’re an expert, or simply a fanatic, chances are the concept of Unix having ‘flavors’ came as a surprise. Who knew flavors could apply to things other than ice cream, or food for that matter?

So what exactly is a Unix flavor?

This blog defines it as an implementation of Unix, with each flavor designed to work with different types of hardware and having its own unique commands or features. The UGU site provides one of the more comprehensive lists of Unix flavors, but for those who don’t feel like going through all those links, below is an overview of the more popular ones.
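Incidentally, a quick way to see which flavor (and release) a given machine is running is the standard uname command, which every Unix implementation provides:

```shell
# Print the operating system name and release; a Linux box reports
# "Linux" plus its kernel version, while Solaris 10 reports "SunOS 5.10".
uname -sr
```

The same command works on every flavor listed below, which makes it handy when you inherit an unfamiliar server.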

Flavors that are available commercially (read: sold) include:

Solaris – Sun Microsystems’ implementation, available in several editions: Solaris OS for SPARC platforms, Solaris OS for x86 platforms, and Trusted Solaris for both SPARC and x86 platforms; the latest version is Solaris 10 OS

AIX – short for Advanced Interactive eXecutive; IBM’s implementation, the latest release of which is AIX 5L Version 5.2

SCO UnixWare and OpenServer – implementations derived from the original AT&T Unix® source code, acquired by the Santa Cruz Operation Inc. from Novell and later bought by Caldera Systems; the latest versions are UnixWare 7.1.3 and OpenServer 5.0.7

BSD/OS – the Berkeley Software Distribution (BSD) Unix implementation from Wind River; its latest version is the BSD/OS 5.1 Internet Server Edition

IRIX – the proprietary version of Unix from Silicon Graphics Inc.; the latest release of which is IRIX 6.5

HP-UX – short for Hewlett-Packard UniX; the latest version is the HP-UX 11i

Tru64 UNIX – the Unix operating environment for HP AlphaServer systems; Tru64 UNIX v5.1B-1 is the latest version

Mac OS – Mac operating system from Apple Computer Inc. having a Unix core; the latest version is the Mac OS X Panther

Flavors that are available for free include:

FreeBSD – derived from BSD, it is an advanced OS for x86-compatible, AMD64, Alpha, IA-64, PC-98 and UltraSPARC® architectures; the latest versions are FreeBSD 5.2.1 (New Technology Release) and FreeBSD 4.9 (Production Release)

NetBSD – a Unix-like OS derived from BSD and developed by The NetBSD Project; it is shipped under a BSD license and the latest release is NetBSD 1.6.2

OpenBSD – multi-platform 4.4BSD-based Unix-like OS from The OpenBSD project; its latest release is OpenBSD 3.4

Linux — a Unix-type OS originally created by Linus Torvalds, the source code of which is available freely and open for development under GNU General Public License; there are numerous Linux distributions available

A more detailed discussion of these flavors will be provided in future postings, so do come back soon.


Free in this case means that the software is free to use, but it does not necessarily mean that users won’t shell out money to get their own copies. Suppliers may charge a nominal fee for materials used to copy and distribute the software (e.g. CDs) and for shipping (if applicable).

BSD license, simply put, means that users are allowed to develop products based on NetBSD without the changes having to be made public.

Although Linux has traditionally been freely available, the ongoing case by SCO against IBM and the rest of the Linux community might change this. A more detailed posting will be made on this topic in the coming days.

5 Linux Network Monitoring Tools


Linux network monitoring tools work on all networks – Linux, BSD, Mac, Unix, and Windows.
Monitoring traffic on your network is only as important as the data and computers you want to protect. Understanding how to do basic network troubleshooting will save you both time and money. Every Linux operating system comes with a number of command line tools to help you diagnose a network problem. In addition, there are any number of open source tools available to help you track down pesky network issues.

Knowing a few simple commands and when to use them will help you get started as a network diagnostic technician. We’ll use Ubuntu 10.04 desktop as our test platform, although all of these work in other distros as well.

Good Old Ping

If you’re uncomfortable using the Linux command line from a terminal, you might as well stop reading at this point or at least skip to the other applications. In reality, there’s nothing to be afraid of when it comes to the Linux command line, especially when it comes to diagnosing a network problem. Most commands simply display information that can help you determine what’s happening. Some will require root permissions or at least the ability to issue the sudo command.

First and foremost is the ifconfig command. Typing this at a command prompt will display information about all known network devices, typically a wired Ethernet device (eth0), the lo or loopback connection, and a wireless Ethernet device (wlan0). For each device it shows the assigned IP address, the MAC address (HWaddr) and some statistics about the traffic. This should be your first command if you’re having network troubles, to see if you have a valid IP address and if you see any traffic counts or errors.

The ping command should be your second tool of choice to determine if your computer is communicating with the outside world. Issuing a ping command to a known external address will quickly show if you have connectivity or not. It will also show you the time it took for the ping command to complete. Typical ping times for a DSL-type connection should be somewhere around 50 ms.

After the first two you should probably use the route command. This will show a list of IP addresses including the Destination and Gateway addresses connected to each interface along with some additional information including a Flags column. This column will have the letter G on the line associated with your default gateway. You can use this address in a ping command to determine if your machine has connectivity with the gateway.
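Pulling the default gateway out of that output automatically takes only a small awk filter on the Flags column. The sample route table below is invented so the snippet runs anywhere; on a live system you would pipe `route -n` straight into awk instead:

```shell
# Sample `route -n` output (addresses invented for illustration).
route_output='Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.1.1     0.0.0.0         UG    0      0        0 eth0
192.168.1.0     0.0.0.0         255.255.255.0   U     1      0        0 eth0'

# Print the Gateway field of the first line whose Flags column
# contains G (the default gateway), then stop.
gw=$(printf '%s\n' "$route_output" | awk '$4 ~ /G/ {print $2; exit}')
echo "Default gateway: $gw"
```

On a real machine, $gw would hold your router’s address, which you can then hand to ping to test connectivity with the gateway.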


EtherApe

EtherApe is available for download from the Ubuntu Software Center. It uses GNOME and libpcap to present a graphical map of all network traffic seen by the selected interface. After installation you should see the EtherApe icon under the Applications / System Tools menu. When we ran it this way, it wasn’t able to open any of the network devices as this requires root access. We were able to get it to run from the command line using sudo as follows:

$ sudo etherape

Once you have the program running it should start displaying a graphical representation of the traffic seen on the default Ethernet interface. You can select a specific device if your computer has multiple Ethernet interfaces using the Capture / Interfaces menu. EtherApe also has the ability to view data from a saved pcap file and show traffic by protocol.


Nmap

Nmap is a widely used security scanner tool originally released in 1997. It uses a variety of special packets to probe a network for any number of purposes, including creating an IP map of addresses, determining the operating system of a specific target IP address and probing a range of IP ports at a specific address. One of the most basic uses is what’s called a ping sweep, a series of ping-style probes to determine which addresses have computers attached to them. This can be accomplished with the following command:

$ nmap -sP
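The sweep needs a target range as its argument; a home subnet such as 192.168.1.0/24 is typical (that subnet is an example here, so substitute your own, and note that later nmap releases renamed -sP to -sn). The loop below sketches in plain shell what the sweep actually does; nmap probes the whole range in parallel and is far faster:

```shell
# Probe a few example addresses on a hypothetical 192.168.1.x subnet;
# substitute addresses from your own network. Each host gets a single
# ping with a one-second timeout.
for i in 1 2 254; do
    host="192.168.1.$i"
    if ping -c 1 -W 1 "$host" > /dev/null 2>&1; then
        echo "$host is up"
    else
        echo "$host did not respond"
    fi
done
```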

There are a number of graphical applications available from the Ubuntu Software Center that use nmap as the engine and then display the results in a more user-friendly way. These include NmapSI4, which uses a Qt4 interface, and Zenmap.


Tcpdump

Capturing network traffic for further analysis is the primary function of tcpdump. Actually, the packet capturing is accomplished by libpcap, while the actual presentation and analysis are done by tcpdump. Raw Ethernet data is stored in the pcap file format for further examination. This same file format is used by other packet analysis tools such as Wireshark.
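To get a feel for how simple the pcap format is, this sketch writes just the fixed 24-byte global header by hand (the magic number 0xa1b2c3d4 in little-endian byte order, format version 2.4, a 65535-byte snapshot length and link type 1 for Ethernet) and reads the magic back; the /tmp path is arbitrary:

```shell
# Write a minimal pcap global header: magic, version 2.4, zero
# timezone offset and timestamp accuracy, snaplen 65535, linktype 1
# (Ethernet). Octal escapes keep the printf portable across shells.
printf '\324\303\262\241\002\000\004\000\000\000\000\000\000\000\000\000\377\377\000\000\001\000\000\000' > /tmp/empty.pcap

# The first four bytes should read back as the little-endian magic.
od -A n -t x1 -N 4 /tmp/empty.pcap
```

A pcap-aware tool such as Wireshark should open this file and simply report a capture containing zero packets.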

A typical tcpdump command to capture basic traffic would be:

$ sudo tcpdump -nS

The sudo is required to gain access to the default Ethernet device. This command will display basic information including time, source and destination addresses and packet type. It will continue displaying information in the terminal until you press control-C. Tcpdump is the best and fastest way to capture network traffic to a file. A typical command to accomplish this would be:

$ sudo tcpdump -s 0 -w pktfile.pcap


Wireshark

Wireshark, formerly known as Ethereal, has become the tool of choice for many, if not most, network professionals. (Ubuntu users will find it in the Ubuntu Software Center under the Internet tab.) As with some of the other tools, we had to launch Wireshark from the command line using sudo to get it to see the available Ethernet devices. Once launched you should see a list of available interfaces on the left-hand side of the main window. Selecting one of the available interfaces or the virtual interface that collects packets from all Ethernet devices will bring up the protocol display page.


Wireshark provides a wealth of information about the captured traffic along with tools to filter and display based on any number of criteria including source or destination address, protocol, or error status. The Wireshark homepage has links to video tutorials, white papers and sample data to help get you started in network sleuthing.

Linux is an ideal platform to learn network troubleshooting techniques. It offers a wide array of command line and GUI tools to analyze and visualize your network traffic.


Top 10 Linux Virtualization Software

Virtualization is the latest buzz word. With computers getting cheaper every day, you may wonder why you should care and why you should use virtualization. Virtualization is a broad term that refers to the abstraction of computer resources such as:

  • Platform Virtualization
  • Resource Virtualization
  • Storage Virtualization
  • Network Virtualization
  • Desktop Virtualization

Why should I use virtualization?

  • Consolidation – It means combining multiple software workloads on one computer system. You can run various virtual machines in order to save money and power (electricity).
  • Testing – You can test various configurations. You can create less resource-hungry, low-priority virtual machines (VMs). Often I test new Linux distros inside a VM. This is also good for students who wish to learn new operating systems, programming languages or databases without making any changes to their working environment. At my workplace I give developers virtual test machines for testing and debugging their software.
  • Security and Isolation – If a mail server or any other app gets cracked, only that VM will be under the attacker’s control. Isolation also means that misbehaving apps (e.g. ones with memory leaks) cannot bring down the whole server.

Open Source Linux Virtualization Software

  1. OpenVZ is an operating system-level virtualization technology based on the Linux kernel and operating system.
  2. Xen is a virtual machine monitor for 32- and 64-bit Intel/AMD (including IA-64) and PowerPC 970 architectures. It allows several guest operating systems to be executed on the same computer hardware concurrently. Xen is included with most popular Linux distributions such as Debian, Ubuntu, CentOS, RHEL, Fedora and many others.
  3. Kernel-based Virtual Machine (KVM) is a Linux kernel virtualization infrastructure. KVM currently supports native virtualization using Intel VT or AMD-V. A wide variety of guest operating systems work with KVM, including many flavours of Linux, BSD, Solaris and Windows. KVM is included with Debian, openSUSE and other Linux distributions.
  4. Linux-VServer is a virtual private server implementation done by adding operating system-level virtualization capabilities to the Linux kernel.
  5. VirtualBox is an x86 virtualization software package, developed by Sun Microsystems as part of its Sun xVM virtualization platform. Supported host operating systems include Linux, Mac OS X, OS/2 Warp, Windows XP or Vista, and Solaris, while supported guest operating systems include FreeBSD, Linux, OpenBSD, OS/2 Warp, Windows and Solaris.
  6. Bochs is a portable x86 and AMD64 PC emulator and debugger. Many guest operating systems can be run using the emulator including DOS, several versions of Microsoft Windows, BSDs, Linux, AmigaOS, Rhapsody and MorphOS. Bochs can run on many host operating systems, like Windows, Windows Mobile, Linux and Mac OS X.
  7. User Mode Linux (UML) was the first virtualization technology for Linux. User-mode Linux is generally considered to have lower performance than some competing technologies, such as Xen and OpenVZ. Future work in adding support for x86 virtualization to UML may reduce this disadvantage.
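As a quick check for KVM’s requirement in item 3 above, the processor flags in /proc/cpuinfo reveal whether Intel VT (vmx) or AMD-V (svm) is present on a Linux machine:

```shell
# Count lines in /proc/cpuinfo advertising hardware virtualization
# (vmx = Intel VT, svm = AMD-V); there is one flags line per CPU core.
# A result of 0 means KVM's native virtualization is unavailable here.
n=$(grep -c -E 'vmx|svm' /proc/cpuinfo || true)
echo "virtualization-capable flag lines: $n"
```

Whether the feature is also enabled in the BIOS is a separate question; a capable CPU can still have it switched off there.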

Proprietary Linux Virtualization Software

  1. VMware ESX Server and VMware Server – VMware Server (also known as GSX Server) is an entry-level server virtualization software. VMware ESX Server is an enterprise-level virtualization product providing data center virtualization. It can run various guest operating systems such as FreeBSD, Linux, Solaris, Windows and others.
  2. Commercial implementations of Xen are available with various features and support:
  • Citrix XenServer : XenServer is based on the open source Xen hypervisor, an exceptionally lean technology that delivers low overhead and near-native performance.
  • Oracle VM : Oracle VM is based on the open-source Xen hypervisor technology, supports both Windows and Linux guests and includes an integrated Web browser based management console. Oracle VM features a fully tested and certified Oracle Applications stack in an enterprise virtualization environment.
  • Sun xVM : The xVM Server uses a bare-metal hypervisor based on the open source Xen under a Solaris environment on x86-64 systems. On SPARC systems, xVM is based on Sun’s Logical Domains and Solaris. Sun plans to support Microsoft Windows (on x86-64 systems only), Linux, and Solaris as guest operating systems.

  3. Parallels Virtuozzo Containers – an operating system-level virtualization product designed for large-scale homogeneous server environments and data centers. Parallels Virtuozzo Containers is compatible with x86, x86-64 and IA-64 platforms. You can run various Linux distributions inside Parallels Virtuozzo Containers.

N.Y.S.E. Places Buy on Linux, Hold on Unix

Published: December 14, 2007

The New York Stock Exchange is investing heavily in x86-based Linux systems and blade servers as it builds out the NYSE Hybrid Market trading system that it launched last year. Flexibility and lower cost are among the goals. But one of the things that NYSE Euronext CIO Steve Rubinow says he most wants from the new computing architecture is technology independence.

“What we want is to be able to take advantage of technology advances when they happen,” Rubinow said. “We’re trying to be as independent of any technologies as we can be.”

The Hybrid Market system lets NYSE traders buy and sell stocks electronically or on the exchange’s trading floor. The NYSE has been turning to x86 technology to power the trading system, largely using servers from Hewlett-Packard Co., the two companies announced this week.

The NYSE has installed about 200 of HP’s ProLiant DL585 four-processor servers and 400 of its ProLiant BL685c blades, all running Linux and based on dual-core Opteron processors from Advanced Micro Devices Inc. In addition, the stock exchange is using HP’s Integrity NonStop servers, which are based on Intel Corp.’s Itanium processors and run the fault-tolerant NonStop OS operating system, as well as its OpenView management software.

Rubinow said that Linux is mature enough to meet his needs. The open-source operating system may not have all the polish of Unix technologies with 20-plus years of history behind them, “but it’s polished enough for us,” he said.

The NYSE’s shift toward Linux and x86-based hardware illustrates why consulting firm Gartner Inc. is predicting a slight decline in Unix server revenues over the next five years. In comparison, Gartner forecasts strong sales growth for both Windows and Linux servers.

Although Rubinow has the option of using HP-UX, HP’s version of Unix, he said that he’d prefer not to. “We don’t want to be closely aligned with proprietary Unix,” he said. “No offense to HP-UX, but we feel the same way about [IBM’s] AIX, and we feel the same way to some extent about Solaris.”

The NYSE still runs numerous Unix systems, especially ones with Solaris, which is Sun Microsystems Inc.’s Unix derivative. Rubinow acknowledged that Solaris has the ability to run on multiple hardware platforms, including x86-based systems from Sun server rivals such as HP. But he added that he thinks Linux “affords us a lot of flexibility.”

One technology that the NYSE isn’t adopting so eagerly is server virtualization, which comes with a system latency price that Rubinow said he can’t afford to pay. In a system that is processing hundreds of thousands of transactions per second, virtualization produces “a noticeable overhead” that can slow down throughput, according to Rubinow. “Virtualization is not a free technology from a latency perspective, so we don’t use it in the core of what we do,” he said.

Charles King, an analyst at Pund-IT Inc. in Hayward, Calif., believes there is a broader concern among IT managers about virtualization overhead and its impact on transaction processing. “It’s one of the reasons why even the staunchest advocates of x86 virtualization recommend extensive testing prior to moving systems into production,” King said.
