A computer is a tool that people use to achieve a goal, just like any other tool that we use, for example a hammer to knock in nails. A computer in its simplest form is a box full of switches. These switches can have two possible states, on or off, which is why a computer is known as a 'two state electronic device'. Most people assume that computers are intelligent, but this is not true. Computers are really thick; they can't do anything without being told to do so, and when a computer does something wrong it is usually not the computer that is at fault but either the person using it or the person who programmed the software.
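To make the 'two state' idea concrete, here is a small illustrative sketch (in Python, purely for explanation, not how hardware is actually built) showing how a row of on/off switches can stand for a number:

```python
# Each switch is one bit: off = 0, on = 1.
# A row of eight switches (one byte) can encode 2**8 = 256 distinct patterns.
switches = [1, 0, 1, 0, 1, 0, 1, 0]  # on, off, on, off, ...

# Read the row as a binary number, most significant switch first.
value = 0
for state in switches:
    value = value * 2 + state

print(value)                # 170
print(2 ** len(switches))   # 256
```

Everything a computer stores, from text to pictures, is ultimately encoded this way as patterns of switch states.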

Computer hardware has four major components: the CPU, input devices, output devices and memory.

The Central Processing Unit (CPU) in a computer is analogous to the brain in humans. It has two major parts: the ALU (Arithmetic and Logic Unit) and the CU (Control Unit). All arithmetic and logical operations are performed by the ALU. The control unit generates the control signals that govern the operations of the ALU and data transfer. The CPU executes all the instructions received from the user; all other devices exist to support the CPU. The processing speed of a CPU is measured in MIPS (million instructions per second) and its clock frequency in Hz (hertz).
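The relationship between clock frequency and MIPS can be illustrated with some simple arithmetic. The figures below (a 100 MHz clock and an average of 4 cycles per instruction) are hypothetical, chosen only to show the calculation:

```python
# Hypothetical CPU for illustration: 100 MHz clock, and an average of
# 4 clock cycles needed to complete one instruction.
clock_hz = 100_000_000          # 100 MHz
cycles_per_instruction = 4

instructions_per_second = clock_hz / cycles_per_instruction
mips = instructions_per_second / 1_000_000

print(mips)  # 25.0 - this CPU would be rated at roughly 25 MIPS
```

Real CPUs need different numbers of cycles for different instructions, so MIPS ratings are only rough averages.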

Input Devices
The computer would be of no use if it could not communicate with the external world. Thus a computer must have a system to receive data from the outside world and to communicate results back to it. Input devices are used for transferring user commands, choices or data to the computer.

Various input devices are:
The keyboard is one of the most common input devices, used to enter data and commands into the computer in alphanumeric form. Its layout is like that of a traditional typewriter.

The mouse is a palm-sized device that can be moved on a smooth surface to control the movement of the cursor on the display screen.

Scanners capture information and store it in graphic format for display on the screen. A scanner consists of two components: the first illuminates the page so that the optical image can be captured; the other converts the image into a digital format so that it can be stored.
Output devices
The output can normally be produced in two ways: either on a display unit/device or on paper. One of the most important peripherals of a computer is the graphic display device. A graphic display is made up of a series of dots called 'pixels' whose pattern produces the image.

The various categories of display devices and terminals are:

Cathode ray tube
The main components of a CRT terminal are the electron gun, the electron beam controlled by an electro-magnetic field and a phosphor coated display screen. Electron gun produces the electrons; electromagnetic field accelerates and focuses the electron beam; phosphor coated display screen translates the electron beam into visual information.

Liquid crystal displays
The major advantage of the LCD is its low energy consumption. It does not have color capability and the image quality is relatively poor, so it is used mainly in portable devices such as calculators and wristwatches.

Printers are used for producing hard copy, i.e., output on paper. There is a large variety of printing devices, which can be classified according to their print quality and printing speed.

Various types of printers are:
· Dot matrix printers.
· Line printers.
· Laser printers.
Plotters are used to produce graphical output on paper. They are mainly used for large drawings and engineering graphics.

The memory inside a computer is what the CPU uses as a work area. The more memory the computer has, the more it can do in a shorter time. When the computer uses up all its memory, sometimes it cannot do the task asked of it. In other cases, if it runs out of memory it will start to use space on the hard disk. This area on the hard disk is called a 'swapfile': the computer copies data that it is not using at that time onto the disk, making room for more data that it requires, then copies data to and from memory and the disk drive as needed. This has the effect of virtually giving the computer more memory than it really has, which is why it gets the name 'swapfile'. Unfortunately, if the computer uses the swapfile a lot it can severely reduce the speed at which a task is achieved, because real memory is much faster than the swapfile. You may notice while using your computer that the disk drive suddenly starts to come on and off; what is happening is that the computer is either writing data to the swapfile or cleaning up data from the swapfile that it no longer needs.
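The swapping idea can be sketched as a toy model (Python, purely illustrative; real operating systems manage memory in pages with far more sophisticated policies). RAM holds only a few items, and when it fills up, the oldest item is moved out to the 'swapfile' on disk:

```python
from collections import OrderedDict

RAM_SLOTS = 3                 # pretend the machine has room for 3 pages
ram = OrderedDict()           # fast "real" memory, oldest page first
disk_swapfile = {}            # slow storage the system spills into

def store(page, data):
    """Put a page in RAM, swapping the oldest page out to disk if RAM is full."""
    if page in ram:
        ram.move_to_end(page)                     # page is in use again
    elif len(ram) >= RAM_SLOTS:
        old_page, old_data = ram.popitem(last=False)  # evict the oldest page
        disk_swapfile[old_page] = old_data            # write it to the swapfile
    ram[page] = data

for n in range(5):            # try to store 5 pages in 3 slots
    store(n, f"data-{n}")

print(sorted(ram))            # [2, 3, 4] - the newest pages stay in RAM
print(sorted(disk_swapfile))  # [0, 1]    - the oldest were swapped out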

A computer system is considered to consist of four groups of memories. These are
· Internal processor memory
· Primary memory or main memory
· Secondary memory or auxiliary memory
· Cache memory

Internal processor memory
This consists of a small set of high-speed registers internal to the processor, used as temporary locations where actual processing is done.

Primary memory or Main memory
It is larger but slower than processor memory. This memory is directly accessed by the processor and is based on integrated circuits.

Secondary memory/Auxiliary memory/backing store
This memory is, in fact, the largest but slowest of all. It mainly stores system programs, other instructions, programs and data files. It is not directly addressed by the processor.

Cache memory
Cache memory is slower than internal processor memory but faster than main memory, and is stationed between the two. Its main purpose is to maximize utilization of the processing capability of the processor.

There are three main types of disk drives on most modern Computers.
They are as follows:

3.5in Floppy Disk
A 3.5 in floppy disk is known as 'removable media' because the disk can be removed easily and transported to another computer. The downside to a floppy disk is that its storage space is quite limited, usually 1.44 megabytes.

The Hard Disk
The hard disk is known as 'permanent media'. The hard drive lives inside your computer and is not meant to be removed and placed in another computer. The hard drive has many advantages over its floppy counterpart: the speed at which it can save and read data, and the size of its storage space. The average hard disk today is around 1.2 gigabytes, almost 1000 times greater than a floppy.

A CD-ROM is known as 'removable media' just like the floppy disk. However, a CD-ROM disc can store up to 650 megabytes, as opposed to 1.44 megabytes on a single floppy disk. The CD-ROM is much faster to use than a floppy disk, and it has many more uses: today you can get whole encyclopedias, interactive games and videos on CD-ROM. The only downside is that on standard computer systems the CD-ROM is 'read only'. This means that you cannot store files on the CD-ROM, but you can use the files on it or copy them to another type of media such as your hard disk. However, it is now possible to purchase, as an extra, a CD-ROM drive that can write to the disc; these are known as 'CD-ROM writers'. Depending on the type of drive you buy, you may only write to the CD-ROM once, or if you buy the newer 'CD-ROM re-writer' you can write to it in excess of 1000 times.
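The capacity figures above are easy to compare directly. A quick calculation (assuming 1 gigabyte = 1000 megabytes, as disk makers usually count):

```python
floppy_mb = 1.44          # 3.5 in floppy disk
cd_rom_mb = 650           # CD-ROM disc
hard_disk_mb = 1.2 * 1000 # a 1.2 gigabyte hard disk, in megabytes

cd_vs_floppy = round(cd_rom_mb / floppy_mb)
hdd_vs_floppy = round(hard_disk_mb / floppy_mb)

print(cd_vs_floppy)   # 451 - one CD holds about 451 floppies' worth of data
print(hdd_vs_floppy)  # 833 - the hard disk holds about 833 floppies' worth
```

So a single CD-ROM replaces hundreds of floppy disks, which is why encyclopedias and games moved to CD so quickly.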

When you switch your computer on, it checks all the components connected to it to see if they are working correctly. As we said before, computers are thick and don't do anything until they are told to. So how does the computer know when to check itself? The computer contains a microchip that tells it how to talk to the devices connected to it; this microchip also tells the computer to check all the devices when it is first started up. This microchip is called the BIOS, which stands for Basic Input Output System.
The BIOS checks the following components when the computer is first started up:


Memory
The BIOS checks to see if all the memory in the computer is working correctly. On some systems you can see this on the monitor: it looks like the computer is counting, and that is exactly what it is doing. It is counting the number of bytes (the smallest addressable unit of a computer's memory) to see if they all add up correctly.

Disk Drives
The BIOS then checks to see if a floppy and a hard disk drive are connected. If they are working, it checks the floppy drive to see if a disk is inserted. If there is, then on some systems the computer tries to 'boot' from the floppy disk drive. When the computer tries to boot from the floppy disk, it expects to find a disk with system files on it (files that tell the computer how to load the operating system). If it does not find these files on the disk, it gives an error message like 'Non System Disk Inserted, Replace and Strike a Key'. This tells you either to replace the disk with one that has the system files on it, or simply to remove the disk and press a key. If you remove the disk and press a key, the computer then checks your hard disk for the system files. If it cannot find them on the hard disk, the only option left is to find a floppy disk with the files on it, so you can put them back onto your hard disk. In most cases, unless something has gone seriously wrong, your hard disk will have the system files on it. Once the computer has found these files, it loads the operating system. Most people use the Microsoft Windows 3.1 or 95 operating system.

A computer system consists of two parts:
· Computer hardware.
· Computer software.

Computer hardware: The physical components of a computer, which include the Central Processing Unit (CPU), monitor, keyboard, etc., comprise computer hardware.

Computer software: In general terms, software is a set of programs which instruct the computer to perform a task as and when requested by the user.

Generations of computers are differentiated primarily by their fundamental hardware technology.

First generation
First generation computers used thousands of vacuum tubes, weighed many tons, occupied a large space, consumed huge amounts of power and emitted excessive heat.
ENIAC (Electronic Numerical Integrator and Computer), a first generation computer that used about 18,000 vacuum tubes and weighed about 30 tons, was developed by John P. Eckert and John W. Mauchly in 1946. The trends encountered during the first generation of computers were:

· The CPU's control was centralized.
· Use of main memory and index registers started.
· Punched cards were used as the input device.
· Magnetic tapes (with sequential access) and magnetic drums (with partly random and partly sequential access) were used as secondary storage devices.
· Machine language was used for programming.

Second generation
The invention of the transistor revolutionized computers. Transistors were cheaper and smaller than vacuum tubes and dissipated less heat, but could be used in much the same way. Transistor-based machines could perform up to 10,000 calculations per second. Second generation computers were more advanced in terms of the ALU and CU than their first generation counterparts; transistors were their backbone.
Third generation
The use of ICs (integrated circuits) in computers defines the third generation. In an IC, components such as transistors, resistors and capacitors are fabricated in a semiconductor material such as silicon. Initially only a few gates could be integrated reliably on a chip and then packaged; this initial level of integration was referred to as small-scale integration (SSI). Later, with advances in microelectronics technology, SSI gave way to medium-scale integration (100s of gates), large-scale integration (1000s of gates) and very large scale integration (1,000,000s of gates) on a single chip. At present we are moving to ultra-large-scale integration, where 100,000,000 components can be fabricated on a single chip.

Features of third generation computers are:
· Low cost.
· Greater operating speed.
· Better portability.
· Reliability.
· Reduced power consumption.
Early third generation computers used SSI chips.

Fourth generation
Fourth generation computers saw the advent of the microprocessor. A microprocessor is an entire CPU on a single chip, and it replaced many of the larger components of a computer. Microprocessors are flexible because of their programmability. The microprocessor allowed the computer to find its way onto people's desktops. The first microprocessor was built by Intel in 1971.

Features of fourth generation computers are
· The fastest of all the generations.
· Require less electricity.
· Negligible heat dissipation.
· Accurate.
· Reliable.

Computers are classified under four main classes. These are:
· Microcomputers.
· Minicomputers.
· Mainframes.
· Super Computers.
With developments in technology, however, the distinctions between these classes are becoming blurred.

A microcomputer's CPU is a microprocessor. The microcomputer originated in the late 1970s. The first microcomputers were built around 8-bit microprocessor chips. An improvement on 8-bit chip technology was seen in the early 1980s, when the 16-bit chips 8086 and 8088 were introduced by Intel Corporation, each an advance over its predecessors. Alongside Intel's chip series exists another popular series from Motorola; the first 16-bit microcomputer chip of the Motorola series was the 68000.

The term minicomputer originated in the 1960s, when it was realized that many computing tasks do not require an expensive contemporary mainframe computer but can be handled by a small, inexpensive one. The minicomputer was used as a multi-user system, which several users could use at the same time. Gradually the architectural requirements of minicomputers grew, and a 32-bit minicomputer, called the super-mini, was introduced. Super-mini computers are faster than the earlier minicomputers.

Mainframe computers are generally 32-bit machines or higher. They are suited to big organizations managing high-volume applications, and are used as central host computers in distributed systems. Libraries of application programs developed for mainframes are much larger than those for micro- and minicomputers. Mainframe computers remain indispensable even with the popularity of microcomputers.

Super computers
At the upper end of the state-of-the-art mainframe machines is the supercomputer. These are among the fastest machines in terms of processing speed and use multiprocessing techniques, where a number of processors are used to solve a problem. Supercomputers now reach speeds well over 25,000 million arithmetic operations per second. They are mainly used for weather forecasting, remote sensing, image processing, bio-medical applications, etc.

Introduction to software

A computer without software is useless. Software is a set of programs designed to perform a specific task. Software is broadly divided into two categories:

· Application Software
· System software

Application software
Application software allows users to perform specific data-processing tasks on the computer. Packages from different companies for performing various tasks come into this category, as do front-end packages used to manage a database.

Popular software packages on PCs are
· MS-Word: Used for document preparation and text processing.
· MS-Access: Used for Data base management.
· Netscape Navigator: Used for Web browsing.

System software
System software acts as interface between Application software and the computer hardware. Operating systems and translators are the main members of system software.

Operating system
An operating system is a set of programs through which the user interacts with the computer and vice versa. The operating system is also responsible for managing the resources of a computer, such as the printer, plotter and scanner. Without an operating system, a computer would be no more than an electric machine: whatever the computer does is the result of the operating system, which makes the computer behave as the user wants. The operating system manages all the resources as well as certain operations of the computer. Operating systems in existence these days include MS-DOS, MS-Windows 95 and 98, Windows NT, Unix, Linux, OS/2, etc.

DOS (Disk Operating System) was a variant of CP/M (Control Program/Monitor), and ran for the first time on the IBM PC in 1981. It is called so because it resides on a floppy or hard disk and provides a command-level interface between the user and the computer hardware. The different versions of MS-DOS evolved over a period of time, with Microsoft introducing new features in each release; starting from MS-DOS 1.1, the latest version was MS-DOS 6.22.

An instruction given to the computer to perform a specific task is called a command. DOS has several commands, each for a particular task, and these are stored in the DOS directory on the disk. The commands are of two types:
· Internal commands.
· External commands.

Internal commands
These are the built-in commands stored in the command interpreter file (COMMAND.COM), so they are always available to the user. Some internal commands are: DATE, TIME, DIR, VER, etc.

External commands
These commands are separate programs that behave like commands when executed; they are actually utility programs stored on disk, and are available only when the corresponding program file is present. Some external commands are: HELP, DOSKEY, BACKUP, RESTORE, FORMAT, etc.


Directory commands
DIR: To list the files on a specified disk or in a specified directory.
MD: To make a new directory.
CD: To change to a specified directory.
RD: To remove any specified but empty directory.
TREE: To display all the directory paths found in the specified drive.
PATH: Sets a sequential search path for executable files.

File management commands
COPY: Copies one or more files from one disk/directory to a specified disk/directory
XCOPY: Copies files and directories, including lower-level directories if they exist.
DEL: Removes specified file from specified disk / drive.
REN: Renames any specified file.
ATTRIB: Sets or shows file attributes (Read, write, hidden).
RESTORE: Restores files that were backed up using Back-up command.
EDIT: Provides a full screen editor to edit a file.
FORMAT: Formats a disk/drive.

General commands
TIME: Sets or displays the system time.
DATE: Sets or displays the system date.
TYPE: Displays the contents of a file.
PROMPT: Customizes the DOS command prompt.
HELP: Displays help for a command.

The most visible change in Windows 98 over earlier versions of Windows is the new user interface. A window is a boxed area that shows file names, icons, etc. Windows 98 is a GUI-based operating system: unlike MS-DOS and other command-line operating systems, Windows 98 presents its commands in the form of icons, which can be executed just by clicking on them.

An icon is a picture. Windows 98 uses small icons to represent objects, documents, applications, folders, devices and computers. An icon has a text label that further describes the object.

Selecting an object means pointing to it without any further action. To select an object, move the mouse cursor over it and press the left mouse button once.

Drag and drop
To drag and drop an object onto another object, point to the object, press the left mouse button and hold it down while the cursor is moved to the destination. Release the mouse button and the object is moved.

Right mouse button
Right clicking on any object displays a menu with common commands.

Icons on the Desktop
The upper left corner of the desktop contains four main icons: My Computer, Network Neighborhood, Recycle Bin and Briefcase.

My computer
Opens a view into the resources of the local computer. The contents depend on the disk drives on your PC and the network support that is installed.

Network Neighborhood
The icon displays the computer and shared printers connected on the windows network.

Recycle bin
This icon receives all deleted objects, which can be retrieved by dragging them out of the Recycle Bin and dropping them onto the desktop or into a folder, or permanently deleted from the disk by choosing the Empty Recycle Bin option on the File menu.

Briefcase
Commonly used personal documents can be stored in the Briefcase. The Briefcase can be moved to a disk or copied across a network.

Folders
Folders on the desktop can contain other folders, documents, applications and shortcuts to devices such as printers. To add a folder to the desktop, move the cursor to an empty spot on the desktop, press the right mouse button and choose the New Folder command. A folder icon labeled "New Folder" appears on the desktop.

References to recently used documents are kept in the Documents list on the Start menu. The list can include word processing documents, spreadsheets, database files, graphics files, etc.

The Shut Down command on the Start menu is used to quit a Windows session in an orderly manner, so as to prevent corruption of programs and data.

A computer, as an electronic device, understands only electronic signals, in what is called binary or machine language. It is very difficult, though not impossible, to program or instruct the computer in this language. Therefore computer codes that are much closer to natural language have been developed to make the interface between the user and the computer easy; these codes are called high level languages. A set of instructions is given to the computer in a high level language to get a task done. But the computer, as an electronic device, is unable to understand high level language codes directly. To overcome this mismatch, another set of programs has been developed to translate high level languages into a machine-understandable format. These programs are called language translators; each high level language has its own translator.
The only difference between compilers, interpreters and assemblers lies in the way they translate high level languages into machine level, or low level, languages. Compilers translate the whole code in one go, while interpreters translate line by line. Assemblers are specially designed to translate assembly language into machine-understandable language.
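The two translation strategies can be illustrated with a toy sketch (Python, using an invented two-instruction mini-language; real compilers and interpreters are vastly more involved). The interpreter translates and runs one line at a time; the "compiler" translates every line first and only then runs the whole program:

```python
# A tiny toy language: each line is "ADD <number>" or "PRINT <number>".
source = ["ADD 2", "ADD 3", "PRINT 0"]

def execute(instruction, total):
    """Run one already-translated instruction."""
    op, arg = instruction
    return total + arg if op == "ADD" else total

def interpret(lines):
    """Interpreter: translate and execute one line at a time."""
    total = 0
    for line in lines:
        op, arg = line.split()           # translate this line...
        total = execute((op, int(arg)), total)  # ...and run it immediately
    return total

def compile_then_run(lines):
    """Compiler: translate ALL lines first, then run the whole program."""
    program = [(op, int(arg)) for op, arg in (line.split() for line in lines)]
    total = 0
    for instruction in program:
        total = execute(instruction, total)
    return total

print(interpret(source))         # 5
print(compile_then_run(source))  # 5 - same result, different strategy
```

Both give the same answer; the difference is when the translation happens, which is why compiled programs pay the translation cost once while interpreted programs pay it on every run.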


MS-Word is a component of MS-Office used as a word processor. It can be used to create letters, memos, reports, newsletters and just about any other kind of document. A word processor is a software package which helps you enter and edit a document much faster than the usual manual ways.
The following functions are possible using MS-Word:
· Typing out a document.
· Saving the document.
· Opening an existing document.
· Finding words and replacing them with other words.
· Searching for specific words and making spelling checks.
· Printing documents, etc.

Creating a new document
There are two ways to create a new document: click on the File menu and select New, which opens a dialog box offering the different types of documents you can create, or click the first icon on the toolbar. When the new window opens and you see a blinking cursor, start typing your text. When you have finished, you may want to apply different styles to your writing; this can be done with the help of the Format menu.

Opening an existing document
To open an existing document, click the Open button on the standard toolbar. When the dialog box appears, select the document in the file name box and then choose OK.

Saving a document
To save a document on disk, click the Save button on the standard toolbar, or choose Save As from the File menu to choose the disk on which you want to save the document.

Printing a document
To print a document, either click the Print button on the toolbar or choose the Print command from the File menu.

Finding and replacing text
To find a particular piece of text and replace it, open the Edit menu on the menu bar and choose the Replace command from the drop-down list, then type in the required word and its replacement and click Find Next or Replace.
Spelling check and grammatical errors
To make spelling checks and to detect and remove grammatical errors from a document, open the Tools menu on the menu bar and choose Spelling and Grammar from the drop-down list.

Inserting pictures
To insert a picture into a document, click on the Insert menu on the menu bar and choose the Picture command. This opens a sub-menu from which you can choose the desired picture.

To quit MS-WORD
First close all open documents, then click on the File menu and choose the Exit command.

Introduction to Networking

A basic understanding of computer networks is a prerequisite for understanding the principles of network security. In this section, we'll cover some of the foundations of computer networking, then move on to an overview of some popular networks. Following that, we'll take a more in-depth look at TCP/IP, the network protocol suite that is used to run the Internet and many intranets. Once we've covered this, we'll go back and discuss some of the threats that managers and administrators of computer networks need to confront, and then some tools that can be used to reduce the exposure to the risks of network computing.

A "network" has been defined as "any set of interlinking lines resembling a net; a network of roads; an interconnected system; a network of alliances." This definition suits our purpose well: a computer network is simply a system of interconnected computers. How they're connected is irrelevant, and as we'll soon see, there are a number of ways to do this.

The International Standards Organization (ISO) Open Systems Interconnect (OSI) Reference Model defines seven layers of communication types and the interfaces among them. (See Figure 1.) Each layer depends on the services provided by the layer below it, all the way down to the physical network hardware, such as the computer's network interface card and the wires that connect the cards together. An easy way to look at this is to compare the model with something we use daily: the telephone. In order for you and I to talk when we're out of earshot, we need a device like a telephone. (In the ISO/OSI model, this is at the application layer.) The telephones, of course, are useless unless they have the ability to translate the sound into electronic pulses that can be transferred over wire and back again. (These functions are provided in layers below the application layer.) Finally, we get down to the physical connection: both phones must be plugged into an outlet that is connected to a switch that is part of the telephone system's network of switches. If I place a call to you, I pick up the receiver and dial your number. This number specifies which central office to send my request to, and then which phone connected to that central office to ring. Once you answer the phone, we begin talking, and our session has begun. Conceptually, computer networks function exactly the same way. It isn't important for you to memorize the ISO/OSI Reference Model's layers, but it is useful to know that they exist, and that each layer cannot work without the services provided by the layer below it.
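The layering idea can be sketched in a few lines of Python. The bracketed "headers" below are invented placeholders, not real protocol formats; the point is only that each layer wraps the data from the layer above on the way down, and unwraps it in reverse order on the way up:

```python
# Layer names follow the ISO/OSI model, top of the stack first.
layers = ["application", "presentation", "session",
          "transport", "network", "data-link", "physical"]

def send(message):
    """Wrap the message with one placeholder header per layer, top down."""
    packet = message
    for layer in layers:
        packet = f"[{layer}]{packet}"   # physical ends up outermost
    return packet

def receive(packet):
    """Strip the headers back off, bottom up, recovering the message."""
    for layer in reversed(layers):
        packet = packet.removeprefix(f"[{layer}]")
    return packet

wire_data = send("hello")
print(wire_data)           # [physical][data-link][network][transport][session][presentation][application]hello
print(receive(wire_data))  # hello
```

Each layer only ever deals with its own header, which is exactly why one layer can be replaced (say, swapping the physical medium) without the layers above noticing.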

Over the last 25 years or so, a number of networks and network protocols have been defined and used. We're going to look at two of these networks, both of which are "public" networks. Anyone can connect to either of these networks, or they can use the same types of networks to connect their own hosts (computers) together without connecting to the public networks. Each type takes a very different approach to providing network services. UUCP (Unix-to-Unix CoPy) was originally developed to connect Unix (surprise!) hosts together. UUCP has since been ported to many different architectures, including PCs, Macs, Amigas, Apple IIs, VMS hosts, everything else you can name, and even some things you can't. Additionally, a number of systems have been developed around the same principles as UUCP.

Batch-Oriented Processing.
UUCP and similar systems are batch-oriented systems: everything that they have to do is added to a queue, and then at some specified time, everything in the queue is processed.

Implementation Environment.
UUCP networks are commonly built using dial-up (modem) connections. This doesn't have to be the case, though: UUCP can be used over any sort of connection between two computers, including an Internet connection. Building a UUCP network is a simple matter of configuring two hosts to recognize each other and know how to get in touch with each other. Adding on to the network is simple: if hosts called A and B have a UUCP network between them, and C would like to join the network, then it must be configured to talk to A and/or B. Naturally, anything that C talks to must be made aware of C's existence before any connections will work. Now, to connect D to the network, a connection must be established with at least one of the hosts on the network, and so on. Figure 2 shows a sample UUCP network. In a UUCP network, users are identified in the format host!userid. The "!" character (pronounced "bang" in networking circles) is used to separate hosts and users. A bangpath is a string of host(s) and a userid like A!cmcurtin or C!B!A!cmcurtin. If I am a user on host A and you are a user on host E, I might be known as A!cmcurtin and you as E!you. Because there is no direct link between your host (E) and mine (A), in order for us to communicate we need to do so through a host (or hosts!) that has connectivity to both E and A. In our sample network, C has the connectivity we need. So, to send me a file or a piece of email, you would address it to C!A!cmcurtin. Or, if you feel like taking the long way around, you can address me as C!B!A!cmcurtin. The "public" UUCP network is simply a huge worldwide network of hosts connected to each other.
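Splitting a bangpath into its relay hosts and final user is straightforward; a small illustrative Python sketch:

```python
def parse_bangpath(address):
    """Split a UUCP bang path into the list of relay hosts and the user id.

    The last "!"-separated component is the user; everything before it
    is the chain of hosts the message must pass through, in order.
    """
    *hosts, userid = address.split("!")
    return hosts, userid

hosts, userid = parse_bangpath("C!B!A!cmcurtin")
print(hosts)   # ['C', 'B', 'A'] - relay through C, then B, to reach A
print(userid)  # cmcurtin
```

Each host in the chain strips its own name off the front and forwards the rest, which is why the sender must know (or guess) a complete route in advance.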

The public UUCP network has been shrinking over the years with the rise of inexpensive Internet connections. Additionally, since UUCP connections are typically made hourly, daily, or weekly, there is a fair bit of delay in getting data from one user on a UUCP network to a user on the other end. UUCP isn't very flexible, as it's used simply for copying files (which can be netnews, email, documents, etc.). Interactive protocols (which make applications such as the World Wide Web possible) have become much more the norm, and are preferred in most cases. However, there are still many people whose needs for email and netnews are served quite well by UUCP, and its integration into the Internet has greatly reduced the amount of cumbersome addressing that had to be done in times past.

UUCP, like any other application, has security tradeoffs. Some strong points for its security are that it is fairly limited in what it can do, and is therefore more difficult to trick into doing something it shouldn't; it's been around a long time, and most of its bugs have been discovered, analyzed, and fixed; and because UUCP networks are made up of occasional connections to other hosts, it isn't possible for someone on host E to directly make contact with host B and take advantage of that connection to do something naughty. On the other hand, UUCP typically works by having a system-wide UUCP user account and password. Any system that has a UUCP connection with another must know the appropriate password for the uucp or nuucp account. Identifying a host beyond that point has traditionally been little more than a matter of trusting that the host is who it claims to be, and that a connection is allowed at that time. More recently, an additional layer of authentication has been added, whereby both hosts must have the same sequence number: a number that is incremented each time a connection is made. Hence, if I run host B, I know the uucp password on host A. If, though, I want to impersonate host C, I'll need to connect, identify myself as C, hope that I've done so at a time that A will allow it, and try to guess the correct sequence number for the session. While this might not be a trivial attack, it isn't considered very secure.

Networking is of two types:
a) peer-to-peer networking, and
b) server-based networking.
Peer-to-peer networking is suited to fewer than 10 computers; in this type of networking each node acts as both a server and a workstation, and each node manages its own resources.
Server-based networking is suited to more than 10 computers; a server with high speed and large storage space is needed in this kind of networking.

Networking can be classified as
a) Local Area Network (LAN)
b) Metropolitan Area Network (MAN)
c) Wide Area Network (WAN)

Local Area Networking
The smallest network size is a Local Area Network. LANs are normally contained within a small group of buildings.

Characteristics of a LAN
· High speed
· Small error counts
· Economical.
Since LANs are contained in small areas, high-speed cables can be used for data transmission. Also, since the installed media is usually of high quality, few to no errors are generated on the network.

The type of hardware most widely used throughout LANs is commonly known as Ethernet. It consists of a single cable with hosts attached to it through connectors, taps, or transceivers. Simple Ethernets are quite inexpensive to install, which, together with a net transfer rate of 10 megabits per second, accounts for much of their popularity. Ethernets come in three flavors, called thick, thin, and twisted pair. Thin and thick Ethernet each use a coaxial cable, differing in width and in the way you attach a host to the cable. Thin Ethernet uses a T-shaped "BNC" connector, which you insert into the cable and twist onto a plug on the back of your computer. Thick Ethernet requires that you drill a small hole into the cable and attach a transceiver using a "vampire tap". One or more hosts can then be connected to the transceiver. Thin and thick Ethernet cable may run for a maximum of 200 and 500 meters, respectively, and are therefore also called 10base-2 and 10base-5. Twisted pair uses a cable made of two copper wires, which is also found in ordinary telephone installations, but usually requires additional hardware. It is also known as 10base-T.
Although adding a host to a thick Ethernet is a little hairy, it does not bring down the network. To add a host to a thinnet installation, you have to disrupt network service for at least a few minutes because you have to cut the cable to insert the connector. Most people prefer thin Ethernet because it is very cheap: PC cards come for as little as US$50, and cable is in the range of a few cents per meter. However, for large-scale installations, thick Ethernet is more appropriate. For example, the Ethernet at GMU's Mathematics Department uses thick Ethernet, so traffic is not disrupted each time a host is added to the network. One of the drawbacks of Ethernet technology is its limited cable length, which precludes any use of it other than for LANs. However, several Ethernet segments may be linked to each other using repeaters, bridges, or routers. Repeaters simply copy the signals between two or more segments, so that all segments together act as if they were one Ethernet. Due to timing requirements, there may not be more than four repeaters between any two hosts on the network. Bridges and routers are more sophisticated: they analyze incoming data and forward it only when the recipient host is not on the local Ethernet. Ethernet works like a bus system, where a host may send packets (or frames) of up to 1500 bytes to another host on the same Ethernet. A host is addressed by a six-byte address hard-coded into the firmware of its Ethernet board. These addresses are usually written as a sequence of two-digit hex numbers separated by colons, as in aa:bb:cc:dd:ee:ff. A frame sent by one station is seen by all attached stations, but only the destination host actually picks it up and processes it. If two stations try to send at the same time, a collision occurs, which is resolved by the two stations aborting the send and re-attempting it a few moments later.
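The "re-attempting it a few moments later" after a collision is, in classic Ethernet, done with truncated binary exponential backoff: after the n-th successive collision a station waits a random number of slot times drawn from an exponentially growing range. A small sketch of that rule (the function name is ours, not from any standard API):

```python
import random

def backoff_slots(attempt, max_exponent=10):
    """After the n-th successive collision, classic Ethernet waits a random
    number of slot times in 0 .. 2**min(n, 10) - 1 (truncated binary
    exponential backoff), spreading the competing stations apart."""
    k = min(attempt, max_exponent)
    return random.randint(0, 2 ** k - 1)

# The possible wait doubles with each collision, so repeated collisions
# between the same two stations quickly become unlikely:
for attempt in (1, 2, 3):
    slots = backoff_slots(attempt)
    assert 0 <= slots < 2 ** attempt
```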

Metropolitan Area Network
A MAN (Metropolitan Area Network) is geographically a bigger LAN. When two or more LANs located in different parts of a city are connected, a MAN is formed. MANs are slower than LANs but usually have few errors on the network. Since special equipment is required to connect the different LANs, a MAN is expensive.
Characteristics of a MAN
· Slower than a LAN
· Slightly more errors generated
· Expensive

Wide Area Network
The largest network size is a WAN (Wide Area Network). WANs can connect networks across cities, states, countries, or even the world, normally using connections that travel all over the country or world. The Internet falls into the category of a WAN. For this reason WANs are usually slower than MANs and LANs, and more prone to errors. They also require a lot of specialized equipment, so their price is very high.

Characteristics of a WAN
· Low speed
· Large error counts
· Expensive

The switching techniques use routing technology for data transfer. Routing is responsible for finding a path between two computing devices that wish to communicate, and for forwarding the data packets along this path. Devices such as bridges, routers, and gateways provide this routing function.
Bridges are used to connect two LANs that use identical LAN protocols over a wide area. The bridge acts as an address filter: it picks up packets from one LAN that are intended for a destination on another LAN and passes these packets onto that network. If the distance between the two LANs is large, the user requires two identical bridges at either end of the communication link.

Routers can be used to connect networks that may not be similar. Routers provide connectivity between two LANs or WANs over large geographical distances. All routers participate in a routing protocol to learn the network topology, and based on this information routers compute the best route from sender to receiver.
For large Wide Area Networks spanning thousands of kilometers, the normal practice is to put network routers at suitable locations to minimize link costs for leased lines and provide adequate reliability against link failures. Networks and other systems are then connected to the nearest router.

Gateways are used to connect two dissimilar LANs. The terms gateway and router are used interchangeably, though there is a subtle difference between the two. Because a gateway connects two dissimilar networks, it is required to convert data packets from one protocol format to another before forwarding them.
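The core of what a router does when it "computes the best route" can be illustrated with a minimal routing-table lookup using longest-prefix match. The network numbers and next-hop names below are made up for illustration:

```python
import ipaddress

# A toy routing table: destination network -> next hop (names are invented).
routes = {
    ipaddress.ip_network("10.0.0.0/8"):  "router-A",
    ipaddress.ip_network("10.1.0.0/16"): "router-B",
    ipaddress.ip_network("0.0.0.0/0"):   "default-gateway",
}

def next_hop(addr):
    """Return the next hop for addr using longest-prefix match:
    among all matching networks, pick the most specific one."""
    addr = ipaddress.ip_address(addr)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

assert next_hop("10.1.2.3") == "router-B"        # the /16 beats the /8
assert next_hop("10.2.3.4") == "router-A"
assert next_hop("192.0.2.1") == "default-gateway"
```

Real routers maintain these tables dynamically via routing protocols, but the forwarding decision is essentially this lookup.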

Types of Wide Area Networks
The essential purpose of any WAN (Wide Area Network), regardless of the size or technology used, is to link separate locations in order to move data around. A WAN allows these locations to access shared computer resources, and provides the essential infrastructure for developing widespread distributed computing systems.

WANs fall into two categories:
· Public networks
· Private networks

Public networks are those networks which are installed and run by the telecommunication authorities, and are made available to any organization or individual who subscribes. Examples include Public Switched Telephone Networks (PSTN), Public Switched Data Networks (PSDN), and Integrated Services Digital Networks (ISDN). We will discuss the main features of these services.

Public Switched Telephone Networks (PSTN)
The features of the PSTN are its low speed, the analog nature of transmission, a restricted bandwidth, and its widespread availability. As the PSTN is designed for telephony, modems are required when it is used for data communication.
The PSTN is most useful in wide area data communications as an adjunct to other mechanisms. PSTN connections are usually easy to obtain at short notice, are widely available, and cover almost every location where people live and work.

Public Switched Data Networks (PSDN)
The term PSDN covers a number of technologies, although currently it is limited to Public Packet Switched Networks available to the public. The main features of all PSDNs are their high level of reliability and the high quality of the connections provided. They can support both low and high speeds at appropriate costs.

Integrated Services Digital Networks (ISDN)
ISDN is a networking concept providing for the integration of voice, video, and data services, using digital transmission media and combining both circuit- and packet-switching techniques.

The basic technique used in all forms of private WAN is to use private (or, more usually, leased) circuits to link the locations to be served by the network. Between these fixed points the owner of the network has complete freedom to use the circuits in any way they want. They can use the circuits to carry large quantities of data or for high-speed transmissions.
Private wide area networks can be built using whatever standard technology is available. The way private networks have generally been set up has been to specify to the telecommunication company the locations and quality of the circuits required, and then use modems, multiplexers, and other communications equipment to make the best possible use of those circuits.

Virtual Private Networks
Given the ubiquity of the Internet, and the considerable expense of private leased lines, many organizations have been building VPNs (Virtual Private Networks). Traditionally, for an organization to provide connectivity between a main office and a satellite one, an expensive data line had to be leased in order to provide direct connectivity between the two offices. Now, a solution that is often more economical is to provide both offices connectivity to the Internet. Then, using the Internet as the medium, the two offices can communicate. The danger in doing this, of course, is that there is no privacy on this channel, and it's difficult to provide the other office access to "internal" resources without providing those resources to everyone on the Internet. VPNs provide the ability for two offices to communicate with each other in such a way that it looks like they're directly connected over a private leased line. The session between them, although going over the Internet, is private (because the link is encrypted), and the link is convenient, because each can see the other's internal resources without showing them off to the entire world. A number of firewall vendors include the ability to build VPNs in their offerings, either directly with their base product or as an add-on. If you need to connect several offices together, this might very well be the best way to do it.

Communication Protocols
There are several manufacturers of computer hardware and software across the globe. For successful data communication these products should be compatible with each other, or they should conform to a certain set of rules so that anyone can use them. This set of rules is known as a communication protocol or communication standard. In other words, protocols are technical customs or guidelines that govern the exchange of signals between equipment.
The direction in which information can flow over a transmission path is determined by the properties of both the transmitting and receiving devices. There are three basic options, viz., simplex mode, half-duplex mode, and full-duplex mode.

Simplex mode
In simplex mode, the communication channel is used in one direction only. The receiver receives the signals from the transmitting device. A typical use is to gather data from a monitoring device at a regular interval. The simplex mode is rarely used for data communication.

Half-duplex mode
In half-duplex mode, the communication channel is used in both directions, but only in one direction at a time. This requires the receiving and transmitting devices to switch between send and receive modes after each transmission.

Full-duplex mode
In full-duplex mode, the communication channel is used in both directions at the same time. A typical example of this mode of transmission is the telephone, in which both parties can talk to each other at the same time. However, it is costlier because it requires two channels between the sending and receiving ends.

The most basic hardware required for communication is the media through which data is transferred. There are several types of media, and the choice of the right media depends on many factors such as cost of transmission media, efficiency of data transmission and the transfer rate.

Some of the transmission media are
· Twisted pair
· Co-axial
· Fibre optic

Twisted pair cables
A twisted pair consists of a pair of insulated conductors that are twisted together. Twisted pair cable is used for communications up to a distance of 1 km and can achieve transfer rates of 1-2 Mbps.

Co-axial cable
A co-axial cable consists of a solid conductor running co-axially inside a solid or braided outer annular conductor. The space between the two conductors is filled with a dielectric insulating material. The larger the cable diameter, the lower the transmission loss, and the higher the transfer speeds that can be achieved. A co-axial cable can be used over a distance of about 1 km and can achieve transfer rates of up to 100 Mbps. A co-axial cable of 50 ohms is preferred for use with computers.

Fibre optic cables
A fibre optic cable carries signals in the form of fluctuating light in a glass or plastic core, surrounded by a cladding made of a similar material but with a lower refractive index in order to contain the signal inside the core of the cable. As light waves have a much wider bandwidth than electrical signals and are immune to electromagnetic interference, fibre optic cables achieve high data transfer rates of about 1000 Mbps and can be used for long- and medium-distance transmission links. The transmission losses are negligible.

Radio, Microwave and satellite
Radio, microwave, and satellite channels use electromagnetic propagation in open space. The advantage of these channels lies in their capability to cover large geographical areas, and they are less expensive than wired installations. Satellite links use microwave frequencies on the order of 4-12 GHz with the satellite acting as a repeater. They can achieve data transfer rates of about 1000 Mbps. However, because of the earth's curvature, microwave repeaters must be located about 50 km apart, making such links expensive.

The way computers are connected to each other and to various resources is known as the network topology. A good number of topologies exist these days; some of them are:
· Bus topology.
· Star topology.
· Ring topology.
· Mesh topology.

Bus topology
This kind of topology is said to exist in a network where all the computers are connected serially along a single cable. This kind of topology is very popular these days. Bus topology is feasible only for a small network of 10 to 50 computers.

Star topology
The topology where a hub is required and centrally located is known as star topology. All the computers are directly connected to the hub; when a computer goes down in this kind of network, the rest of the network goes on functioning smoothly.

Ring topology
As the name suggests, ring topology is one in which the computers are connected in a way that leaves no starting or ending point in the network. The entire network goes down if any one computer goes down.

Mesh topology
This is the topology where each computer is interconnected with every other computer in the network. The speed of communication in a mesh topology is very high compared to all the other topologies. This kind of topology is very complicated.
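The complexity of a full mesh can be made concrete by counting links: connecting every node to every other takes n(n-1)/2 point-to-point links, compared with roughly one link per node for the other topologies. A small sketch (the link counts for bus and star are simplified assumptions, since real cabling varies):

```python
def links_needed(n, topology):
    """Approximate number of point-to-point links for n nodes."""
    if topology == "bus":
        return n - 1            # nodes strung serially along one cable
    if topology == "ring":
        return n                # each node linked to the next, closing the loop
    if topology == "star":
        return n                # one link per node to the central hub
    if topology == "mesh":
        return n * (n - 1) // 2 # every node connected to every other
    raise ValueError(f"unknown topology: {topology}")

# Mesh wiring grows quadratically, which is why full meshes are rare:
assert links_needed(10, "star") == 10
assert links_needed(10, "mesh") == 45
assert links_needed(50, "mesh") == 1225
```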

The Internet is the world's largest network of networks. When you want to access the resources offered by the Internet, you don't really connect to the Internet; you connect to a network that is eventually connected to the Internet backbone, a network of extremely fast (and incredibly overloaded!) network components. This is an important point: the Internet is a network of networks -- not a network of hosts. A simple network can be constructed using the same protocols the Internet uses without actually connecting it to anything else. Such a basic network is shown in Figure 2.

A Simple Local Area Network

I might be allowed to put one of my hosts on one of my employer's networks. We have a number of networks, which are all connected together on a backbone, that is, a network of our networks. Our backbone is then connected to other networks, one of which belongs to an Internet Service Provider (ISP) whose backbone is connected to other networks, one of which is the Internet backbone. If you have a connection "to the Internet" through a local ISP, you are actually connecting your computer to one of their networks, which is connected to another, and so on. To use a service from my host, such as a web server, you would tell your web browser to connect to my host. Underlying services and protocols would send packets (small datagrams) with your query to your ISP's network, then to a network they're connected to, and so on, until the query found a path to my employer's backbone and to the exact network my host is on. My host would then respond appropriately, and the same would happen in reverse: packets would traverse all of the connections until they found their way back to your computer, and you would be looking at my web page. In Figure 4, the network shown in Figure 3 is designated "LAN 1" and shown in the bottom-right of the picture. This shows how the hosts on that network are provided connectivity to other hosts on the same LAN, within the same company, outside of the company but in the same ISP cloud, and then from another ISP somewhere on the Internet. The Internet is made up of a wide variety of hosts, from supercomputers to personal computers, including every imaginable type of hardware and software. How do all of these computers understand each other and work together?

TCP/IP (Transmission Control Protocol/Internet Protocol) is the "language" of the Internet. Anything that can learn to "speak TCP/IP" can play on the Internet. This functionality occurs at the Network (IP) and Transport (TCP) layers of the ISO/OSI Reference Model. Consequently, a host that has TCP/IP functionality (such as Unix, OS/2, MacOS, or Windows NT) can easily support applications (such as Netscape's Navigator) that use the network.

Open Design
One of the most important features of TCP/IP isn't a technological one: the protocol is an "open" protocol, and anyone who wishes to implement it may do so freely. Engineers and scientists from all over the world participate in the IETF (Internet Engineering Task Force) working groups that design the protocols that make the Internet work. Their time is typically donated by their companies, and the result is work that benefits everyone.

As noted, IP is a "network layer" protocol. This is the layer that allows the hosts to actually "talk" to each other. It handles such things as carrying datagrams, mapping the Internet address (such as 10.2.3.4) to a physical network address (such as 08:00:69:0a:ca:8f), and routing, which takes care of making sure that all of the devices that have Internet connectivity can find their way to each other.
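The mapping from an Internet address to a physical address mentioned above is, on Ethernet, done by ARP. A toy model of the resulting lookup table (the addresses reuse the examples from the text; the function is illustrative, not a real ARP implementation):

```python
# A toy ARP-style table mapping IP addresses to hardware (MAC) addresses,
# the kind of lookup performed before a datagram is handed to the
# Ethernet hardware.
arp_table = {
    "10.2.3.4": "08:00:69:0a:ca:8f",
}

def resolve(ip):
    """Return the MAC for a known IP, or None. A real stack would
    broadcast an ARP request on the local network at this point."""
    return arp_table.get(ip)

assert resolve("10.2.3.4") == "08:00:69:0a:ca:8f"
assert resolve("10.9.9.9") is None
```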

Understanding IP
IP has a number of very important features which make it an extremely robust and flexible protocol. For our purposes, though, we're going to focus on the security of IP, or more specifically, the lack thereof.

Attacks Against IP
A number of attacks against IP are possible. Typically, these exploit the fact that IP does not provide a robust mechanism for authentication, that is, proving that a packet came from where it claims it did. A packet simply claims to originate from a given address, and there isn't a way to be sure that the host that sent the packet is telling the truth. This isn't necessarily a weakness, per se, but it is an important point, because it means that the facility of host authentication has to be provided at a higher layer of the ISO/OSI Reference Model. Today, applications that require strong host authentication (such as cryptographic applications) do this at the application layer.

IP Spoofing.
This is where one host claims to have the IP address of another. Since many systems (such as router access control lists) define which packets may and may not pass based on the sender's IP address, this is a useful technique to an attacker: he can send packets to a host, perhaps causing it to take some sort of action. Additionally, some applications allow login based on the IP address of the person making the request (such as the Berkeley r-commands) [2]. These are both good examples of how trusting untrustable layers can provide security that is -- at best -- weak.
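One common mitigation is an ingress filter: drop inbound packets whose claimed source address could not legitimately arrive from outside your network. A minimal sketch, assuming the RFC 1918 private ranges plus a hypothetical local block (203.0.113.0/24 is a documentation range standing in for "our LAN"):

```python
import ipaddress

# Source networks that should never appear on packets arriving from outside.
bogus_sources = [
    ipaddress.ip_network("10.0.0.0/8"),       # RFC 1918 private
    ipaddress.ip_network("172.16.0.0/12"),    # RFC 1918 private
    ipaddress.ip_network("192.168.0.0/16"),   # RFC 1918 private
    ipaddress.ip_network("127.0.0.0/8"),      # loopback
    ipaddress.ip_network("203.0.113.0/24"),   # pretend this is *our* LAN
]

def accept_from_outside(src):
    """True if a packet claiming source address src is plausible
    when it arrives on the external interface."""
    src = ipaddress.ip_address(src)
    return not any(src in net for net in bogus_sources)

assert not accept_from_outside("192.168.1.5")   # private address: forged
assert not accept_from_outside("203.0.113.9")   # claims to be internal
assert accept_from_outside("198.51.100.7")      # plausible external source
```

This does not authenticate anything; it only rejects packets that are provably lying about where they came from.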

IP Session Hijacking.
This is a relatively sophisticated attack, first described by Steve Bellovin [3]. It is very dangerous, however, because there are now toolkits available in the underground community that allow otherwise unskilled bad-guy-wannabes to perpetrate this attack. IP session hijacking is an attack whereby a user's session is taken over and placed under the control of the attacker. If the user was in the middle of email, the attacker is looking at the email, and can then execute any commands he wishes as the attacked user. The attacked user simply sees his session dropped, and may simply log in again, perhaps not even noticing that the attacker is still logged in and doing things.
For the description of the attack, let's return to our large network of networks. In this attack, a user on host A is carrying on a session with host G. Perhaps this is a telnet session, where the user is reading his email, or using a Unix shell account from home. Somewhere in the network between A and G sits host H, which is run by a naughty person. The naughty person on host H watches the traffic between A and G, and runs a tool which starts to impersonate A to G, and at the same time tells A to shut up, perhaps trying to convince it that G is no longer on the net (which might happen in the event of a crash or major network outage). After a few seconds of this, if the attack is successful, the naughty person has "hijacked" the session of our user. Anything that the user can do legitimately can now be done by the attacker, illegitimately. As far as G knows, nothing has happened. This can be solved by replacing standard telnet-type applications with encrypted versions of the same thing. In this case, the attacker can still take over the session, but he'll see only "gibberish" because the session is encrypted. The attacker will not have the needed cryptographic key(s) to decrypt the data stream from G, and will, therefore, be unable to do anything with the session.

TCP is a transport-layer protocol. It needs to sit on top of a network-layer protocol, and was designed to ride atop IP. (Just as IP was designed to carry, among other things, TCP packets.) Because TCP and IP were designed together, and wherever you have one you typically have the other, the entire suite of Internet protocols is known collectively as "TCP/IP." TCP itself has a number of important features that we'll cover briefly.

Guaranteed Packet Delivery
Probably the most important is guaranteed packet delivery. Host A sending packets to host B expects to get acknowledgments back for each packet. If B does not send an acknowledgment within a specified amount of time, A will resend the packet. Applications on host B will expect a data stream from a TCP session to be complete and in order. As noted, if a packet is missing, it will be resent by A, and if packets arrive out of order, B will arrange them in the proper order before passing the data to the requesting application. This is well suited to a number of applications, such as a telnet session: a user wants to be sure every keystroke is received by the remote host, and that every packet sent back is received, even if this means occasional slight delays in responsiveness while a lost packet is resent or out-of-order packets are rearranged. It is not well suited to other applications, such as streaming audio or video, however. In these, it doesn't really matter if a packet is lost (a lost packet in a stream of 100 won't be distinguishable), but it does matter if packets arrive late (e.g., because of a host resending a packet presumed lost), since the data stream will be paused while the lost packet is being resent. Once the lost packet is received, it will be put in the proper slot in the data stream, and then passed up to the application.
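The resend-until-acknowledged behavior can be reduced to a stop-and-wait sketch. This is a simplified model of the idea, not TCP itself (`send` and `wait_for_ack` are stand-ins for real network operations, and real TCP uses windows and timeouts rather than a simple retry count):

```python
def deliver(packet, send, wait_for_ack, max_tries=5):
    """Resend packet until an acknowledgment arrives or we give up.
    Returns the number of transmissions it took."""
    for attempt in range(max_tries):
        send(packet)
        if wait_for_ack():      # in real TCP this wait has a timeout
            return attempt + 1
    raise TimeoutError("peer never acknowledged the packet")

# Simulate a link that drops the first two transmissions:
sent = []
acks = iter([False, False, True])
tries = deliver("data", sent.append, lambda: next(acks))
assert tries == 3 and sent == ["data", "data", "data"]
```

The receiving side's job (reordering and de-duplicating by sequence number) is the other half of the guarantee, omitted here for brevity.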

UDP (User Datagram Protocol) is a simple transport-layer protocol. It does not provide the same features as TCP, and is thus considered "unreliable." Again, although this is unsuitable for some applications, it has much more applicability in other applications than the more reliable and robust TCP.

Lower Overhead than TCP
One of the things that makes UDP nice is its simplicity. Because it doesn't need to keep track of the sequence of packets, whether they ever made it to their destination, etc., it has lower overhead than TCP. This is another reason why it's more suited to streaming-data applications: there's less screwing around that needs to be done with making sure all the packets are there, in the right order, and that sort of thing.
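UDP's fire-and-forget model is easy to see with the standard sockets API: no connection setup, no handshake, no acknowledgments, just a datagram thrown at an address. A minimal demonstration over the loopback interface (it assumes loopback networking is available):

```python
import socket

# Receiver: bind a UDP socket and let the OS pick a free port.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
port = receiver.getsockname()[1]

# Sender: one datagram, no connection, no acknowledgment expected.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", port))

data, addr = receiver.recvfrom(1024)
assert data == b"hello"

sender.close()
receiver.close()
```

Had the datagram been lost, the sender would never know; that bookkeeping is exactly the overhead UDP omits and TCP pays for.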

Now we've covered enough background information on networking that we can actually get into the security aspects of all of this. First we'll get into the types of threats there are against networked computers, and then some things that can be done to protect yourself against various threats.

DoS (Denial-of-Service) attacks are probably the nastiest, and most difficult to address. They are the nastiest because they're very easy to launch, difficult (sometimes impossible) to track, and it isn't easy to refuse the requests of the attacker without also refusing legitimate requests for service. The premise of a DoS attack is simple: send more requests to the machine than it can handle. There are toolkits available in the underground community that make this a simple matter of running a program and telling it which host to blast with requests. The attacker's program simply makes a connection on some service port, perhaps forging the packet's header information that says where the packet came from, and then drops the connection. If the host is able to answer 20 requests per second, and the attacker is sending 50 per second, obviously the host will be unable to service all of the attacker's requests, much less any legitimate requests (hits on the web site running there, for example). Such attacks were fairly common in late 1996 and early 1997, but are now becoming less popular. Some things that can be done to reduce the risk of being stung by a denial of service attack include:
· Not running your visible-to-the-world servers at a level too close to capacity
· Using packet filtering to prevent obviously forged packets from entering your network address space. Obviously forged packets would include those that claim to come from your own hosts, addresses reserved for private networks as defined in RFC 1918 [4], and the loopback network
· Keeping up-to-date on security-related patches for your hosts' operating systems
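The 20-versus-50 requests-per-second arithmetic above is usually enforced with a rate limiter. A common choice is a token bucket; here is a minimal sketch (time is passed in explicitly so the behavior is deterministic; a real server would use a clock):

```python
class TokenBucket:
    """Allow a sustained rate of `rate` requests/second with bursts
    of up to `burst` requests; everything beyond that is refused."""

    def __init__(self, rate, burst):
        self.rate = rate          # tokens added per second
        self.capacity = burst     # maximum tokens held
        self.tokens = burst
        self.last = 0.0

    def allow(self, now):
        # Refill according to elapsed time, then spend one token if we can.
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=20, burst=5)   # serve ~20 requests per second
# An attacker firing 50 requests in the same instant: only the burst survives.
results = [bucket.allow(0.0) for _ in range(50)]
assert results.count(True) == 5
```

Note this only shields the service behind it; it cannot stop the flood from consuming your inbound bandwidth, which is why the other measures in the list still matter.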

Unauthorized Access
"Unauthorized access" is a very high-level term that can refer to a number of different sorts of attacks. The goal of these attacks is to access some resource that your machine should not provide the attacker. For example, a host might be a web server, and should provide anyone with requested web pages. However, that host should not provide command shell access without being sure that the person making such a request is someone who should get it, such as a local administrator.

Executing Commands Illicitly
It's obviously undesirable for an unknown and untrusted person to be able to execute commands on your server machines. There are two main classifications of the severity of this problem: normal user access, and administrator access. A normal user can do a number of things on a system (such as read files, mail them to other people, etc.) that an attacker should not be able to do. This might, then, be all the access that an attacker needs. On the other hand, an attacker might wish to make configuration changes to a host (perhaps changing its IP address, putting a start-up script in place to cause the machine to shut down every time it's started, or something similar). In this case, the attacker will need to gain administrator privileges on the host.

Confidentiality Breaches

We need to examine the threat model: what is it that you're trying to protect yourself against? There is certain information that could be quite damaging if it fell into the hands of a competitor, an enemy, or the public. In these cases, it's possible that compromise of a normal user's account on the machine can be enough to cause damage (perhaps in the form of PR, or obtaining information that can be used against the company, etc.). While many of the perpetrators of these sorts of break-ins are merely thrill-seekers interested in nothing more than seeing a shell prompt for your computer on their screen, there are those who are more malicious, as we'll consider next. (Additionally, keep in mind that it's possible that someone who is normally interested in nothing more than the thrill could be persuaded to do more: perhaps an unscrupulous competitor is willing to hire such a person to hurt you.)

Destructive Behavior
Among the destructive sorts of break-ins and attacks, there are two major categories.

Data Diddling.
The data diddler is likely the worst sort, since the fact of a break-in might not be immediately obvious. Perhaps he's toying with the numbers in your spreadsheets, or changing the dates in your projections and plans. Maybe he's changing the account numbers for the auto-deposit of certain paychecks. In any case, rare is the case when you'll come in to work one day and simply know that something is wrong. An accounting procedure might turn up a discrepancy in the books three or four months after the fact. Trying to track the problem down will certainly be difficult, and once that problem is discovered, how can any of your numbers from that time period be trusted? How far back do you have to go before you think that your data is safe?

Data Destruction.
Some of those who perpetrate attacks are simply twisted jerks who like to delete things. In these cases, the impact on your computing capability -- and consequently your business -- can be nothing less than if a fire or other disaster caused your computing equipment to be completely destroyed.

Where Do They Come From?
How, though, does an attacker gain access to your equipment? Through any connection that you have to the outside world. This includes Internet connections, dial-up modems, and even physical access. (How do you know that one of the temps that you've brought in to help with the data entry isn't really a system cracker looking for passwords, data phone numbers, vulnerabilities, and anything else that can get him access to your equipment?) In order to adequately address security, all possible avenues of entry must be identified and evaluated. The security of each entry point must be consistent with your stated policy on acceptable risk levels.

Network security is a complicated subject, historically only tackled by well-trained and experienced experts. However, as more and more people become "wired", an increasing number of people need to understand the basics of security in a networked world. This document was written with the basic computer user and information systems manager in mind, explaining the concepts needed to read through the hype in the marketplace and understand risks and how to deal with them. Some history of networking is included, as well as an introduction to TCP/IP and internetworking. We go on to consider risk management, network threats, firewalls, and more special-purpose secure networking devices. This is not intended to be a "frequently asked questions" reference, nor is it a "hands-on" document describing how to accomplish specific functionality. It is hoped that the reader will have a wider perspective on security in general, and better understand how to reduce and manage risk personally, at home, and in the workplace.
Risk Management: The Game of Security
It's very important to understand that in security, one simply cannot ask "what's the best firewall?" There are two extremes: absolute security and absolute access. The closest we can get to an absolutely secure machine is one unplugged from the network and the power supply, locked in a safe, and thrown to the bottom of the ocean. Unfortunately, it isn't terribly useful in this state. A machine with absolute access is extremely convenient to use: it's simply there, and will do whatever you tell it, without questions, authorization, passwords, or any other mechanism. Unfortunately, this isn't terribly practical either: the Internet is a bad neighborhood now, and it won't be long before some bonehead tells the computer to do something like self-destruct, after which it isn't terribly useful to you.

This is no different from our daily lives. We constantly make decisions about what risks we're willing to accept. When we get in a car and drive to work, there's a certain risk that we're taking. It's possible that something completely out of our control will cause us to become part of an accident on the highway. When we get on an airplane, we're accepting the level of risk involved as the price of convenience. However, most people have a mental picture of what an acceptable risk is, and won't go beyond that in most circumstances. If I happen to be upstairs at home and want to leave for work, I'm not going to jump out the window. Yes, it would be more convenient, but the risk of injury outweighs the advantage of convenience.

Every organization needs to decide for itself where between the two extremes of total security and total access it needs to be. A policy needs to articulate this, and then define how it will be enforced with practices and such. Everything that is done in the name of security, then, must enforce that policy uniformly.

As we've seen in our discussion of the Internet and similar networks, connecting an organization to the Internet provides a two-way flow of traffic. This is clearly undesirable in many organizations, as proprietary information is often displayed freely within a corporate intranet (that is, a TCP/IP network, modeled after the Internet, that only works within the organization). In order to provide some level of separation between an organization's intranet and the Internet, firewalls have been employed. A firewall is simply a group of components that collectively form a barrier between two networks. A number of terms specific to firewalls and networking are going to be used throughout this section, so let's introduce them all together.

Types of Firewalls
There are three basic types of firewalls, and we'll consider each of them.

Application Gateways
The first firewalls were application gateways, sometimes known as proxy gateways. These are made up of bastion hosts that run special software to act as a proxy server. This software runs at the Application Layer of our old friend the ISO/OSI Reference Model, hence the name. Clients behind the firewall must be "proxitized" (that is, must know how to use the proxy, and be configured to do so) in order to use Internet services. Traditionally, these have been the most secure, because they don't allow anything to pass by default, but need to have the programs written and turned on in order to begin passing traffic. They are also typically the slowest, because more processes need to be started in order to have a request serviced. Figure 5 shows an application gateway.

Packet Filtering
Packet filtering is a technique whereby routers have ACLs (Access Control Lists) turned on. By default, a router will pass all traffic sent to it, without any sort of restrictions. Employing ACLs is a method for enforcing your security policy with regard to what sorts of access you allow the outside world to have to your internal network, and vice versa. There is less overhead in packet filtering than with an application gateway, because access control is performed at a lower ISO/OSI layer (typically, the transport or session layer). Due to the lower overhead, and the fact that packet filtering is done with routers, which are specialized computers optimized for tasks related to networking, a packet filtering gateway is often much faster than its application layer cousins. Figure 6 shows a packet filtering gateway. Because we're working at a lower level, supporting new applications either comes automatically, or is a simple matter of allowing a specific packet type to pass through the gateway. (Note that the mere possibility of doing something does not automatically make it a good idea; opening things up this way might very well compromise your level of security below what your policy allows.) There are problems with this method, though. Remember, TCP/IP has absolutely no means of guaranteeing that the source address is really what it claims to be. As a result, we have to use layers of packet filters in order to localize the traffic. We can't get all the way down to the actual host, but with two layers of packet filters we can differentiate between a packet that came from the Internet and one that came from our internal network. We can identify which network a packet came from with certainty, but we can't get more specific than that.
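To make the first-match logic of an ACL concrete, here is a minimal sketch in Python. The addresses, ports, and rules are invented for illustration; a real router applies such lists in its firmware, but the rule-walking idea is the same: check rules in order, and fail closed when nothing matches.

```python
from ipaddress import ip_address, ip_network

# A hypothetical ACL. Rules are checked in order; the first match wins.
RULES = [
    ("allow", ip_network("192.168.1.0/24"), 25),  # internal hosts may send mail
    ("deny",  ip_network("0.0.0.0/0"),      23),  # nobody may telnet in
    ("allow", ip_network("0.0.0.0/0"),      80),  # anyone may reach the web server
]

def filter_packet(src_addr: str, dst_port: int) -> str:
    """Return the action prescribed by the first rule matching this packet."""
    addr = ip_address(src_addr)
    for action, network, port in RULES:
        if addr in network and dst_port == port:
            return action
    return "deny"  # nothing matched: fail closed, per policy
```

Note the default at the end: a filter that passes unmatched traffic would quietly undermine the security policy the list is meant to enforce.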

Hybrid Systems

In an attempt to marry the security of the application layer gateways with the flexibility and speed of packet filtering, some vendors have created systems that use the principles of both.
In some of these systems, new connections must be authenticated and approved at the application layer. Once this has been done, the remainder of the connection is passed down to the session layer, where packet filters watch the connection to ensure that only packets that are part of an ongoing (already authenticated and approved) conversation are being passed.
Other possibilities include using both packet filtering and application layer proxies. The benefits here include providing a measure of protection for your machines that provide services to the Internet (such as a public web server), as well as providing the security of an application layer gateway to the internal network. Additionally, using this method, an attacker, in order to get to services on the internal network, will have to break through the access router, the bastion host, and the choke router.

So, what's best for me?
Lots of options are available, and it makes sense to spend some time with an expert, either in-house or an experienced consultant, who can take the time to understand your organization's security policy, and can design and build a firewall architecture that best implements that policy. Other issues, like services required, convenience, and scalability, might factor into the final design.

Some Words of Caution
The business of building firewalls is in the process of becoming a commodity market. Along with commodity markets come lots of folks who are looking for a way to make a buck without necessarily knowing what they're doing. Additionally, vendors compete with each other to claim the greatest security, the easiest administration, and the least visibility to end users. In order to try to quantify the potential security of firewalls, some organizations have taken to firewall certifications. The certification of a firewall means nothing more than that it can be configured in such a way that it can pass a series of tests. Similarly, claims about meeting or exceeding U.S. Department of Defense "Orange Book" standards, C-2, B-1, and such, all simply mean that an organization was able to configure a machine to pass a series of tests. This doesn't mean that it was loaded with the vendor's software at the time, or that the machine was even usable. In fact, one vendor that has been claiming its operating system is "C-2 Certified" didn't mention that the operating system passed the C-2 tests only while not connected to any sort of network devices.

Such gauges as market share, certification, and the like are no guarantees of security or quality. Taking a little bit of time to talk to some knowledgeable folks can go a long way in providing you a comfortable level of security between your private network and the big, bad Internet. Additionally, it's important to note that many consultants these days have become much less the advocates of their clients, and more an extension of the vendor. Ask any consultants you talk to about their vendor affiliations, certifications, and whatnot. Ask what difference it makes to them whether you choose one product over another, and vice versa. And then ask yourself whether a consultant who is certified in technology XYZ is going to recommend competing technology ABC, even if ABC best fits your needs.

Single Points of Failure

Many "firewalls" are sold as a single component: a bastion host, or some other black box that you plug your networks into to get a warm fuzzy feeling of being safe and secure. The term "firewall," however, refers to a number of components that collectively provide the security of the system. Any time there is only one component paying attention to what's going on between the internal and external networks, an attacker has only one thing to break (or fool!) in order to gain complete access to your internal networks.

Secure Network Devices
It's important to remember that the firewall guards only one entry point to your network. Modems, if you allow them to answer incoming calls, can provide an easy means for an attacker to sneak around (rather than through) your front door (or firewall). Just as castles weren't built with moats only in the front, your network needs to be protected at all of its entry points.

Secure Modems; Dial-Back Systems

If modem access is to be provided, it should be guarded carefully. The terminal server, or network device that provides dial-up access to your network, needs to be actively administered, and its logs need to be examined for strange behavior. Its passwords need to be strong -- not ones that can be guessed. Accounts that aren't actively used should be disabled. In short, it's the easiest way to get into your network from remote: guard it carefully.
Some remote access systems offer a two-part procedure to establish a connection. The first part is the remote user dialing into the system and providing the correct user id and password. The system will then drop the connection and call the authenticated user back at a known telephone number. Once the remote user's system answers that call, the connection is established, and the user is on the network. This works well for folks working at home, but can be problematic for users wishing to dial in from hotel rooms and such when on business trips.

Other possibilities include one-time password schemes, where the user enters his userid and is presented with a "challenge," a string of between six and eight numbers. He types this challenge into a small device that he carries with him that looks like a calculator. He then presses enter, and a "response" is displayed on the LCD screen. The user types the response, and if all is correct, the login will proceed. These are useful devices for solving the problem of good passwords without requiring dial-back access. However, they have their own problems, as they require the user to carry them, and they must be tracked, much like building and office keys. No doubt many other schemes exist. Take a look at your options, and find out how what the vendors have to offer will help you enforce your security policy effectively.
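One way such a challenge/response token can work is sketched below in Python. This assumes a secret shared between the server and the device and uses an HMAC to derive the six-digit response; the actual hand-held tokens used proprietary algorithms, so treat the key, digest choice, and truncation here as illustrative only.

```python
import hashlib
import hmac

# Hypothetical shared secret, provisioned into the hand-held device.
SECRET = b"shared-token-secret"

def response(challenge: str, key: bytes = SECRET) -> str:
    """Compute the six-digit response the token would display for a challenge."""
    digest = hmac.new(key, challenge.encode(), hashlib.sha256).digest()
    # Reduce four bytes of the digest to six decimal digits for easy typing.
    return format(int.from_bytes(digest[:4], "big") % 1_000_000, "06d")

def verify(challenge: str, answer: str) -> bool:
    """Server-side check: does the typed answer match the expected response?"""
    return hmac.compare_digest(response(challenge), answer)
```

Because the challenge changes every time, a snooped response is useless for the next login, which is exactly the property that makes these devices attractive.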

Crypto-Capable Routers
A feature being built into some routers is the ability to perform session encryption between specified routers. Because traffic traveling across the Internet can be seen by people in the middle who have the resources (and time) to snoop around, these are advantageous for providing connectivity between two sites such that there can be secure routes.

Don't put data where it doesn't need to be
Although this should go without saying, it doesn't occur to lots of folks. As a result, information that doesn't need to be accessible from the outside world sometimes is, and this can needlessly increase the severity of a break-in dramatically.

Avoid systems with single points of failure

Any security system that can be broken by breaking through any one component isn't really very strong. In security, a degree of redundancy is good, and can help you protect your organization from a minor security breach becoming a catastrophe.

Stay current with relevant operating system patches
Be sure that someone who knows what you've got is watching the vendors' security advisories. Exploiting old bugs is still one of the most common (and most effective!) means of breaking into systems.
Watch for relevant security advisories

In addition to watching what the vendors are saying, keep a close watch on groups like CERT and CIAC. Make sure that at least one person (preferably more) is subscribed to these mailing lists.

Have someone on staff be familiar with security practices
Having at least one person who is charged with keeping abreast of security developments is a good idea. This need not be a technical wizard, but could be someone who is simply able to read advisories issued by various incident response teams, and keep track of various problems that arise. Such a person would then be a wise one to consult on security-related issues, as he'll be the one who knows whether web server software version such-and-such has any known problems, etc. This person should also know the "dos" and "don'ts" of security, from reading such things as the "Site Security Handbook."[5]

Bastion host.
A general-purpose computer used to control access between the internal (private) network (intranet) and the Internet (or any other untrusted network). Typically, these are hosts running a flavor of the Unix operating system that has been customized in order to reduce its functionality to only what is necessary in order to support its functions. Many of the general-purpose features have been turned off, and in many cases, completely removed, in order to improve the security of the machine.

Router.
A special-purpose computer for connecting networks together. Routers also handle certain functions, such as routing, or managing the traffic on the networks they connect.

Access Control List (ACL)
Many routers now have the ability to selectively perform their duties, based on a number of facts about a packet that comes to it. This includes things like origination address, destination address, destination service port, and so on. These can be employed to limit the sorts of packets that are allowed to come in and go out of a given network.
Demilitarized Zone (DMZ).
The DMZ is a critical part of a firewall: it is a network that is neither part of the untrusted network nor part of the trusted network, but one that connects the untrusted to the trusted. The importance of a DMZ is tremendous: someone who breaks into your network from the Internet should have to get through several layers in order to succeed. Those layers are provided by various components within the DMZ.
Proxy. This is the process of having one host act on behalf of another. A host that has the ability to fetch documents from the Internet might be configured as a proxy server, and hosts on the intranet might be configured as proxy clients. In this situation, when a host on the intranet wishes to fetch the <> web page, for example, the browser will make a connection to the proxy server and request the given URL. The proxy server will fetch the document and return the result to the client. In this way, all hosts on the intranet are able to access resources on the Internet without having the ability to talk directly to the Internet.
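On the client side, being "proxitized" often amounts to little more than telling the HTTP library where the proxy server lives. A minimal Python sketch follows; the proxy host name and port are invented:

```python
import urllib.request

def make_proxied_opener(proxy_url: str = "http://proxy.example.com:8080"):
    """Build an opener that sends every request via the given proxy server."""
    handler = urllib.request.ProxyHandler({"http": proxy_url, "https": proxy_url})
    return urllib.request.build_opener(handler)

# A proxy client would then fetch a page with, e.g.:
#   opener = make_proxied_opener()
#   page = opener.open("http://www.example.com/").read()
```

The fetch itself is left commented out, since it would require a live proxy; the point is only where the proxy's address enters the picture.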


Security is a very difficult topic. Everyone has a different idea of what "security" is, and what levels of risk are acceptable. The key to building a secure network is to define what security means to your organization. Once that has been defined, everything that goes on with the network can be evaluated with respect to that policy. Projects and systems can then be broken down into their components, and it becomes much simpler to decide whether what is proposed will conflict with your security policies and practices.

Many people pay great amounts of lip service to security, but do not want to be bothered with it when it gets in their way. It's important to build systems and networks in such a way that the user is not constantly reminded of the security system around him. Users who find security policies and systems too restrictive will find ways around them. It's important to get their feedback to understand what can be improved, and it's important to let them know why what's been done has been done, the sorts of risks that are deemed unacceptable, and what has been done to minimize the organization's exposure to them. Security is everybody's business, and only with everyone's cooperation, an intelligent policy, and consistent practices will it be achievable.

A browser is a program that retrieves web pages from a web server. There are a lot of browsers available these days; some of them are Netscape Navigator, Microsoft Internet Explorer, Lynx, and Apple Cyberdog. Microsoft Internet Explorer and Netscape Navigator are the most widely used browsers these days.

A modem (modulator/demodulator) is an electronic device that is used to connect a computer or an entire network to the Internet. A modem is the basic requirement for working on the Internet. A modem uses telephone lines for the transmission of data. Modems are classified on the basis of their transmission speeds.

Which Modem is Best for You
The industry has done more to confuse users regarding modems in the last two years than any other component of their computers. Essentially, there are so many differing models, protocols, and features that for most of us, getting the right modem, or the best modem, is a shot in the dark. Here are the issues that have caused the confusion, and some ideas as to how best to handle them:

Modems are defined by their "speed" of communication with other modems. The term baud is commonly (if loosely) used to mean the number of bits per second that two modems can exchange with each other. So the faster the baud rate, the faster your modem communicates with other modems at the MLS, on the Web, or with a fax machine. Baud rates vary from very slow to very fast. However, there are some practical limits, and some governmental regulations, that affect how fast your modem can operate. In the market today, you can find the following modems (we have given them some "plain language" ratings for you as well):
14,400 baud    Slow modem, over 4 years old
28,800 baud    Average modem in most computers today
33,600 baud    Average modem sold in new models today
56,000 baud    "Fastest" phone-line modem in the market today; requires a special modem and phone line from the phone company
Cable Modem    Requires a special modem and cable line from the cable company

How do you choose the best modem?
Let's look at some of the criteria and issues with each of these modems:

14,400 baud: If you have one of these modems, you need to replace it quickly. It is far too slow to receive Web information efficiently (unless you only search for text-based documents and use plain-text email) and will frustrate you when you need to locate information quickly. Under no circumstances should you purchase a 14,400 baud modem today.

28,800 baud: A fast modem which is probably found in most Pentium-based computers and laptops in homes and offices today. It provides twice the speed of the 14,400 baud modem and surfs the Web and most MLS systems quickly enough to provide efficient use to most business professionals. The 28,800 is considered the "minimum" modem you should buy if you are buying a new computer today.

33,600 baud: This "slight" upgrade to the 28,800 baud modem offers a marginal improvement at a marginal price increase. Many MLS services do not offer 33.6 speeds, so the upgrade may be unnecessary for real estate professionals. Additionally, users of the 28.8 modem should not rush out to replace their modems with the 33.6 simply because its speed improvements are so slight that they do not justify the replacement costs of a new modem.

56k baud: Now here's a really odd modem... To start with, it really does not communicate at 56k, since the FCC has limited data telecommunication speeds to 53k by law! And when it does communicate at 56k, the speed increase is achieved only "towards" you. So if you are browsing, the speed at which web pages appear on your screen is enhanced. Of course, if you are sending out lots of email or forwarding listing sheets or other communications to prospects, the speed remains at 28,800 (or so), so the improvement in speed is marginal over the standard 28.8 modem!

But wait, it gets worse (or funnier!): the 56k modem only works under ideal conditions of telephone line clarity and phone company equipment. Most phone lines are too "noisy" for 56k modems to sustain the performance, so most users usually achieve only 44k as an average speed. And if the phone company in your area does not have a modern phone switch (a special type of routing computer at the nearest branch to your location), then 56k will never pass between your modem and the other location (such as a web site). It also depends upon whether or not your Internet Service Provider even supports 56k speeds, which many do not, so your new modem may only get 33.6 speeds from your provider even with ideal line conditions!

Finally (yes, there is more!), there have been two competing 56k modem standards! Remember the initial days of VCR machines, with Beta and VHS? Well, the same thing happened to modems: the x2 and K56flex competing standards prevented many modems from talking to each other because of proprietary standards! And ISPs who chose one standard might have caused problems for users who had purchased modems built to the other standard!
To top it off, when the industry finally selected a standard this year, it chose an entirely different standard (called v.90) than the two already competing in the market, which means we have to go through a whole round of upgrades and changes again this fall! And some of the existing 56k modems are not upgradable to the new standard!

So what should you do???
Here's my best pick: go with a v.90 56k modem! Why? Well, 56k modems are standard on most models today, even if they rarely operate at top speeds, and maybe the phone companies and ISPs will catch up soon. Going to a slower modem like the 33.6 is not worth the money savings, because if the systems get better, you will have to replace it anyway! Other things to remember... Keep your eye out for another modem trap in the industry called the Windows modem. This modem only works with Windows applications and may cause you problems if you run DOS programs (like older MLS software). And since it is only about $20 cheaper than a "real" modem, I recommend you go for the full-featured modems!

How do computers recognize different computers on other networks
Computers communicate with each other the way people do, i.e., by using addresses; every computer has its own IP address, MAC address, and computer name. When a computer needs to send some data to another computer, the data is broken into packets, an IP header is added, and the packets are sent to their destination. This process is done by TCP/IP. When the computer that is to receive the data is in another zone or a distant network, the sender computer must first know the gateway address, i.e., the default gateway. No computer can send data to another computer unless and until the address of the receiver computer is known.
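The packetizing step described above can be sketched in Python. The header layout, addresses, and chunk size here are invented and far simpler than a real IP header, but the idea is the same: split the data into chunks and label each one with its source, destination, and sequence number so the receiver can put them back in order.

```python
def packetize(data: bytes, src: str, dst: str, size: int = 8) -> list:
    """Break data into chunks and prepend a tiny header with the addresses."""
    packets = []
    for seq, start in enumerate(range(0, len(data), size)):
        header = f"{src}>{dst}#{seq}|".encode()  # source, destination, sequence
        packets.append(header + data[start:start + size])
    return packets

def reassemble(packets: list) -> bytes:
    """Strip the headers and glue the payloads back together, in order."""
    return b"".join(p.split(b"|", 1)[1] for p in packets)
```

In real TCP/IP the header also carries checksums and ports, and packets may arrive out of order, which is why the sequence number matters.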

Who governs the Internet
The Internet has no president or chief operating officer, and is governed by a number of authorities. The ultimate authority of the Internet rests with the Internet Society (ISOC), a voluntary membership organization. The purpose of this organization is to promote the global interchange of information. The Internet Architecture Board (IAB) sets standards and assigns Internet addresses. The Internet Engineering Task Force (IETF) discusses the technical and operational problems of the Internet.

What can one do on Internet
· Publish research information.
· Use it for teaching, e.g., teaching C++ on the Internet.
· Use it with ISDN for multimedia conferencing.
· Refer to the pictures of an art gallery.
· Have an electronic copy of classics, e.g., Alice in Wonderland.
· Have an electronic copy of journals and magazines.
· Publicity and advertisement.
· Be in touch with people worldwide.
· Search for a job.
· Watch movies.
· Send mail across the oceans in no time.

One of the very useful things about the Internet is that it allows almost instant exchange of electronic messages across the world. E-mail is a popular way of communicating on the electronic frontier; you can e-mail a friend, a researcher, or anybody to get a copy of a selected paper. E-mail is mainly used for sending electronic pieces of text.
Another exciting aspect of e-mail is that you can find groups of people who share your interests, whether you are inclined towards research, games, or astronomy.
To gain access to e-mail, there exist a lot of services that offer free e-mail, such as Yahoo and Hotmail. You can access Yahoo at and Hotmail at <>.

How to create your own E-mail account
There are a lot of services available on the net that provide the facility of accessing e-mail and creating your own e-mail account. Besides some commercial services, there exist a good number that offer e-mail for free, e.g., Yahoo and Hotmail.

Usenet and newsgroups
On the Internet there exist other ways to meet people and share information. One such way is through Usenet newsgroups. Usenet can be considered another global network of computers. However, Usenet does not operate interactively like the Internet; instead, Usenet machines store the messages sent by users. Unlike mail from mailing lists, news articles do not automatically fill your electronic mailbox. For accessing information on Usenet, one needs a special type of program called a news reader; this program helps in retrieving only the news you want from a Usenet storage site and displays it on your terminal.

FAQ (Frequently Asked Questions)
A great resource offered by Usenet is the FAQ, i.e., the list of frequently asked questions, and the responses to them, for a particular newsgroup. FAQs are an excellent starting place for learning about a topic. Some FAQs go so far as to provide an annotated bibliography. Before asking any question, one should ensure that the question has not already been answered in the document. You should also answer the questions that others ask. It is only people interacting with each other that has made Usenet the amazing information resource that it is.

Transferring files with ftp
There exists a standard tool on the Internet for transferring copies of files. This program is called ftp. ftp can be used to copy any file from one Internet host to another. However, you need an account name and password on the host. If you do not have an account on a remote Internet host, then ftp recognizes a special account name called anonymous. ftp can download and upload files as per the requirements of the user. One cannot open a file directly while connected through ftp.
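Python's standard ftplib module follows this scheme; calling login() with no arguments uses the special anonymous account described above. The host and file names below are invented, and the function is only defined here, not run, since it needs a live server:

```python
from ftplib import FTP

def fetch_file(host: str, remote_name: str, local_name: str) -> None:
    """Download one file from an anonymous ftp server."""
    with FTP(host) as ftp:
        ftp.login()  # no arguments: logs in as the "anonymous" account
        with open(local_name, "wb") as out:
            ftp.retrbinary(f"RETR {remote_name}", out.write)

# Usage (not run here): fetch_file("ftp.example.com", "README", "README")
```

Opening the local file in binary mode matters: retrbinary transfers the file byte-for-byte, which is what you want for programs and images as well as text.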

Telnet
One aspect of the interconnectedness of the net is that you can log into a remote computer directly from your own computer. With Telnet, you can log into any computer or network for which you have a password, as well as thousands of public sites where passwords are not required. Many university libraries now make their catalogs available by Telnet, as do countless other repositories of useful information.

How to search for a particular thing
The easiest way to search for a page or piece of information that interests you is Yahoo.com. This service provides information about most pages. You can look for certain information either in the entire Yahoo directory or within the current category of Yahoo.
You can also use keywords to search for a particular topic, i.e., you can type in a few words related to the topic you are looking for and click Search.

What is a search engine:
A search engine is a website designed to perform searches of the Net. A search engine, unlike Yahoo's directory, searches the entire net. The most popular search engines are Lycos and AltaVista, available at and

Bookmarks
Sometimes when you roam around the web you come across some interesting destinations and want to save them so that you can access these destinations in future without any hindrance. You can do so by making bookmarks (also called favorites or favorite places in some browsers). A bookmark list is like an address book of your friends.

Sending Mail to multiple recipients:
Sometimes you may want to send a message to more than one recipient. This can usually be done in one of several ways. Most programs allow you to list multiple recipients in the To: line. The Cc: line in an e-mail message is for people whom you want to receive a copy of the message, but who are not the primary recipients.
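With Python's standard email module, for instance, multiple recipients and a carbon copy are just header fields; all of the addresses below are invented:

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.com"
msg["To"] = "first@example.com, second@example.com"  # primary recipients
msg["Cc"] = "observer@example.com"                   # receives a copy only
msg["Subject"] = "Meeting notes"
msg.set_content("Notes from today's meeting are below.")
# smtplib's send_message() would deliver to everyone listed in To: and Cc:.
```

The To:/Cc: distinction is purely social, not technical: everyone named in either header receives the same message.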

Forwarding the message:
If someone sends you mail and you would like to send a copy of it to someone else, with most mail programs you can select a Forward command. But please take care: never send mail to a third party without the permission of the original sender.

Data communication:
The first step towards understanding communication is to look at computer data at its most basic level. As all of us know, computers and computer devices manage, store, and exchange data using electronic pulses or digital signals, which come in two varieties: the binary digit '0' indicates the absence (OFF) of electric current, and '1' indicates its presence (ON). A series of ONs and OFFs in various combinations can be sent over the communication channels to represent any character.
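The correspondence between a character and its pattern of ONs and OFFs can be shown directly in Python; the ASCII letter 'A', for example, is the value 65, stored as the pattern 01000001:

```python
def to_bits(ch: str) -> str:
    """Render a character as its 8-bit pattern of 1s (ON) and 0s (OFF)."""
    return format(ord(ch), "08b")

def from_bits(bits: str) -> str:
    """Recover the character from its bit pattern."""
    return chr(int(bits, 2))

# to_bits("A") gives "01000001"; from_bits("01000001") gives back "A"
```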

Communication speed:
The speed at which two computers exchange or transmit data is called the communication rate or transmission speed. Speed is measured in bps (bits per second) or baud. Normal PC-based communication sessions transfer data at 300 to 9,600 bps, whereas mainframe computers use 19,200 baud or more.
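The practical meaning of these numbers is easy to work out. Assuming roughly 10 bits on the line per character (8 data bits plus start and stop bits, a common serial-line convention), transfer time follows directly:

```python
BITS_PER_CHAR = 10  # 8 data bits + start and stop bits on a serial line

def transfer_seconds(num_chars: int, bps: int) -> float:
    """Seconds needed to move num_chars characters at the given line speed."""
    return num_chars * BITS_PER_CHAR / bps

# A 24,000-character document takes 800 seconds at 300 bps,
# but only 25 seconds at 9,600 bps.
```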
Analog and Digital transmission:
One of the fundamental concepts in data transmission is to understand the difference between the Analog and Digital signals.
· An analog signal is one that is continuous with respect to time, and may take on any value within a given range of values. Human voice, video, and music, when converted to electrical signals using suitable devices, produce analog signals.
· A digital signal may take on only a discrete set of values within a given range. Most computers and computer-related equipment are digital.

Chatting
Chatting is a form of immediate communication. With a chat program, you join conversations, and then whatever you type appears on the screen of everyone else who is participating in, or listening in on, the conversation. Chatting is not unlike talking over the telephone with Teletype machines. Unlike e-mail, chatting takes place 'live', in what Internet folks call real time, meaning both people participate at the same time. There exist a lot of chat programs, e.g., irc, ircle, mIRC, Netcruiser, and Netscape Chat; mIRC is a Windows program.
Developing a web page
There are various ways to develop a web page, but Netscape Composer provides the simplest.
The various steps involved in the development of a web page are:
· Execute the Netscape Communicator.
· Type the contents of the front page.
· Save the file with some file name.
· Run the file through Netscape Communicator.
A number of similar pages can be developed in the same way. The pages can also be linked by inserting links from the Insert option of the main menu. Images can likewise be included in a page by inserting them from the Insert option of the main menu.
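The same result can be had by hand: a web page is just a text file of HTML that any browser (or Composer) can open. A minimal Python sketch that writes such a front page follows; the title, link target, and image name are invented for illustration:

```python
# A minimal front page with one link and one image, as in the steps above.
PAGE = """<html>
<head><title>My Front Page</title></head>
<body>
<h1>Welcome</h1>
<p>See <a href="other.html">another page</a> or an image:
<img src="photo.gif"></p>
</body>
</html>
"""

with open("index.html", "w") as f:
    f.write(PAGE)  # any browser can now open index.html
```

Running the file through a browser, exactly as in the Composer steps, then shows the rendered page.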


Literature search

Im (PubMed)

BiomedNet Search

Electronic Journals

Patent Search
Patent and Know-how Information (NIC)

US Patent Search

European Patents
<http://www.european> Patents
Important Databases in Biology

Nucleotide Sequences


Protein Sequences

PIR: <>
PDB: <>
SCOP: <>
ProDom : <>

Other Databases


Public Domain Resources in Biology


Distribution procedure
Web Browser:
Ftp client: ftp://