Information and Communication Technology (Study Material)

Important Points

  • PASCAL programming language was designed by Niklaus Wirth in 1971.
  • FORTRAN was developed by a team of programmers at IBM led by John Backus in 1957.
  • BASIC was designed by John G. Kemeny and Thomas E. Kurtz in 1964.
  • The initial work on the design of COBOL was started in 1959 under the leadership of Grace Hopper.
  • C language was developed in 1972 at AT&T's Bell Laboratories, USA, by Dennis Ritchie. C++, an extension of C, was developed by Bjarne Stroustrup at Bell Labs in the early 1980s.
  • Java, another high-level language, was developed by a team led by James Gosling. The language was formally announced in May 1995, and its first commercial release was made in early 1996. Java uses the concept of just-in-time compilation.
  • RPG, another high-level language, stands for Report Program Generator. IBM developed the language and launched it in 1961 for use on the IBM 1401 computer.
  • SNOBOL stands for String Oriented Symbolic Language. It is another language for non-numeric applications.
  • LISP stands for List Processing. It was developed in 1959 by John McCarthy of MIT. His goal was to develop a language that is good at manipulating non-numeric data such as symbols and strings of text. Such data handling capability is needed in compiler development and in Artificial Intelligence (AI) applications.
Operating System (OS)

An operating system is a collection of programs which controls the overall operation of a computer. The operating system acts as an intermediary between the user and the computer. The role of the operating system is similar to that of the head of a family or the manager of a company. The key functions of an operating system are:

i. It provides an environment in which users and application software can do work.

ii. It manages different resources of the computer, like CPU time, memory space, file storage, I/O devices, etc. As programs and users share the computer, the operating system allocates these resources efficiently whenever they are required.

iii. It controls the execution of different programs to prevent the occurrence of errors.

iv. It provides a convenient interface to the user in the form of commands and a graphical interface, which facilitates the use of the computer.

Some available operating systems are WINDOWS 7, Windows XP, Linux, Mac OS X Snow Leopard, Microsoft Disk Operating System (MS-DOS).

Let's discuss some popular operating systems.

UNIX

Unix is a family of multitasking, multiuser computer operating systems that derive from the original AT&T Unix, whose development started in the 1970s at the Bell Labs research center by Ken Thompson, Dennis Ritchie, and others. It was the first operating system to be written in a high-level language, C.

MS-DOS

MS-DOS was developed by Microsoft Inc. in 1981 and has been one of the most widely used operating systems for IBM-compatible microcomputers. Because of its popularity, Microsoft later decided to launch the independent Microsoft Windows operating system in the 1990s.

Structure of MS DOS

MS-DOS is partitioned into many layers. These layers separate the kernel logic of the operating system, and the user's view of the system, from the hardware on which it runs. These layers are:

i. BIOS (basic input/output system)

ii. DOS kernel

iii. Command Processor (Shell)

i. BIOS

Every computer system comes with its own copy of the BIOS, which is provided by the manufacturer of the computer system. It holds the default resident hardware-dependent drivers for the following devices:

  • Console display and keyboard (CON)
  • Date and time (Clock device)
  • Line printer (PRN)
  • Auxiliary device (AUX)
  • Boot disk device

 

ii. DOS kernel

The DOS kernel is the part of the operating system most used by application programs. It is provided by Microsoft Corporation itself and contains a large number of hardware-independent services. These services are called system functions. It performs the following functions:

  • File management
  • Record management
  • Memory management
  • Character device input/output
  • Access to the real time clock

iii. Command processor

One of the fundamental tasks of a shell is to load a program into memory on request and pass control of the system to the program so that the program can execute. When the program terminates, control returns to the shell, which prompts the user for another command. In addition, the shell usually includes functions for file and directory maintenance and display. In theory, most of these functions could be provided as programs, but making them resident in the shell allows them to be accessed more quickly. The tradeoff is memory space versus speed and flexibility. Early microcomputer-based operating systems provided a minimal number of resident shell commands because of limited memory space; modern operating systems such as MS-DOS include a wide variety of these functions as internal commands.

Microsoft Windows

Microsoft Windows operating system was developed by Microsoft to overcome the limitations of its own MS-DOS operating system. The first successful version of this operating system was Windows 3.0, released in 1990. Subsequently released versions were Windows 95, Windows 98, Windows 2000, Windows XP, Windows XP Professional, and Windows Vista. The numbers associated with some of these released versions indicate their year of release. It is a single-user, multitasking operating system; that is, a user may run more than one program at a time.

Microsoft Windows NT

Microsoft Windows NT (New Technology) is a 32-bit multi-user, timesharing operating system developed by Microsoft. It was designed to have UNIX-like features so that it could be used for powerful workstations, networks, and database servers. Like UNIX and Linux, Windows NT and its subsequent versions have native support for networking and network services. Such operating systems are classified as Network Operating Systems (NOS). Unlike UNIX, its native interface is a GUI.

LINUX

Linux  is a Unix-like computer operating system assembled under the model of free and open-source software development and distribution. Linux was originally developed for personal computers based on the Intel x86 architecture, but has since been ported to more platforms than any other operating system.

Operating System Techniques

Multiprogramming

It is the process by which the CPU works on two or more programs concurrently, switching among them. Examples of operating systems supporting multiprogramming are OS/2, UNIX, and Macintosh System 7.

Multiprocessing

It refers to the use of two or more CPUs to perform a coordinated task simultaneously. Examples are MVS, VMS, and Windows NT.

Multitasking

It refers to the ability of an operating system to execute two or more tasks concurrently. In a multitasking environment, the user can open new applications without closing the previous ones, and information can be easily moved among a number of applications.

Multithreading

Threads are a popular way to improve application performance. In traditional operating systems, the basic unit of CPU utilization is a process. Each process has its own program counter, its own register states, its own stack, and its own address space (the memory area allocated to it). On the other hand, in operating systems with a threads facility, the basic unit of CPU utilization is a thread. In these operating systems, each thread of a process has its own program counter, register states, and stack, while all threads of the process share the process's address space.
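A minimal sketch of this, assuming Python as the illustration language: several threads run the same function on their own stacks, yet all append to one list because they share the process's address space.

```python
# Illustrative sketch: four threads of one process updating a
# shared list, showing that threads share the address space.
import threading

shared = []                  # lives in the process's address space
lock = threading.Lock()      # serialize access to the shared list

def worker(name):
    # each thread runs this function on its own stack,
    # with its own program counter and register state
    with lock:
        shared.append(name)

threads = [threading.Thread(target=worker, args=(f"t{i}",)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(shared))  # ['t0', 't1', 't2', 't3']
```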

Internet

The Internet is a global computer network, often called the network of networks. The Internet is a global TCP/IP-based network. It links a large number of autonomous systems, intranets, internets, LANs, MANs, and WANs.

The Internet has its roots in the ARPANET system of the Advanced Research Projects Agency of the U.S. Department of Defense. ARPANET was the first WAN and had only four sites in 1969. The Internet evolved from the basic ideas of ARPANET for interconnecting computers and was initially used by research organisations and universities to share and exchange information. In 1989, the U.S. Government lifted restrictions on the use of the Internet and allowed it to be used for commercial purposes as well. Since then, the Internet has grown rapidly to become the world's largest network.

Who Manages the Internet?

Some groups have been formed to take care of the shared resources of the Internet. One such body is called the IAB (Internet Architecture Board), earlier called the Internet Activities Board, as named by ARPA.

There are two main wings to this Board:

i. IETF (Internet Engineering Task Force)

ii. IRTF (Internet Research Task Force)

The IETF consists of network designers, operators, vendors, and researchers concerned with the evolution of the Internet architecture and the smooth operation of the Internet. The IETF is responsible for defining standards and maintains documentation of the Internet known as RFCs (Requests for Comments).

The IRTF looks into long-term research problems, many of which are at times critical to the Internet.

IAB oversees the IETF and IRTF. It also ratifies any major change to the Internet that comes from the IETF.

How Does the Internet Work?

All computers and other equipment within any given network are basically connected to each other with the help of cables. Messages travel across the network with the help of networking protocols. The protocols used over the Internet provide addresses for the computers attached to the physical network. In this way, different types of networks communicate with each other using the same protocol. To interpret the information being transmitted, it is essential that the right software and hardware be in place. The commonly used protocols are:

  • Internet Protocol (IP)
  • Transmission Control Protocol (TCP)

Together they are called the TCP/IP protocol suite.

Protocols

A protocol is a set of rules that governs data communications. A protocol defines the method of communication: what to communicate, how to communicate, and when to communicate. The important elements of a protocol are:

1. Syntax

2. Semantics

3. Timing

 

1. Syntax

Syntax means the format of the data, or the structure of how it is presented. For example, the first eight bits may be the sender address, the next eight bits the receiver address, and the rest of the bits the message data.
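The frame layout in the example above can be sketched in Python; the two-address-plus-payload format is purely illustrative, not a real protocol:

```python
# Hypothetical frame layout following the syntax example above:
# first 8 bits sender address, next 8 bits receiver address,
# remaining bits message data.
def build_frame(sender, receiver, payload):
    return bytes([sender, receiver]) + payload

def parse_frame(frame):
    return frame[0], frame[1], frame[2:]

frame = build_frame(0x12, 0x34, b"hello")
sender, receiver, data = parse_frame(frame)
print(sender, receiver, data)  # 18 52 b'hello'
```

Both sides must agree on this layout: the syntax tells the receiver which bits to read as addresses and which as data.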

2. Semantics

Semantics is the meaning of each section of bits. For example, the address bits determine the route of transmission or the final destination of the message.

3. Timing

Timing defines when data can be sent and how fast it can be sent.

IP Addresses

An IP address is a numeric identifier assigned to each machine on an IP network. An IP address is a software address, not a hardware address (which is hard-coded into the machine or NIC). An IP address is made up of 32 bits of information. These bits are divided into four parts containing 8 bits each. The 32-bit IP address is a structured, or hierarchical, address. The network address uniquely identifies each network, and every machine on the same network shares that network address as part of its IP address. In the IP address 121.45.67.45, 121.45 is the network address and 67.45 is the node address. The node address is assigned to, and uniquely identifies, each machine on a network. A router may be able to speed a packet on its way after reading only the first bits of the address.
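The network/node split described above can be illustrated with a short sketch, assuming (as in the document's example) that the first two octets form the network part and the last two the node part:

```python
# Split an IPv4 address into network and node parts, following the
# document's example where 121.45 is the network and 67.45 the node.
def split_ip(addr):
    octets = addr.split(".")
    return ".".join(octets[:2]), ".".join(octets[2:])

network, node = split_ip("121.45.67.45")
print(network, node)  # 121.45 67.45
```

In real IP networks the boundary between network and node bits depends on the address class or subnet mask; the fixed two-octet split here only mirrors the example in the text.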

Hosts

Each host on the Internet has a unique TCP/IP address, which is four numbers separated by dots. An example address is 123.45.54.32, which is the TCP/IP address of a computer. The TCP/IP address is a logical (software) address. All four numbers are in the range 0-255. These don't mean much to users, so logical names are allocated to host computers as well; 123.45.54.32 may also be known as xli.cet.ac.au. Communication is established between a user and a host computer by using the TCP/IP address. Data is sent from the user with the destination address being that of the host computer. The host computer, when sending data back to the user, specifies the destination address of the user.

Servers

Servers are host computers which provide a service to users. An example of a server's service could be the storage and retrieval of files and documents. Other types of servers are WWW (World Wide Web) servers, FTP (File Transfer Protocol) servers, gopher servers, mail servers, and news servers. Each server uses a specific protocol, or method of communication, based on TCP/IP. For example, WWW servers use the HTTP protocol, mail servers use the SMTP protocol, and news servers use the NNTP protocol.

Hyper-links

A hyperlink is a clickable link to another document or resource. It is normally shown as blue underlined text. When a user clicks on this link, the client retrieves the document associated with that link by requesting it from the designated server on which it resides.

Example of hyperlink : http://www.simplinotes.com/

URLs

A Uniform Resource Locator is a means of specifying the path name for any resource on the Internet or an intranet. It consists of three parts:

A protocol

A host part

A document name

For example, http://www.cet.ac.nz/smac/csware.htm specifies the protocol as http, the host or WWW server as www.cet.ac.nz, and the document as /smac/csware.htm.
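The three URL parts described above can be extracted with Python's standard library, using the example URL from the text:

```python
# Split a URL into the three parts described above:
# protocol, host part, and document name.
from urllib.parse import urlparse

url = "http://www.cet.ac.nz/smac/csware.htm"
parts = urlparse(url)
print(parts.scheme)  # http              (the protocol)
print(parts.netloc)  # www.cet.ac.nz     (the host part)
print(parts.path)    # /smac/csware.htm  (the document name)
```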

File Transfer Protocol (FTP)

File Transfer Protocol service (known as FTP in short) enables an Internet user to move a file from one computer to another on the Internet. A file may contain any type of digital information – a text document, image, artwork, movie, sound, software, etc. Moving a file from a remote computer to one's own computer is known as downloading the file, and moving a file from one's own computer to a remote computer is known as uploading the file.

By using the FTP service, a file transfer takes place in the following manner:

  1. A user executes the ftp command on his/her local computer, specifying the address of the remote computer as a parameter.
  2. An FTP process running on the user's computer (called the FTP client process) establishes a connection with an FTP process running on the remote computer (called the FTP server process).
  3. The system then asks the user to enter his/her login name and password on the remote computer, to ensure that the user is authorized to access the remote computer.
  4. After successful login, the desired file(s) are downloaded or uploaded by using the get (for downloading) and put (for uploading) commands. The user can also list directories, or move between directories of the remote computer, before deciding which file(s) to transfer.
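The steps above can be sketched with Python's standard ftplib module. The host name, credentials, and file names below are placeholders, so the function is shown but not run against a real server:

```python
# Minimal sketch of the FTP session steps above. The host and
# credentials are placeholder values, not a real server.
from ftplib import FTP

def download_file(host, user, password, remote_path, local_path):
    with FTP(host) as ftp:         # steps 1-2: connect to the remote host
        ftp.login(user, password)  # step 3: authenticate
        with open(local_path, "wb") as f:
            # step 4: 'get' corresponds to the FTP RETR command
            ftp.retrbinary(f"RETR {remote_path}", f.write)

# Example call (placeholder host, not executed here):
# download_file("ftp.example.com", "user", "secret", "notes.txt", "notes.txt")
```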

HTML (Hypertext Markup Language)

It is a computer language used to prepare Web pages. Hypertext is text with extra features like formatting, images, multimedia, and links to other documents. Markup is the process of adding extra symbols to ordinary text. Each symbol used in HTML has its own syntax and rules. HTML is not a programming language; it is a markup language. It classifies the parts of a document according to their function. In other words, it indicates which part is the title, which part is a subheading, which part is the name of the author, and so on.
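A small sketch can show how HTML marks up the function of each part of a document, and how a parser recovers that structure; the page content here is made up for illustration:

```python
# Parse a tiny (hypothetical) HTML page and list the tags that
# classify each part of the document by its function.
from html.parser import HTMLParser

page = ("<html><head><title>My Page</title></head>"
        "<body><h1>Heading</h1><p>Some text.</p></body></html>")

class TagCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags = []
    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

collector = TagCollector()
collector.feed(page)
print(collector.tags)  # ['html', 'head', 'title', 'body', 'h1', 'p']
```

Each tag (`title`, `h1`, `p`) names the role of the enclosed text rather than prescribing computation, which is why HTML is a markup language and not a programming language.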

Telnet

Telnet service enables an Internet user to log in to another computer on the Internet from his/her local computer. That is, a user can execute the telnet command on his/her local computer to start a login session on a remote computer. This action is also called "remote login." To start a remote login session, a user types the command telnet and the address of the remote computer on his/her local computer terminal. The system then asks the user to enter a login name (user ID) and a password; that is, the remote computer authenticates the user to ensure that he/she is authorized to access it. If the user specifies a correct login name and password, he/she is logged on to the remote computer. Once a login session is established with the remote computer, telnet enters input mode, and anything typed on the terminal of the local computer by the user is sent to the remote computer for processing.

Some common uses of the telnet service are:

  1. Using the computing power of a remote computer. The local computer may be an ordinary personal computer and the remote computer may be a powerful supercomputer.
  2. Using software on a remote computer. Software that a user wants to use may not be available on his/her computer.
  3. Accessing a remote computer's database or archive. An information archive of interest to a user, such as a public database or library resources, may be available on the remote computer.
  4. Logging in to one's own computer from another computer. For example, if a user is attending a conference in another city and has access to a computer on the Internet, he/she can telnet to his/her own computer and read his/her electronic mail or access some information stored there.

Usenet News

Usenet service enables a group of Internet users to exchange their views/ideas/information on some common topic of interest with all members belonging to the group. Several such groups exist on the Internet and are called newsgroups. For example, a newsgroup named comp.security.misc consists of users having an interest in computer security issues.

A newsgroup is like a large notice board accessible to all members belonging to the group. A member who wants to exchange his/her views/ideas/information with other members creates a specially formatted message and submits it to the usenet software running on his/her own computer. The software posts the message on the virtual notice board. The posted message can be read (seen) from any member's computer belonging to the same newsgroup, just as a notice posted on a notice board can be read by anyone having access to the notice board.

The World Wide Web  

The World Wide Web (called WWW or W3 in short) is the most popular and promising method of accessing the Internet. The main reason for its popularity is the use of a concept called hypertext. Hypertext is a way of information storage and retrieval that enables authors to structure information in novel ways. An effectively designed hypertext document can help a user locate the desired type of information rapidly from the vast amount of information on the Internet. Hypertext documents enable this by using a series of links. A link is shown on screen in multiple ways, such as a labeled button, highlighted text, text in a different color than normal text (if your computer has a color display), or author-defined graphic symbols. A link is a special type of item in a hypertext document connecting the document to another document that provides more information about the linked item. The latter document can be anywhere on the Internet (in the same document in which the linked item is, on the same computer on which the former document is, or on another computer at the other end of the world). By "connect", we mean that a user simply selects the linked item (using a mouse or key command) and the user sees the other document on his/her computer terminal almost immediately.

The WWW uses the client-server model and an Internet protocol called the Hypertext Transport Protocol (HTTP in short) for interaction between computers on the Internet. Any computer on the Internet that uses the HTTP protocol is called a Web server, and any computer accessing that server is called a Web client. Use of the client-server model and HTTP allows different kinds of computers on the Internet to interact with each other. For example, a Unix workstation may be a web server and a Windows PC may be a web client, if both of them use the HTTP protocol for transmitting and receiving information.

HTTP (Hypertext Transfer Protocol)

HTTP is the short form of Hypertext Transfer Protocol. It is the set of rules, or protocol, that governs the transfer of hypertext between two or more computers. The World Wide Web encompasses the universe of information that is available via HTTP.

Hypertext is text that is specially coded using a standard system called Hypertext Markup Language (HTML). The HTML codes are used to create links. These links can be textual or graphic, and when clicked on, they "link" the user to another resource such as another HTML document, a text file, graphics, animation, or sound.

HTTP is based on the client/server principle. HTTP allows "Computer A" (the client) to establish a connection to "Computer B" (the server) and make a request. The server accepts the connection initiated by the client and sends back a response. An HTTP request identifies the resource that the client is interested in and tells the server what "action" to take on the resource/data. This data can be about anything, including HTML, text, images, programs, and sound.

When a user selects a hypertext link, the client program on their computer uses HTTP to contact the server, identify a resource, and ask the server to respond with an action. The server accepts the request and then uses HTTP to respond to or perform the action. The basic functions are:

  • Reading a Web Page
  • Scripting on Websites
  • Loading Markup Pages
  • Acting Like a Browser
  • Posting CGI Requests

HTTP also provides access to other Internet protocols like File Transfer Protocol (FTP), Simple Mail Transfer Protocol (SMTP), Network News Transfer Protocol (NNTP), WAIS, Gopher, Telnet, etc.
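The request/response cycle described above can be sketched with Python's standard library; a throwaway local server on the loopback address stands in for a real web server, so the example is self-contained:

```python
# Sketch of the HTTP request/response cycle: a client connects,
# asks for a resource with GET, and the server responds.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from http.client import HTTPConnection

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>hello</body></html>"
        self.send_response(200)                  # server's response status
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):                # keep the example quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)   # port 0: pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")                 # client identifies the resource
response = conn.getresponse()            # server sends back a response
print(response.status)                   # 200
body = response.read()
server.shutdown()
```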

WWW Browsers

To be used as a web client, a computer needs to be loaded with a special software tool known as a WWW browser (or browser in short). Browsers normally provide the following navigation facilities to help users save time while Internet surfing (the process of navigating the Internet to search for useful information):

i. Unlike FTP and telnet, browsers do not require a user to log in to a server computer remotely, and then to log out again when the user has finished accessing the information stored on the server computer.

ii. Browsers enable a user to visit a server computer's site directly and access information stored on it by specifying its URL (Uniform Resource Locator) address. URL is an addressing scheme used by WWW browsers to locate sites on the Internet.

iii. Browsers enable a user to create and maintain a personal hotlist of favorite URL addresses of server computers that the user is likely to visit frequently in the future. A user's hotlist is stored on his/her local web client computer. Browsers provide hotlist commands to enable a user to add, delete, and update URL addresses in the hotlist, and to select a URL address of a server computer from the hotlist when the user wants to visit that server computer.

iv. Many browsers have a "history" feature. These browsers maintain a history of the server computers visited in a surfing session. That is, they save (cache) in the local computer's memory the URL addresses of server computers visited by a user during a surfing session, so that if the user wants to go back to an already visited server later on (in the same surfing session), the link is still available in the local computer's memory.

v. Browsers enable a user to download (copy from a server computer to the local computer's hard disk) information in various formats (i.e., as a text file, as an HTML file, or as a PostScript file). The downloaded information can be used later (not necessarily in the same surfing session) by the user. For example, downloaded information saved as a PostScript file can later be printed on a PostScript-compatible printer, where even graphics will be reproduced properly.

Domains

Servers or host computers are arranged according to geographical location. For instance, all countries have a suffix, except the USA. New Zealand's suffix is nz, while Canada's is ca. Typically, the host part of the URL consists of a server name, organization name, type of organization, and country name. For example, the server www.cet.ac.nz defines a host called www, belonging to an organization called cet, which is an academic institution located in New Zealand. Similarly, the server www.simplynotes.in defines a host called www, belonging to an organization called simplynotes, which is a commercial organization located in India.

DNS (Domain Name System)

The DNS is a distributed database that resides on multiple machines on the Internet and is used to convert between names and addresses and to provide e-mail routing information. DNS provides the protocol that allows clients and servers to communicate with each other. A domain name is a sequence of labels separated by dots (.).
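The name-to-address conversion can be sketched with the standard socket module. The example resolves "localhost", which is answered locally, so no external DNS server is needed; the domain name used for the label split is the one from the text:

```python
# Sketch of DNS-style name-to-address conversion and of a domain
# name as a dot-separated sequence of labels.
import socket

def resolve(name):
    return socket.gethostbyname(name)

print(resolve("localhost"))  # typically 127.0.0.1

# A domain name is a sequence of labels separated by dots:
labels = "www.simplynotes.in".split(".")
print(labels)  # ['www', 'simplynotes', 'in']
```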

Intranet

Intranet is a private computer network that is maintained by an organization for internal communications. It uses some of the technologies that the Internet uses, like the protocols, software, and servers, but cannot be viewed by an unauthorized person outside the organization. It uses different methods to ensure security over the network, such as access control lists and so on. It is a cheap, fast, and reliable networking system that links offices around the world, making it easy for the users working in a company to communicate with one another and also to access information resources from the Internet.
The Intranet can be LAN-based or WAN-based, depending on how big the organization's network is. An intranet is built from the same concepts and technologies as the Internet, such as client-server computing and the Internet Protocol Suite (TCP/IP). Any of the well-known Internet protocols may be found in an intranet, such as HTTP (web services), SMTP (e-mail), and FTP (file transfer). Internet technologies are often deployed to provide modern interfaces to legacy information systems hosting corporate data. Some of the characteristics that the intranet provides to organisations are as follows:
1. The Intranets allow its users to share the data and workspace, which help users to locate and view information faster as per the authorization levels. It also helps to improve the services provided to the users.
2. The Intranets serve as powerful communication tool within an organisation, among its users across levels, across locations, and across projects.
3. The Intranet helps in electronic communication, for example it allows implementing electronic mode of communication as compared to traditional paper based communication. The web based communication is more effective in terms of cost, effectiveness and, efficiency as compared to older systems.
4. The Intranets are also being used as a platform for developing and deploying applications to support business operations and decisions across the internetworked enterprise.
5. It also helps in maintaining the transparent communication culture by sharing the same information within the intranet.
6. With information easily accessible by all authorized users, teamwork is enabled.
7. It helps in integrating a wide variety of hardware, software, and applications across the organisation's network.
8. For productivity the Intranet technology provides fast information to employees and helps to perform their various tasks with responsibility.
9. An important benefit of Intranet is that it is cost-effective, thus it helps to reduce costs significantly.
10. Within an integrated and distributed computing environment, the Intranet supports active distribution of stored information across different types of operating systems and computer types, from which employees can access information.
11. The Intranet results in distributing information at a lower cost due to the web architecture feature of intranet. As Intranet allows all employees to access data, this helps build team work within the organisation and thus create a platform for collaboration.
12. The Intranet helps in sharing human resource policies and procedures.
13. It acts as a corporate information repository that provides employees access to the company strategy and key business objectives.
14. It remains a powerful tool for knowledge exchange, or facilitation – an area where employees can post questions and seek answers.

Extranet

The Extranet is an extended intranet. It is a private network between the defined users of two or more organisations. The Extranet allows interconnecting two or more intranets using Internet technologies. An extranet is a private network that uses Internet protocols, network connectivity, and possibly the public telecommunication system to securely share part of an organisation's information or operations with suppliers, vendors, partners, customers, or other businesses. An extranet can be viewed as part of a company's intranet that is extended to users outside the company, normally over the Internet. The concept became popular when organisations started building long-term relationships with their business partners. It has also been described as a 'state of mind' in which the Internet is perceived as a way to do business with a pre-approved set of other companies, business-to-business (B2B), in isolation from all other Internet users. In contrast, business-to-consumer (B2C) involves known server(s) of one or more companies communicating with previously unknown consumer users.
Briefly, an extranet can be understood as an intranet mapped onto the public Internet or some other transmission system not accessible to the general public, but managed by more than one company's administrator(s). Some of the characteristics of an extranet are as follows:
1. It allows exchange of large volumes of data between business partners.
2. It allows organizations to collaborate for joint business development.
3. It offers specially negotiated services for the employees from different service providers like insurance, loans, etc.
4. It shares industry news and events with industry users.
5. It can be expensive to implement and maintain within an organization.
6. The security of extranets can be a concern.

E-Mail

Electronic mail service (known as e-mail in short) enables an Internet user to send a mail (message) to another Internet user in any part of the world in a near-real-time manner. An e-mail message takes a few seconds to several minutes to reach its destination, because it travels from one network to another until it reaches its destination.

Functions of E-Mail

  1. Composition of messages
  2. Transfer of messages
  3. Reporting – what messages have been sent and what could not be delivered
  4. Displaying received messages
  5. Disposition – deleting received messages, saving received messages in a folder, or keeping received messages in the user's mailbox
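The composition function above can be sketched with Python's standard email package; the addresses and subject are placeholders, and the transfer step is only indicated in a comment:

```python
# Sketch of the 'composition' function: building a mail message
# with placeholder addresses (example.com is not a real mailbox).
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Meeting notes"
msg.set_content("Hi Bob,\n\nPlease find the notes below.\n")

print(msg["Subject"])  # Meeting notes
# Transfer would then hand this message to an SMTP server, e.g.
# with smtplib.SMTP(...).send_message(msg) (not executed here).
```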

Audio Conferencing

Audio conferencing is the conduct of an audio conference (also called a conference call or audio teleconference) between two or more people in different locations, using a series of devices that allow sounds to be sent and received, for the purpose of simultaneous communication and collaboration.

An audio conference may involve only two parties, or many parties at the same time. Audio conferencing can be conducted either through telephone lines or the Internet, by using devices such as phones or computers.

Video Conferencing

Video Conferencing enables direct face-to-face communication across networks. A video conferencing system has two or more parties in different locations that can communicate using a combination of video, audio, and data. A video conference can be person to person (referred to as “point-to-point”) or can involve more than two people (referred to as “multipoint”), and the video conferencing terminals are often referred to as “endpoints”.

In this form of meeting, participants in remote locations can view each other and carry on discussions via web cameras, microphones, and other communication tools. The following five elements are common to all video conferencing endpoints:

  1. Camera: The camera captures live images to send across the network.
  2. Visual Display: It displays the images of the people taking part in the videoconference.
  3. Audio System: It includes both microphones to capture audio from the endpoint and loudspeakers to play back the audio received from other endpoints across the network connection.
  4. Compression: Videos are very bandwidth-intensive and they take a long time to load. Therefore, video systems include technologies, often referred to as codecs, to compress and decompress video and audio data, allowing transmission across a network connection in near-real time.
  5. User Interface and Control System: The user interface allows users to control interactions, for example, placing calls, storing and locating numbers, and adjusting environment settings such as volume. The control system handles the underlying communication that takes place between endpoints.
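The "Compression" element above can be illustrated with a rough sketch. Real video codecs (such as H.264) are lossy and far more sophisticated; this example only shows the compress-then-decompress round trip that every codec performs, using lossless zlib compression on a synthetic, highly redundant frame. The frame dimensions are illustrative assumptions.

```python
import zlib

# One flat grey 640x480 frame, one byte per pixel (synthetic data).
frame = bytes([128]) * (640 * 480)

compressed = zlib.compress(frame)        # encode before transmission
restored = zlib.decompress(compressed)   # decode at the far endpoint

assert restored == frame                 # lossless round trip
print(len(frame), "->", len(compressed), "bytes")
```

Because the synthetic frame is uniform, it compresses extremely well; real camera frames contain far more variation, which is why video codecs also discard detail the eye cannot perceive (lossy compression) to fit the available bandwidth.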

Video conferencing has many benefits as a tool for both teaching and learning. A key factor is that it provides real-time, visual communication, unlike other communication methods such as e-mail. Video conferencing technology is still in its infancy, and one of its major limitations is the bandwidth (the volume of information per unit time that a computer or transmission medium can handle) available on the Internet. As the protocols and applications for video conferencing develop higher resolution and improved speed, participation will increase.
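A back-of-the-envelope calculation shows why bandwidth is such a limitation for uncompressed video, and why the codecs described earlier are essential. The resolution, colour depth, and frame rate below are illustrative assumptions, not figures from any particular system.

```python
# Bandwidth needed to transmit uncompressed video (illustrative figures).
width, height = 640, 480        # pixels per frame
bits_per_pixel = 24             # 8 bits each for red, green, and blue
frames_per_second = 30

bits_per_second = width * height * bits_per_pixel * frames_per_second
print(bits_per_second / 1_000_000, "Mbps")   # → 221.184 Mbps
```

Over 220 Mbps for a single modest video stream far exceeds most Internet connections, which is why video conferencing systems compress the stream, often by a factor of a hundred or more, before transmission.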

History of Video Conferencing

The first concepts of video conferencing were developed in the 1870s, as part of an extension of audio devices. The first actual development of the video telephone began in the late 1920s, with work at AT&T’s Bell Labs and by John Logie Baird. AT&T experimented with video phones in 1927. In the 1930s, early video conferencing experiments were also conducted in Germany. This early technology included image phones that would send still pictures over phone lines. In the early 1970s, AT&T started using video conferencing with its Picturephone service. However, the widespread adoption of video conferencing really began in the 1980s with the computer revolution. Transmitting video images became practical for personal use once the data communications components were in place, such as the advent of video codecs along with the rise of broadband services such as ISDN. The mobile phone craze helped fuel the popularity of video conferencing.

Webcams began to appear in the early 1990s on university campuses. The first commercial webcam, introduced on the market in August 1994, was called QuickCam, which was compatible with Mac. A PC version was released the following year. Time Magazine named QuickCam one of the top computer devices of all time in 2010.

CU-SeeMe video conferencing software played an important role in the history of video conferencing. It was developed by Cornell University IT department personnel for Mac in 1992 and Windows in 1994. CU-SeeMe helped usher in the first internet radio stations. It was released commercially in 1995. In 2004, many businesses started adopting video conferencing systems for the first time since broadband technology was finally more affordable and widespread.
