Information and Communication Technology (Study Material)

Types of Processor

i. CISC Processors

One of the earlier goals of CPU designers was to provide more and more instructions in the instruction set of a CPU to ensure that the CPU supports more functions directly. This makes it easier to translate high-level language programs to machine language and ensures that the machine language programs run more effectively. Of course, every additional instruction in the instruction set of a CPU requires the necessary hardware circuitry to handle that instruction, adding more complexity to the CPU's hardware circuitry. Another goal of CPU designers was to optimize the usage of expensive memory. To achieve this, the designers tried to pack more instructions into memory by introducing the concept of variable-length instructions such as half-word, one-and-a-half-word, etc. For example, an operand in an immediate instruction needs fewer bits and can be designed as a half-word instruction. Additionally, CPUs were designed to support a variety of addressing modes (discussed later in this chapter during the discussion of memory). CPUs with a large instruction set, variable-length instructions, and a variety of addressing modes are said to employ CISC (Complex Instruction Set Computer) architecture. Since CISC processors possess so many processing features, they make the job of machine language programmers easier. However, they are complex and expensive to produce. Most personal computers of today use CISC processors.
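To make the memory-saving idea behind variable-length instructions concrete, the short C sketch below packs a hypothetical half-word (16-bit) immediate instruction alongside a full-word (32-bit) instruction that carries two memory-operand fields. The opcodes, field widths, and values are invented purely for illustration and do not correspond to any real CISC instruction set.

#include <stdio.h>
#include <stdint.h>

/* Hypothetical encodings for illustration only; not a real ISA.
 * A "word" here is 32 bits; a "half-word" is 16 bits.            */

/* Full-word instruction: 8-bit opcode plus two 12-bit memory-operand fields. */
typedef uint32_t full_word_insn;

/* Half-word immediate instruction: 8-bit opcode plus 8-bit immediate operand.
 * The small immediate fits in fewer bits, so the whole instruction fits in
 * half a word and takes half as much memory.                                 */
typedef uint16_t half_word_insn;

int main(void) {
    /* Pack opcode 0x12 with the small immediate operand 42 into a half-word. */
    half_word_insn add_imm = (half_word_insn)((0x12 << 8) | 42);

    /* Pack opcode 0x12 with two 12-bit operand addresses into a full word. */
    full_word_insn add_mem = ((uint32_t)0x12 << 24) | (0x0ABu << 12) | 0x0CDu;

    printf("half-word immediate insn : 0x%04X (%zu bytes)\n",
           (unsigned)add_imm, sizeof add_imm);
    printf("full-word memory insn    : 0x%08X (%zu bytes)\n",
           (unsigned)add_mem, sizeof add_mem);
    return 0;
}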

ii. RISC Processor

In the early 1980s, some CPU designers realized that many of the instructions supported by a CISC-based CPU are rarely used. Hence, the idea evolved that the design complexity of a CPU can be reduced greatly by implementing only a bare minimum of basic instructions, plus some of the more frequently used instructions, in the hardware circuitry of the CPU. Other complex instructions need not be supported in the instruction set of the CPU because they can always be implemented in software by using the basic set of instructions. While working on this simpler CPU design, the designers also came up with the idea of making all the instructions of uniform length so that the decoding and execution of all instructions becomes simple and fast. Furthermore, to speed up computation and to reduce the complexity of handling a number of addressing modes, they decided to design all the instructions in such a way that they retrieve operands stored in registers in the CPU rather than from memory. These design ideas resulted in faster and less expensive processors. CPUs with a small instruction set, fixed-length instructions, and reduced references to memory to retrieve operands are said to employ RISC (Reduced Instruction Set Computer) architecture. Since RISC processors have a small instruction set, they place extra demand on programmers, who must consider how to implement complex computations by combining simple instructions. However, RISC processors are faster for most applications, less complex, and less expensive to produce than CISC processors because of their simpler design.
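The C sketch below illustrates the RISC idea that a complex operation omitted from the hardware can be synthesized in software from a basic set of instructions: multiplication is built out of nothing more than shifts, additions, and comparisons, with all intermediate values held in register-like local variables. It is only an illustration of the principle, not the code any particular RISC compiler actually emits.

#include <stdio.h>
#include <stdint.h>

/* Synthesize a "complex" multiply from simpler operations (add, shift,
 * compare), the way a compiler or library routine might expand an
 * operation the hardware does not provide directly. The local variables
 * stand in for CPU registers; no memory operands are fetched in the loop. */
static uint32_t multiply_by_shift_add(uint32_t a, uint32_t b) {
    uint32_t product = 0;          /* accumulator "register"       */
    while (b != 0) {
        if (b & 1u)                /* lowest bit of b set?         */
            product += a;          /* add the shifted multiplicand */
        a <<= 1;                   /* shift multiplicand left      */
        b >>= 1;                   /* shift multiplier right       */
    }
    return product;
}

int main(void) {
    printf("13 * 11 = %u\n", multiply_by_shift_add(13, 11));  /* prints 143 */
    return 0;
}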

iii. EPIC Processors

The Explicitly Parallel Instruction Computing (EPIC) technology breaks through the sequential nature of conventional processor architectures by allowing the software to communicate explicitly to the processor when operations can be done in parallel. For this, it uses tighter coupling between the compiler and the processor. It enables the compiler to extract maximum parallelism in the original code and explicitly describe it to the processor. Processors based on EPIC architecture are simpler and more powerful than traditional CISC or RISC processors. These processors are mainly targeted at the next-generation, 64-bit, high-end server and workstation market (not at the personal computer market).

iv. Multicore Processor

Till recently, the approach used for building faster processors was to keep reducing the size of chips while increasing the number of transistors they contain. Although this trend has driven the computing industry for several years, it has now been realized that transistors cannot shrink forever. Current transistor technology limits the ability to continue making single-core processors more powerful due to the following reasons:

i. As a transistor gets smaller, the gate, which switches the electricity ON and OFF, gets thinner and less able to block the flow of electrons. Thus, small transistors tend to use electricity all the time, even when they are not switching. This wastes power.

ii. Increasing clock speeds causes transistors to switch faster, generate more heat, and consume more power.

These and other challenges have forced processor manufacturers to search for a new approach for building faster processors. In the new architecture, a processor chip has multiple cooler-running, more energy-efficient processing cores instead of one increasingly powerful core. The multicore chips do not necessarily run as fast as the highest-performing single-core models, but they improve overall performance by handling more work in parallel. For instance, a dual-core chip running multiple applications is about 1.5 times faster than a chip with just one comparable core.

The operating system (OS) controls the overall assignment of tasks in a multicore processor. In a multicore processor, each core has its own independent cache (though in some designs all cores share the same cache), thus providing the OS with sufficient resources to handle multiple applications in parallel. When a single-core chip runs multiple programs, the OS assigns a time slice to work on one program and then assigns different time slices to the other programs. This can cause conflicts, errors, or slowdowns when the processor must perform multiple tasks simultaneously. However, multiple programs can be run at the same time on a multicore chip, with each core handling a separate program. The same logic holds for running multiple threads of a multithreaded application at the same time on a multicore chip, with each core handling a separate thread. Based on this, either the OS or a multithreaded application parcels out work to the multiple cores.
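The following C sketch (using POSIX threads) illustrates how a multithreaded application can parcel out work to multiple cores: the job is divided into roughly equal pieces, one thread is created per piece, and the OS is free to schedule each thread on a separate core. The thread count, the summation task, and the names used are illustrative assumptions, not a prescribed pattern.

#include <stdio.h>
#include <pthread.h>

/* Illustrative sketch only: a multithreaded application hands independent
 * pieces of work to several threads; on a multicore chip the OS may run
 * each thread on a separate core. Compile with: cc sum.c -pthread         */

#define NUM_THREADS 4
#define N 1000000L

struct slice { long start, end; long long sum; };

static void *partial_sum(void *arg) {
    struct slice *s = arg;
    s->sum = 0;
    for (long i = s->start; i < s->end; i++)
        s->sum += i;
    return NULL;
}

int main(void) {
    pthread_t tid[NUM_THREADS];
    struct slice work[NUM_THREADS];
    long chunk = N / NUM_THREADS;
    long long total = 0;

    /* Divide the work into roughly equal pieces, one piece per thread. */
    for (int t = 0; t < NUM_THREADS; t++) {
        work[t].start = t * chunk;
        work[t].end   = (t == NUM_THREADS - 1) ? N : (t + 1) * chunk;
        pthread_create(&tid[t], NULL, partial_sum, &work[t]);
    }

    /* Wait for every thread to finish, then combine the partial results. */
    for (int t = 0; t < NUM_THREADS; t++) {
        pthread_join(tid[t], NULL);
        total += work[t].sum;
    }

    printf("sum of 0..%ld = %lld\n", N - 1, total);
    return 0;
}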

Advantages of Multicore Processors

i. They enable building computers with better overall system performance by handling more work in parallel.

ii. For comparable performance, multicore chips consume less power and generate less heat than single-core chips. Hence, multicore technology is also referred to as energy-efficient or power-aware processor technology.

iii. Because the chips' cores are on the same die in the case of multicore processor architecture, they can share architectural components, such as memory elements and memory management. They thus have fewer components and lower costs than systems running multiple chips (each a single-core processor).

iv. Also, the signaling between  cores can be faster and use less electricity  than on multichip  systems.

Limitations of Multicore Processors

i. To take advantage  of multicore  chips,  applications   must  be redesigned   so that  the processor  can run them as multiple  threads.  Note that it is more challenging  to create  software  that is multithreaded.

ii. To redesign applications, programmers must find good places to break up the applications, divide the work into roughly equal pieces that can run at the same time, and determine the best times for the threads to communicate with one another. All this adds extra work for programmers.

iii. Software vendors often charge customers for each processor that will run the software (one software license per processor). A customer running an application on an 8-processor machine (multiprocessor computer) with single-core processors would thus pay for 8 licenses. A key issue with multicore chips is whether software vendors should consider a processor to be a single core or an entire chip. Currently, different vendors have different views regarding this issue. Some consider a processor as a unit that plugs into a single socket on the motherboard, regardless of whether it has one or more cores. Hence, a single software license is sufficient for a multicore chip. On the other hand, others apply per-processor licensing and charge more to use their software on multicore chips. They are of the opinion that customers get an added performance benefit by running the software on a chip with multiple cores, so they should pay more. Multicore-chip makers are concerned that this type of non-uniform policy will hurt sales of their products.

Chip makers like Intel, AMD, IBM, and Sun have already introduced multicore chips for servers, desktops, and laptops. Current multicore chips are dual-core (2 cores per chip), quad-core (4 cores per chip), 8 cores per chip, and 16 cores per chip. Industry experts predict that multicore processors will be useful immediately in server-class machines but won't be very useful on desktop systems until software vendors develop considerably more multithreaded software. Until this occurs, single-core chips will continue to be used. Also, since single-core chips are inexpensive to manufacture, they will continue to be popular for low-priced PCs for a while.
