Computer Graphics and multimedia, 4th semester MCA

Survey of Computer Graphics
Graphics
Graphics are visual presentations on some surface, such as a wall, canvas, computer screen,
paper, or stone to brand, inform, illustrate, or entertain. Examples are photographs, drawings,
Line Art, graphs, diagrams, typography, numbers, symbols, geometric designs, maps,
engineering drawings, or other images. Graphics often combine text, illustration, and color.
Computer Graphics
Computers have become a powerful tool for the rapid and economical production of pictures. The earliest applications of computer graphics were in engineering and science.
Computer Graphics is the use of computers to display and manipulate information in graphical or
pictorial form, either on a visual-display unit or via a printer or plotter.
The term computer graphics includes almost everything on computers that is not text or sound.
Today nearly all computers use some graphics and users expect to control their computer through
icons and pictures rather than just by typing. The term Computer Graphics has several meanings:
• the representation and manipulation of pictorial data by a computer
• the various technologies used to create and manipulate such pictorial data
• the images so produced
Today computers and computer-generated images touch many aspects of our daily life.
Computer imagery is found on television, in newspapers, in weather reports, education,
medicine, business, art and during surgical procedures. A well-constructed graph can present
complex statistics in a form that is easier to understand and interpret. Such graphs are used to
illustrate papers, reports, theses, and other presentation material. A range of tools and facilities
are available to enable users to visualize their data, and computer graphics are used in many
disciplines.
2D Computer Graphics
2D computer graphics are the computer-based generation of digital images, mostly from two-dimensional models such as 2D geometric models, text, and digital images, and by techniques specific to them. 2D computer graphics started in the 1950s.
2D computer graphics are mainly used in applications that were originally developed upon
traditional printing and drawing technologies, such as typography, cartography, technical
drawing, advertising, etc. In those applications, the two-dimensional image is not just a
representation of a real-world object, but an independent artifact with added semantic value.
3D Computer Graphics
3D computer graphics, in contrast to 2D computer graphics, are graphics that use a three-dimensional representation of geometric data that is stored in the computer for the purposes of performing calculations and rendering 2D images.
Applications of Computer Graphics
Computer graphics has become a powerful tool for the production of pictures. There is hardly any area in which graphical displays cannot be used to some advantage, so it is not surprising to find the use of computer graphics so widespread. The applications of computer graphics are:
• Computer-aided design
• Graphs & charts
• Presentation graphics
• Computer simulation
• Computer art
• Entertainment
• Education & training
• Graphic design
• Information visualization
• Scientific visualization
• Video games
• Virtual reality
• Web design
• Image processing
• Graphical user interfaces
Computer-Aided Design
Computer-aided design (CAD) is the use of computer technology for the design of objects, real
or virtual. The design of geometric models for object shapes, in particular, is often called
computer-aided geometric design (CAGD).
CAD may be used to design curves and figures in two-dimensional ("2D") space, or curves, surfaces, and solids in three-dimensional ("3D") space. CAD is also widely used to produce computer animation for special effects in movies, advertising and technical manuals.
CAD is used in the design of tools and machinery and in the drafting and design of all types of buildings, from small residential types (houses) to the largest commercial and industrial structures (hospitals and factories). CAD is mainly used for the design of buildings, automobiles, aircraft, watercraft, spacecraft, computers, textiles and many other products. It is also used throughout the engineering process, from conceptual design and layout of products, through strength and dynamic analysis of assemblies, to the definition of manufacturing methods of components. Software packages for CAD applications typically provide the designer with a multi-window environment. Animations are often used in CAD applications, for example for testing a vehicle or system. CAD enables designers to lay out and develop work on screen, print it out and save it for future editing, saving time on their drawings.
Graphs & Charts
Graphs and charts are commonly used to summarize financial, statistical, mathematical, scientific, engineering and economic data for research reports, consumer information bulletins and other types of publications.
Examples: line graphs, bar charts, pie charts and surface graphs, plotted in 2D, 3D or higher-dimensional space.
Presentation Graphics
Another major application area is presentation graphics, used to produce illustrations for reports or to generate 35mm slides or transparencies for use with projectors. Presentation graphics is commonly used to summarize financial, statistical, mathematical, scientific and economic data for research reports and managerial reports.
Examples: bar charts, line graphs, surface graphs, pie charts and other displays showing relationships between multiple parameters.
Computer Simulation
A computer simulation, a computer model or a computational model is a computer program, or
network of computers, that attempts to simulate an abstract model of a particular system.
Computer simulations have become a useful part of mathematical modeling of many natural
systems in physics (computational physics), chemistry and biology, human systems in
economics, psychology, and social science and in the process of engineering new technology, to
gain insight into the operation of those systems, or to observe their behavior.
Computer Art
Computer graphics methods are widely used in both fine art and commercial art applications. Artists use a variety of computer methods, including special-purpose hardware and artist's paintbrush programs. A paintbrush program allows artists to paint pictures on the screen of a video monitor. Actually, the picture is usually painted electronically on a graphics tablet using a stylus, which can simulate different brush strokes, brush widths and colors.
Entertainment
Computer graphics methods are now commonly used in making motion pictures, music videos and television shows. Many TV series regularly employ computer graphics methods, and music videos use graphics in several ways.
Education & Training
Computer simulation, as described above, is widely used as an educational aid: computer-generated models of physical, economic and social systems help students and trainees understand how those systems behave. For some training applications, special systems are designed; examples are the training of ship captains, aircraft pilots, heavy-equipment operators and air-traffic control personnel.
Graphic Design
The term graphic design can refer to a number of artistic and professional disciplines which
focus on visual communication and presentation. Various methods are used to create and
combine symbols, images and/or words to create a visual representation of ideas and messages.
A graphic designer may use typography, visual arts and page layout techniques to produce the
final result. Graphic design often refers to both the process (designing) by which the
communication is created and the products (designs) which are generated. Common uses of
graphic design include magazines, advertisements, product packaging and web design.
Information Graphics
Information graphics or infographics are visual representations of information, data or
knowledge. These graphics are used where complex information needs to be explained quickly
and clearly, such as in signs, maps, journalism, technical writing, and education. They are also
used extensively as tools by computer scientists, mathematicians, and statisticians to ease the
process of developing and communicating conceptual information. Today information graphics
surround us in the media, in published works both pedestrian and scientific, in road signs and
manuals. They illustrate information that would be unwieldy in text form, and act as a visual
shorthand for everyday concepts such as stop and go.
Information Visualization
Information visualization is the study of the visual representation of large-scale collections of
non-numerical information, such as files and lines of code in software systems, and the use of
graphical techniques to help people understand and analyze data. In contrast with scientific
visualization, information visualization focuses on abstract data sets, such as unstructured text or
points in high-dimensional space, that do not have an inherent 2D or 3D geometrical structure.
Scientific Visualization
Scientific visualization is a branch of science concerned with the visualization of three-dimensional phenomena, such as architectural, meteorological, medical and biological systems. The
emphasis is on realistic rendering of volumes, surfaces, illumination sources, and so forth,
perhaps with a dynamic (time) component. Scientific visualization focuses on the use of
computer graphics to create visual images which aid in understanding of complex, often massive
numerical representation of scientific concepts or results.
Video Game
A video game is an electronic game that involves interaction with a user interface to generate
visual feedback on a video device (a raster display device). The electronic systems used to play
video games are known as platforms; examples of these are personal computers and video game
consoles. These platforms range from large computers to small handheld devices. The input
device used to manipulate video games is called a game controller. Early personal computer
games often needed a keyboard for gameplay, or more commonly, required the user to buy a
separate joystick with at least one button. Many modern computer games allow, or even require,
the player to use a keyboard and mouse simultaneously.
Virtual Reality
Virtual reality (VR) is a technology which allows a user to interact with a computer-simulated
environment. Most current virtual reality environments are primarily visual experiences,
displayed either on a computer screen or through special or stereoscopic displays, but some
simulations include additional sensory information, such as sound through speakers or
headphones. Some advanced, haptic systems now include tactile information, generally known as
force feedback, in medical and gaming applications. Users can interact with a virtual
environment or a virtual artifact (VA) either through the use of standard input devices such as a
keyboard and mouse, or through multimodal devices such as a wired glove, the Polhemus boom
arm, and omnidirectional treadmill. The simulated environment can be similar to the real world,
for example, simulations for pilot or combat training, or it can differ significantly from reality, as
in VR games. In practice, it is currently very difficult to create a high-fidelity virtual reality
experience, due largely to technical limitations on processing power, image resolution and
communication bandwidth. However, those limitations are expected to eventually be overcome
as processor, imaging and data communication technologies become more powerful and cost-effective over time.
Virtual Reality is often used to describe a wide variety of applications, commonly associated
with its immersive, highly visual, 3D environments. The development of CAD software,
graphics hardware acceleration, head-mounted displays, data gloves and miniaturization
have helped popularize the notion.
Web Design
Web design is the skill of designing presentations of content (usually hypertext or hypermedia)
that is delivered to an end-user through the World Wide Web, by way of a Web browser. The
process of designing Web pages, Web sites, Web applications or multimedia for the Web may
utilize multiple disciplines, such as animation, authoring, communication design, corporate
identity, graphic design, human-computer interaction, information architecture, interaction
design, marketing, photography, search engine optimization and typography.
The technologies used include:
• Markup languages (such as XHTML and XML)
• Style sheet languages (such as CSS and XSL)
• Client-side scripting (such as JavaScript and VBScript)
• Server-side scripting (such as ASP.NET and VB.NET)
• Database technologies (such as SQL and ORACLE)
Image Processing
In computer graphics, a computer is used to create a picture; image processing, by contrast, applies techniques to modify or interpret existing pictures such as photographs and TV scans. Two principal applications of image processing are (i) improving picture quality and (ii) machine perception of visual information. To apply image processing methods, we first digitize a photograph or other picture into an image file. Digital methods can then be applied to rearrange picture parts. Medical applications also make extensive use of image processing techniques, for picture enhancement in tomography and in simulations of operations.
Graphical user Interfaces
It is common now for software packages to provide a graphical interface. A major component of a graphical interface is a window manager that allows a user to display multiple window areas.
Computer Graphics Display Device
CATHODE RAY TUBE
It is a vacuum tube of evacuated glass, which is large, deep and heavy.
A CRT consists of: an electron gun, the electron beam, focusing coils, deflection coils, the anode connection, a shadow mask, and a phosphor layer coating the inner side of the screen.
Operation:
A cathode ray tube (CRT) is a specialized vacuum tube in which images are produced when an electron beam strikes a phosphorescent surface. Most desktop computer displays make use of CRTs. The CRT in a computer display is similar to the "picture tube" in a television receiver.
A cathode ray tube consists of several basic components, as illustrated below. The electron gun
generates a narrow beam of electrons. The anodes accelerate the electrons. Deflecting coils produce an
extremely low frequency electromagnetic field that allows for constant adjustment of the direction of
the electron beam. There are two sets of deflecting coils: horizontal and vertical. The intensity of the
beam can be varied. The electron beam produces a tiny, bright visible spot when it strikes the phosphor-coated screen.
To produce an image on the screen, complex signals are applied to the deflecting coils, and also
to the apparatus that controls the intensity of the electron beam. This causes the spot to race
across the screen from left to right, and from top to bottom, in a sequence of horizontal lines
called the raster. As viewed from the front of the CRT, the spot moves in a pattern similar to the
way your eyes move when you read a single-column page of text. But the scanning takes place at
such a rapid rate that your eye sees a constant image over the entire screen.
The illustration shows only one electron gun. This is typical of a monochrome, or single-color, CRT. However, virtually all CRTs today render color images. These devices have three electron guns, one for the primary color red, one for the primary color green, and one for the primary color blue. The CRT thus produces three overlapping images: one in red (R), one in green (G), and one in blue (B). This is the so-called RGB color model.
In computer systems, there are several display modes, or sets of specifications, according to which the CRT operates. The most common specification for CRT displays is known as SVGA (Super Video Graphics Array). Notebook computers typically use liquid crystal displays. The technology for these displays is much different from that for CRTs.
All CRT's have three main elements: an electron gun, a deflection system, and a screen. The
electron gun provides an electron beam, which is a highly concentrated stream of electrons. The
deflection system positions the electron beam on the screen, and the screen displays a small spot
of light at the point where the electron beam strikes it.
Refresh CRT
A beam of electrons (cathode rays), emitted by an electron gun, passes through focusing and
deflection systems that direct the beam toward specified positions on the phosphor-coated
screen. The phosphor then emits a small spot of light at each position contacted by the electron
beam. Because the light emitted by the phosphor fades very rapidly, some method is needed for
maintaining the screen picture. One way to keep the phosphor glowing is to redraw the picture
repeatedly by quickly directing the electron beam back over the same points. This type of display
is called a refresh CRT.
Basic Operation of a CRT
The basic operation of CRT is shown in figure below:
Electron Gun
The primary components of an electron gun in a CRT are the heated metal cathode and a control
grid. The cathode is heated by an electric current passed through a coil of wire called the filament.
This causes electrons to be boiled off the hot cathode surface. In the vacuum inside the CRT
envelope, negatively charged electrons are then accelerated toward the phosphor coating by a
high positive voltage. The accelerating voltage can be generated with a positively charged metal
coating on the inside of the CRT envelope near the phosphor screen, or an accelerating anode
can be used. Sometimes the electron gun is built to contain the accelerating anode and focusing
system within the same unit.
Focusing System
The focusing system is used to create a clear picture by focusing the electrons into a narrow beam; otherwise, the electrons would repel each other and the beam would spread out as it reached the screen. Focusing is accomplished with either electric or magnetic fields.
Deflection System
Deflection of the electron beam can be controlled by either electric or magnetic fields. In the case of magnetic deflection, two pairs of coils are used, one for horizontal deflection and the other for vertical deflection. In the case of electric deflection, two pairs of parallel plates are used, one for horizontal deflection and the other for vertical deflection, as shown in the figure above.
CRT Screen
The inside of the large end of a CRT is coated with a fluorescent material that gives off light
when struck by electrons. When the electrons in the beam collide with the phosphor coating of the screen, they are stopped and their kinetic energy is absorbed by the phosphor. Part of the beam energy is converted into heat, and the remainder causes electrons in the phosphor atoms to move up to higher energy levels. After a short time the excited electrons return to their ground state. During this period we see a glowing spot that quickly fades once all the excited electrons have returned to their ground state.
Persistence
Persistence is the time a phosphor continues to emit light after the CRT beam is removed; more precisely, it is defined as the time it takes the emitted light to decay to one-tenth of its original intensity. Lower-persistence phosphors require higher refresh rates to maintain a picture on the screen without flicker. A phosphor with low persistence is useful for animation; a high-persistence phosphor is useful for displaying highly complex, static pictures. Although some phosphors have a persistence greater than 1 second, graphics monitors are usually constructed with a persistence in the range from 10 to 60 microseconds.
Resolution
Resolution is the number of points per centimeter that can be plotted horizontally and vertically, or the total number of points in each direction.
The resolution of a CRT depends on:
• the type of phosphor
• the intensity to be displayed
• the focusing and deflection systems
Aspect Ratio
It is the ratio of vertical points to horizontal points necessary to produce equal-length lines in both directions.
Example: an aspect ratio of 3/4 means that a vertical line plotted with three points has the same length as a horizontal line plotted with four points.
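As a small, hypothetical illustration in C (the specific numbers are assumed for the example, not taken from the text), the aspect ratio can be used to find how many points a vertical line needs in order to match the length of a horizontal line:

#include <stdio.h>

int main(void) {
    /* Illustrative values only: aspect ratio of 3/4 as in the example above. */
    double aspect_ratio = 3.0 / 4.0;          /* vertical points : horizontal points */
    int horizontal_points = 100;              /* assumed length of a horizontal line */
    int vertical_points = (int)(aspect_ratio * horizontal_points + 0.5);
    printf("%d vertical points\n", vertical_points);   /* prints 75 */
    return 0;
}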
Raster Scan Systems
This is the most common type of graphics monitor, based on television technology. In a raster scan system, the electron beam is swept across the screen, one row at a time from top to bottom. As the electron beam moves across each row, the beam intensity is turned on and off to create a pattern of illuminated spots. The picture definition is stored in a memory area called the frame buffer, which holds the set of intensity values. These values are then retrieved from the frame buffer and painted on the screen one row at a time, as shown in the figure below.
At the end of each line the beam is turned off and redirected to the left-hand side of the CRT; this is called horizontal retrace. At the end of each frame, the electron beam returns to the top left corner of the screen to begin the next frame; this is called vertical retrace, as shown in the figure below.
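A minimal frame-buffer sketch in C may make this concrete (the buffer dimensions, the set_pixel helper and the character output are assumptions for illustration, not part of any particular system): intensity values are written into a two-dimensional array and then read out one row at a time, the way a raster-scan system refreshes the screen.

#include <stdio.h>
#include <string.h>

#define WIDTH  16
#define HEIGHT 8

static unsigned char frame_buffer[HEIGHT][WIDTH];   /* one intensity value per pixel */

static void set_pixel(int x, int y, unsigned char intensity) {
    if (x >= 0 && x < WIDTH && y >= 0 && y < HEIGHT)
        frame_buffer[y][x] = intensity;
}

int main(void) {
    memset(frame_buffer, 0, sizeof frame_buffer);
    for (int x = 0; x < WIDTH; x++)          /* store a horizontal line of lit pixels */
        set_pixel(x, 3, 1);

    /* "Refresh": read the buffer row by row, top to bottom, as the beam would. */
    for (int y = 0; y < HEIGHT; y++) {
        for (int x = 0; x < WIDTH; x++)
            putchar(frame_buffer[y][x] ? '#' : '.');
        putchar('\n');
    }
    return 0;
}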
Advantages
• produces realistic images
• can also produce different colors and shadowed scenes
Disadvantages
• low resolution
• expensive
• the electron beam must be directed over the whole screen
Random Scan Systems
In a random scan system, the electron beam is directed only to the parts of the screen where a picture is to be drawn. The picture is drawn one line at a time, so these systems are also called vector displays or stroke-writing displays. After drawing the picture, the system cycles back to the first line and redraws all the lines of the picture 30 to 60 times each second.
Advantages
• produces smooth line drawings
• high resolution
Disadvantages
• designed only for line-drawing applications
• cannot display realistic images
Grey Shades
The intensity of a phosphor dot is proportional to the number of electrons colliding with it. If this number is controlled electronically, the dots can be excited to different energy states, and they then emit different radiant energy in the process of returning to the normal quantum level. This phenomenon generates different intensity levels, or grey shades, as perceived by the viewer. If the number of electrons impinging on a phosphor dot is fixed to a particular value, the viewer perceives only a single intensity level, which is considered the ON state of the pixel, while all others are regarded as the OFF state.
If the phosphor dots can be excited at 64 or 256 different intensity levels, the monitor is a grey-shade video monitor.
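As a rough sketch of the idea (the function name and the normalized 0.0-1.0 intensity scale are assumptions for illustration), the following C code maps an intensity value onto a fixed number of grey levels, such as 64 or 256:

#include <stdio.h>

/* Quantize a normalized intensity (0.0 - 1.0) onto "levels" grey shades. */
static int grey_level(double intensity, int levels) {
    if (intensity < 0.0) intensity = 0.0;
    if (intensity > 1.0) intensity = 1.0;
    return (int)(intensity * (levels - 1) + 0.5);   /* round to the nearest level */
}

int main(void) {
    printf("%d\n", grey_level(0.5, 256));  /* 128 on a 0..255 scale */
    printf("%d\n", grey_level(0.5, 64));   /* 32 on a 0..63 scale   */
    return 0;
}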
Color CRT Monitors
A color CRT monitor displays color picture by using a combination of phosphors that emit
different colored light. By combining the emitted light a range of colors can be generated. Two
basic methods for producing color displays are:
• Beam Penetration Method
• Shadow-Mask Method
Beam Penetration Method
Random scan monitors use the beam penetration method for displaying color pictures. In this method, the inside of the CRT screen is coated with two layers of phosphor, red and green. A beam of slow electrons excites only the outer red layer, while a beam of fast electrons penetrates the red layer and excites the inner green layer. At intermediate beam speeds, combinations of red and green light are emitted to show two additional colors, orange and yellow.
Advantages
• less expensive
Disadvantages
• the quality of the images is not as good as with other methods
• only four colors are possible
Shadow Mask Method
Raster scan systems use the shadow mask method to produce a much wider range of colors than the beam penetration method. In this method, the CRT has three phosphor color dots at each pixel position: one phosphor dot emits red light, the second emits green light and the third emits blue light. This type of CRT has three electron guns and a shadow mask grid, as shown in the figure below.
The three electron beams are deflected and focused as a group onto the shadow mask, which contains a series of holes. When the three beams pass through a hole in the shadow mask, they activate a dot triangle, as shown in the figure below.
The color we see depends on the amount of excitation of the red, green and blue phosphors. A white area is the result of activating all three dots with equal intensity, yellow is produced with the green and red dots, and so on.
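The following C sketch illustrates this color-mixing rule for a few simple on/off beam combinations (the struct and function names are invented for the example; a real monitor varies each beam intensity continuously rather than switching it simply on or off):

#include <stdio.h>

struct rgb { int r, g, b; };   /* 1 = dot excited, 0 = dot off */

static const char *describe(struct rgb c) {
    if (c.r && c.g && c.b) return "white";
    if (c.r && c.g)        return "yellow";
    if (c.r && c.b)        return "magenta";
    if (c.g && c.b)        return "cyan";
    if (c.r)               return "red";
    if (c.g)               return "green";
    if (c.b)               return "blue";
    return "black";
}

int main(void) {
    struct rgb white  = {1, 1, 1};
    struct rgb yellow = {1, 1, 0};
    printf("%s %s\n", describe(white), describe(yellow));  /* prints: white yellow */
    return 0;
}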
Advantages
• produces realistic images
• can also produce different colors and shadowed scenes
Disadvantages
• low resolution
• expensive
• the electron beam must be directed over the whole screen
Full Color System
Color CRTs in graphics systems are designed as RGB monitors. These monitors use the shadow mask method and take the intensity level for each electron gun. An RGB color system with 24 bits of storage per pixel is known as a full-color system or a true-color system.
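As a rough illustration of what 24 bits per pixel means (the packing layout and the function name are assumptions for the example), each of the red, green and blue guns gets 8 bits of intensity, giving 2^24, roughly 16.7 million, representable colors:

#include <stdio.h>

/* Pack 8-bit red, green and blue intensities into one 24-bit pixel value. */
static unsigned int pack_rgb(unsigned char r, unsigned char g, unsigned char b) {
    return ((unsigned int)r << 16) | ((unsigned int)g << 8) | b;
}

int main(void) {
    unsigned int orange = pack_rgb(255, 128, 0);
    printf("0x%06X\n", orange);   /* prints 0xFF8000 */
    return 0;
}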
Plasma Panels
The plasma panel is composed of two sheets of glass with a series of ribs ( like corrugated
cardboard ) filled with color phosphors in between. The top glass with embedded electrodes seals
and forms a pixel where the junctions of the channels and the plate come together. Inside the
sealed pixel, is a mixture of rare gases- typically argon and neon, although xenon has also been
used.
Actually a small electric capacitor has been created, with one electrode on the rear and a pair on
the front. These 3 electrodes control the capacitor charge, sustain and discharge functions
intrinsic to the plasma imaging process.
The plasma imaging cycle can be broken into the following steps. Initially, the pixel is at its resting (i.e. off) state. A voltage is then applied to the addressing electrodes of the pixel. When the applied voltage reaches a certain level, say 200+ volts, the resistance in the pixel is overcome and an electrical discharge is made across the electrodes. Once this discharge occurs, the mixture of rare gases is ionized into a plasma state, which means the gas mixture can now conduct electricity, and an intense burst of ultraviolet (UV) light is emitted. This burst of UV energy stimulates the color phosphors, which in turn makes them glow brightly.
Once the pixel is switched on, a much lower voltage sustains the UV emissions and keeps the phosphors glowing. This sustain voltage is typically in the 50-volt range. Eventually, the pixel will need to be turned off to rest the phosphors. This is done by removing the sustain voltage first, then reversing the charge in the pixel through the addressing electrodes. At this point, the pixel is back to its resting state.
Flat-Panel display
An electronic display in which a large orthogonal array of display elements, such as liquid-crystal or electroluminescent elements, form a flat screen. The term "flat-panel display" is
actually a misnomer, since thinness is the distinguishing characteristic. Most television sets and
computer monitors currently employ cathode-ray tubes. Cathode-ray tubes cannot be thin
because the light is generated by the process of cathodoluminescence whereby a high-energy
electron beam is scanned across a screen covered with an inorganic phosphor. The cathode-ray
tube must have moderate depth to allow the electron beam to be magnetically or electrostatically
scanned across the entire screen.
For a flat-panel display technology to be successful, it must at least match the basic performance
of a cathode-ray tube by having (1) full color, (2) full gray scale, (3) high efficiency and
brightness, (4) the ability to display full-motion video, (5) wide viewing angle, and (6) wide
range of operating conditions. Flat-panel displays should also provide the following benefits: (1)
thinness and light weight, (2) good linearity, (3) insensitivity to magnetic fields, and (4) no x-ray
generation. These four attributes are not possible in a cathode-ray tube.
Flat-panel displays can be divided into three types: transmissive, emissive, and reflective. A
transmissive display has a backlight, with the image being formed by a spatial light modulator. A
transmissive display is typically low in power efficiency; the user sees only a small fraction of
the light from the backlight. An emissive display generates light only at pixels that are turned on.
Emissive displays should be more efficient than transmissive displays, but due to low efficiency
in the light generation process most emissive and transmissive flat panel displays have
comparable efficiency. Reflective displays, which reflect ambient light, are most efficient. They
are particularly good where ambient light is very bright, such as direct sunlight. They do not
work well in low-light environments.
Most flat-panel displays are addressed as an X-Y matrix, the intersection of the row and column
defining an individual pixel (see illustration). Matrix addressing provides the potential for an all-digital display. Currently available flat-panel display devices range from 1.25-cm (0.5-in.)
diagonal displays used in head-mounted systems to 125-cm (50-in.) diagonal plasma displays.
LED
An LED display is a video display which uses light-emitting diodes. An LED panel is a small
display, or a component of a larger display. They are typically used outdoors in store signs and
billboards. LED panels are sometimes used as a form of lighting, for the purpose of general
illumination, task lighting, or even stage lighting rather than display.
A light-emitting diode (LED) is an electronic light source. The LED was first invented in Russia
in the 1920s, and introduced in America as a practical electronic component in 1962. Oleg
Vladimirovich Losev was a radio technician who noticed that diodes used in radio receivers
emitted light when current was passed through them. In 1927, he published details in a Russian
journal of the first ever LED.
All early devices emitted low-intensity red light, but modern LEDs are available across the
visible, ultraviolet and infrared wavelengths, with very high brightness.
LEDs are based on the semiconductor diode. When the diode is forward biased (switched on),
electrons are able to recombine with holes and energy is released in the form of light. This effect
is called electroluminescence and the color of the light is determined by the energy gap of the
semiconductor. The LED is usually small in area (less than 1 mm2) with integrated optical
components to shape its radiation pattern and assist in reflection.
LEDs present many advantages over traditional light sources including lower energy
consumption, longer lifetime, improved robustness, smaller size and faster switching. However,
they are relatively expensive and require more precise current and heat management than
traditional light sources.
Applications of LEDs are diverse. They are used as low-energy indicators but also for
replacements for traditional light sources in general lighting and automotive lighting. The
compact size of LEDs has allowed new text and video displays and sensors to be developed,
while their high switching rates are useful in communications technology.
The first recorded flat panel LED television screen developed was by J. P. Mitchell in 1977 [1].
The modular, scalable display array was initially enabled by hundreds of MV50 LEDs and a
newly available TTL (transistor transistor logic) memory addressing circuit from National
Semiconductor[2]. The 1/4-inch-thin flat panel prototype and the scientific paper were each displayed[3] at the 29th Engineering Exposition in Anaheim in May 1978, organized by the
Science Service in Washington D.C. The LED TV display received awards and recognition from
NASA[4], General Motors Corporation[5], and faculty from area Universities[6]. The event was
open to technology and business representatives from the U.S. and overseas. The monochromatic
prototype remains operational. A LCD (liquid crystal display) matrix design was also cited as a
future flat panel TV possibility in the accompanying scientific paper as a future alternate
television display method using a similar array scanning design.
LCD Monitors
(Liquid Crystal Display) A display technology that uses rod-shaped molecules (liquid crystals)
that flow like liquid and bend light. Unenergized, the crystals direct light through two polarizing
filters, allowing a natural background color to show. When energized, they redirect the light to
be absorbed in one of the polarizers, causing the dark appearance of crossed polarizers to show.
The more the molecules are twisted, the better the contrast and viewing angle.
Because it takes less power to move molecules than to energize a light-emitting device, LCDs
replaced the light-emitting diodes (LEDs) in digital watches in the 1970s. LCDs were then
widely used for a myriad of monochrome displays and still are. In the 1990s, color LCD screens
caused sales of laptop computers to explode, and in 2003, more LCD monitors were sold for
desktop computers than CRTs.
The LCD was developed in 1963 at RCA's Sarnoff Research Center in Princeton, NJ.
A subpixel of a color LCD
TYPES OF LCDs
Passive Display
Called "passive matrix" when used for computer screens and "passive display" when used for
small readouts, all the active electronics (transistors) are outside of the display screen. Passive
displays have improved immensely, but do not provide a wide viewing angle, and submarining is
generally noticeable. Following are the types of passive displays.
TN - Twisted Nematic - 90° twist. Low-cost displays for consumer products and instruments; black on a gray/silver background.
STN - Supertwisted Nematic - 180-270° twist. Used extensively on earlier laptops for mono and color displays. DSTN and FSTN provide improvements over straight STN (180° - green/blue on yellow background; 270° - blue on white/blue background).
Dual Scan STN - Improves STN display by dividing the screen into two halves and scanning
each half simultaneously, doubling the number of lines refreshed. Not as sharp as active matrix.
Active Display (TFT) - Widely used for all LCD applications today (laptop and desktop
computers, TVs, etc.). Known as "active matrix" displays, a transistor is used to control each
subpixel on the screen. For example, a 1024x768 color screen requires 2,359,296 transistors; one
for each red, green and blue subpixel (dot). Active matrix provides a sharp, clear image with
good contrast and eliminates submarining. Fabrication costs were originally higher than passive
matrix, which caused both types to be used in the early days of laptop flat panels. Active matrix
displays use a 90º (TN) twist. Also called "thin film transistor LCD" (TFT LCD). See bad pixel.
Reflective vs. Backlit - Reflective screens, used in many consumer appliances and handheld devices, require external light and only work well in a bright room or with a desk lamp. Backlit
and sidelit screens have their own light source and work well in dim lighting. Note that the
meaning of "reflective" in this case differs from light reflecting off the front of the screen into the
viewer's eyes.
Input Devices
A piece of computer hardware that is used to enter and manipulate information on a computer.
Basic input devices include the following:
• Keyboard
• Mouse
• Digitizer
• Trackball
• Touch screens
• Light pens
• Microphones
• Bar code readers
• Joysticks
• Scanners
• Voice systems
Keyboard
The keyboard is the most common input device for entering numeric and alphabetic data into a computer system. Data is entered by pressing a set of keys mounted on the keyboard, which is connected to the computer system.
The keys on computer keyboards are often classified as follows:
Alphanumeric Keys - letters and numbers.
Punctuation Keys - comma, period, semicolon, and so on.
Special Keys - function keys, control keys, arrow keys, Caps Lock key, and so on.
Applications: used to enter text strings and as shortcuts to many functions.
In graphics: used to provide screen coordinates, menu selection and gaming control.
Mouse
A mouse is a small device that a computer user pushes across a desk surface in order to
point to a place on a display screen and to select one or more actions to take from that
position.
A mouse consists of a metal or plastic housing or casing, a ball that sticks out of the
bottom of the casing and is rolled on a flat surface, one or more buttons on the top of the
casing, and a cable that connects the mouse to the computer.
A mouse is a hand-held box used to position the screen cursor. Wheels or rollers on the bottom are used to record the movement of the mouse, and hence the position of the screen cursor. Generally there are 2 or 3 buttons, used for operations like recording cursor positions or invoking a function. In order to increase the number of input parameters, additional devices can be included; the Z-mouse is an example of this.
Z-MOUSE
It has 3 buttons, a thumbwheel on the side, a trackball on the top and a standard mouse ball underneath. This provides six degrees of freedom for selecting positions, rotations, etc., and allows 3D viewing.
Applications: animation, AutoCAD.
Digitizer
A graphics tablet (or digitizing tablet, graphics pad, drawing tablet) is a computer input device
that allows one to hand-draw images and graphics, similar to the way one draws images with a
pencil and paper. These tablets may also be used to capture data or handwritten signatures.
A graphics tablet (also called pen pad or digitizer) consists of a flat surface upon which the user
may "draw" an image using an attached stylus, a pen-like drawing apparatus. The image
generally does not appear on the tablet itself but, rather, is displayed on the computer monitor.
Some tablets however, come as a functioning secondary computer screen that you can interact
with directly using the stylus.
Some tablets are intended as a general replacement for a mouse as the primary pointing and
navigation device for desktop computers.
Light Pen
A Light Pen is a pointing device shaped like a pen and is connected to a VDU. The tip of the
light pen contains a light-sensitive element which, when placed against the screen, detects the
light from the screen enabling the computer to identify the location of the pen on the screen.
Light pens have the advantage of 'drawing' directly onto the screen, but this can become
uncomfortable, and they are not as accurate as digitising tablets.
Touch Screen
A touchscreen is a display which can detect the presence and location of a touch within the
display area. The term generally refers to touch or contact to the display of the device by a finger
or hand. Touchscreens can also sense other passive objects, such as a stylus. However, if the
object sensed is active, as with a light pen, the term touchscreen is generally not applicable. The
ability to interact directly with a display typically indicates the presence of a touchscreen.
The touchscreen has two main attributes. First, it enables one to interact with what is displayed
directly on the screen, where it is displayed, rather than indirectly with a mouse or touchpad.
Secondly, it lets one do so without requiring any intermediate device, again, such as a stylus that
needs to be held in the hand. Such displays can be attached to computers or, as terminals, to
networks. They also play a prominent role in the design of digital appliances such as the personal
digital assistant (PDA), satellite navigation devices, mobile phones, and video games.
Image Scanners
A scanner is a device that optically scans images, printed text, handwriting, or an object, and
converts it to a digital image. Common examples found in offices are variations of the desktop
(or flatbed) scanner where the document is placed on a glass window for scanning. Hand-held
scanners, where the device is moved by hand, have evolved from text scanning "wands" to 3D
scanners used for industrial design, reverse engineering, test and measurement, orthotics, gaming
and other applications. Mechanically driven scanners that move the document are typically used
for large-format documents, where a flatbed design would be impractical.
Modern scanners typically use a charge-coupled device (CCD) or a Contact Image Sensor (CIS)
as the image sensor, whereas older drum scanners use a photomultiplier tube as the image sensor.
A rotary scanner, used for high-speed document scanning, is another type of drum scanner, using
a CCD array instead of a photomultiplier. Other types of scanners are planetary scanners, which
take photographs of books and documents, and 3D scanners, for producing three-dimensional
models of objects.
Another category of scanner is digital camera scanners, which are based on the concept of
reprographic cameras. Due to increasing resolution and new features such as anti-shake, digital
cameras have become an attractive alternative to regular scanners. While still having
disadvantages compared to traditional scanners (such as distortion, reflections, shadows, low
contrast), digital cameras offer advantages such as speed, portability, gentle digitizing of thick
documents without damaging the book spine. New scanning technologies are combining 3D
scanners with digital cameras to create full-color, photo-realistic 3D models of objects.
Voice Systems
A voice input device is a device in which speech is used to input data or system commands directly into a system. Such equipment involves the use of speech recognition processes, and can replace or supplement other input devices. Some voice input devices can recognize spoken words from a predefined vocabulary; some have to be trained for a particular speaker.
Speech recognition (also known as automatic speech recognition or computer speech
recognition) converts spoken words to machine-readable input (for example, to key presses,
using the binary code for a string of character codes). The term "voice recognition" is sometimes
incorrectly used to refer to speech recognition, when actually referring to speaker recognition,
which attempts to identify the person speaking, as opposed to what is being said. Confusingly,
journalists and manufacturers of devices that use speech recognition for control commonly use
the term Voice Recognition when they mean Speech Recognition.
Joystick
A joystick is an input device consisting of a stick that pivots on a base and reports its angle or
direction to the device it is controlling. Joysticks are often used to control video games, and
usually have one or more push-buttons whose state can also be read by the computer. A popular
variation of the joystick used on modern video game consoles is the analog stick.
The joystick has been the principal flight control in the cockpit of many aircraft, particularly
military fast jets, where centre stick or side-stick location may be employed.
Joysticks are also used for controlling machines such as cranes, trucks, underwater unmanned
vehicles and zero turning radius lawn mowers. Miniature finger-operated joysticks have been
adopted as input devices for smaller electronic equipment such as mobile phones.
Trackball
A trackball is a pointing device consisting of a ball held by a socket containing sensors to detect
a rotation of the ball about two axes—like an upside-down mouse with an exposed protruding
ball. The user rolls the ball with the thumb, fingers, or the palm of the hand to move a cursor.
Large tracker balls are common on CAD workstations for easy precision. Before the advent of
the touchpad, small trackballs were common on portable computers, where there may be no desk
space on which to run a mouse. Some small thumbballs clip onto the side of the keyboard and
have integral buttons with the same function as mouse buttons. The trackball was invented by
Tom Cranston and Fred Longstaff as part of the Royal Canadian Navy's DATAR system in
1952, eleven years before the mouse was invented. This first trackball used a Canadian five-pin
bowling ball.
When mice still used a mechanical design (with slotted 'chopper' wheels interrupting a beam of
light to measure rotation), trackballs had the advantage of being in contact with the user's hand,
which is generally cleaner than the desk or mousepad and doesn't drag lint into the chopper
wheels. The late 1990s replacement of mouseballs by direct optical tracking put trackballs at a
disadvantage and forced them to retreat into niches where their distinctive merits remained more
important. Most trackballs now have direct optical tracking which follows dots on the ball.
As with modern mice, most trackballs now have an auxiliary device primarily intended for
scrolling. Some have a scroll wheel like most mice, but the most common type is a “scroll ring”
which is spun around the ball. Kensington's SlimBlade Trackball similarly tracks the ball itself in
three dimensions for scrolling.
Hard Copy Devices
Hard copy devices, or output devices, accept data from a computer and convert it into a form suitable for use by the user.
Basic output devices include the following:
• Monitors
• Printers
• Plotters
Output Device - Printer
Printers are the most commonly used output devices for producing hard copy output.
The various types of printers in use today are:
• Dot-matrix printers
• Inkjet printers
• Drum printers
• Laser printers
Dot-Matrix Printers
A dot matrix printer or impact matrix printer is a type of computer printer with a print head that
runs back and forth, or in an up and down motion, on the page and prints by impact, striking an
ink-soaked cloth ribbon against the paper, much like a typewriter. Unlike a typewriter or daisy
wheel printer, letters are drawn out of a dot matrix, and thus, varied fonts and arbitrary graphics
can be produced. Because the printing involves mechanical pressure, these printers can create
carbon copies and carbonless copies.
Each dot is produced by a tiny metal rod, also called a "wire" or "pin", which is driven forward
by the power of a tiny electromagnet or solenoid, either directly or through small levers. Facing
the ribbon and the paper is a small guide plate (often made of an artificial jewel such as sapphire
or ruby) pierced with holes to serve as guides for the pins. The moving portion of the printer is
called the print head, and when running the printer as a generic text device generally prints one
line of text at a time. Most dot matrix printers have a single vertical line of dot-making
equipment on their print heads; others have a few interleaved rows in order to improve dot
density.
Inkjet Printers
Inkjet printers form characters and images by spraying small drops of ink on to the paper. They
are the most common type of computer printer for the general user due to their low cost, high
quality of output, capability of printing in different colors, and ease of use.
If you ever look at a piece of paper that has come out of an inkjet printer, you will notice that:
• The dots are extremely small, smaller than the diameter of a human hair (70 microns).
• The dots are positioned very precisely, with resolutions of up to 1440x720 dots per inch.
• The dots can have different colors combined together to create photo-quality images.
Laser Printers
A type of printer that utilizes a laser beam to produce an image on a drum. The light of the laser
alters the electrical charge on the drum wherever it hits. The drum is then rolled through a
reservoir of toner, which is picked up by the charged portions of the drum. Finally, the toner is
transferred to the paper through a combination of heat and pressure. This is also the way copy
machines work.
Because an entire page is transmitted to a drum before the toner is applied, laser printers are
sometimes called page printers. There are two other types of page printers that fall under the
category of laser printers even though they do not use lasers at all. One uses an array of LEDs to
expose the drum, and the other uses LCDs. Once the drum is charged, however, they both
operate like a real laser printer.
One of the chief characteristics of laser printers is their resolution -- how many dots per inch
(dpi) they lay down. The available resolutions range from 300 dpi at the low end to 1,200 dpi at
the high end. By comparison, offset printing usually prints at 1,200 or 2,400 dpi. Some laser
printers achieve higher resolutions with special techniques known generally as resolution
enhancement.
Laser printers produce very high-quality print and are capable of printing an almost unlimited
variety of fonts. Most laser printers come with a basic set of fonts, called internal or resident
fonts, but you can add additional fonts in one of two ways:
Font Cartridges : Laser printers have slots in which you can insert font cartridges, ROM boards
on which fonts have been recorded. The advantage of font cartridges is that they use none of the
printer's memory.
Soft Fonts : All laser printers come with a certain amount of RAM memory, and you can usually
increase the amount of memory by adding memory boards in the printer's expansion slots. You
can then copy fonts from a disk to the printer's RAM. This is called downloading fonts. A font
that has been downloaded is often referred to as a soft font, to distinguish it from the hard fonts
available on font cartridges. The more RAM a printer has, the more fonts that can be downloaded
at one time.
Laser printers are controlled through page description languages (PDLs). There are two de facto standards for PDLs: PCL and PostScript.
Plotters
A plotter is a printer that interprets commands from a computer to make line drawings on paper
with one or more automated pens. Unlike a regular printer, the plotter can draw continuous
point-to-point lines directly from vector graphics files or commands. There are a number of
different types of plotters: a drum plotter draws on paper wrapped around a drum which turns to
produce one direction of the plot, while the pens move to provide the other direction; a flatbed
plotter draws on paper placed on a flat surface; and an electrostatic plotter draws on negatively
charged paper with positively charged toner. Plotters were the first type of printer that could print
with color and render graphics and full-size engineering drawings. As a rule, plotters are much
more expensive than printers. They are most frequently used for CAE (computer-aided
engineering) applications, such as CAD (computer-aided design) and CAM (computer-aided
manufacturing).
A plotter consists of an arm that moves across the paper on which the diagram or graph is to be drawn. A pen moves along the arm, and the arm itself moves relative to the paper. A combination of the two thus provides movement along the horizontal and vertical axes.
In some plotters, the paper is held stationary while the arm and the pens move over it. This is called a flat-bed plotter. In the other type of plotter, the paper is wrapped around a drum and anchored at both ends. The drum rotates while the pen moves laterally along a fixed rail. This is called a drum plotter.
To draw clear and high-quality diagrams, a plotter needs high-quality pens with special inks of different colors. A plotter can be connected to a PC through the parallel port. A plotter is more software-dependent than any other peripheral, and needs many more instructions than a printer to produce output.
Plotters are used in applications like CAD, which require high-quality graphics on paper. Many of the plotters now available in the market are desktop models that can be used with PCs. Businesses typically use plotters to present analyses (bar charts, graphs, diagrams, etc.) as well as for engineering drawings.
Two commonly used types of plotters are:
• Drum plotters
• Flatbed plotters
Drum Plotters
In this plotter the pen moves horizontally along one axis while the drum rolls the paper along the other axis. It is a graphical output device generally used to plot drawings; the width of the plot is limited by the length of the drum. It was the first graphical output device produced to print large-scale engineering drawings. Colored prints can be made by using colored inks. This electronic equipment is used to plot large-sized drawings on a tracing sheet. Plans for buildings (architectural drawings), engineering drawings, dress models, etc. are drawn using suitable packages. These drawings are plotted on a tracing sheet, which can then be used to produce a large number of blueprints.
It is very good for line drawings, but it is slow. Here the pen is held in a gantry and moves along the x-axis, while the paper moves along the y-axis.
Flatbed Plotters
This is a plotter where the paper is fixed on a flat surface and pens are moved to draw the image.
This plotter can use several different colour pens to draw with. The size of the plot is limited
only by the size of the plotter's bed.
Graphics Software
Graphics software is any kind of software that can be used to create, edit and manage 2D computer graphics. These computer graphics may be clip art, web graphics, logos, headings, backgrounds, digital photos or other kinds of digital images. 3D modeling and CAD software is also graphics software.
Types: (1) programming packages, (2) application packages.
Graphics output Primitives
Point & Lines
Point plotting is accomplished by converting a single coordinate position furnished by an application program into appropriate operations for the output device in use.
Line drawing is accomplished by calculating intermediate positions along the line path between two specified endpoint positions. An output device is then directed to fill in these positions between the endpoints.
Line drawing algorithms can be divided into three types: (1) the symmetric DDA, (2) the simple DDA, and (3) Bresenham's line algorithm.
Symmetric DDA
• The symmetric DDA generates a line from its differential equation.
• The DDA works on the principle that, given a starting point (x, y), x and y are incremented in small steps proportional to delta x and delta y until the end point is reached, at which point the DDA terminates. So, in general, given a start point and an end point, we can generate a line by proceeding as follows:
x = x + epsilon * delta x
y = y + epsilon * delta y
• This would have the effect of truncating rather than rounding, so we initialize the DDA with the value 0.5 in each of the fractional parts to achieve true rounding.
In the case of the symmetric DDA, the small quantity is
epsilon = 2^(-n), where n is chosen so that 2^(n-1) <= max(|delta x|, |delta y|) < 2^n.
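A minimal C sketch of the symmetric DDA as described above (the function name and the printed output in place of a real plotting routine are assumptions for illustration):

#include <stdio.h>
#include <stdlib.h>

/* Symmetric DDA: step size eps = 2^-n, with 2^(n-1) <= max(|dx|, |dy|) < 2^n.
   "Plotting" is simulated by printing the pixel coordinates. */
static void symmetric_dda(int x1, int y1, int x2, int y2) {
    int dx = x2 - x1, dy = y2 - y1;
    int maxd = abs(dx) > abs(dy) ? abs(dx) : abs(dy);
    int n = 0;
    while ((1 << n) <= maxd)
        n++;                                    /* now 2^(n-1) <= maxd < 2^n    */
    double eps = 1.0 / (1 << n);                /* eps = 2^-n                   */
    double x = x1 + 0.5, y = y1 + 0.5;          /* +0.5 so truncation rounds    */
    for (int i = 0; i <= (1 << n); i++) {       /* 2^n steps reach the endpoint */
        printf("(%d, %d)\n", (int)x, (int)y);   /* plot(trunc(x), trunc(y))     */
        x += eps * dx;
        y += eps * dy;
    }
}

int main(void) {
    symmetric_dda(0, 0, 6, 3);
    return 0;
}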
Simple DDA
Simple DDA - choose a line length estimate equal to max(|delta x|, |delta y|), in such a way that either epsilon * delta x or epsilon * delta y is of unit magnitude. This implies that one of the counters is simply incremented by 1 at each step: if epsilon = 1 / max(|delta x|, |delta y|), then either epsilon * delta x = 1 or epsilon * delta y = 1.
Algorithm
procedure simpledda(x1, y1, x2, y2 : integer);
var
  length,                 {maximum line length}
  i : integer;            {loop counter}
  x, y,                   {x and y coordinates}
  xincr, yincr : real;    {Dx and Dy per step}
begin
  length := abs(x2 - x1);
  if abs(y2 - y1) > length then
    length := abs(y2 - y1);
  xincr := (x2 - x1) / length;
  yincr := (y2 - y1) / length;
  x := x1;
  y := y1;
  for i := 0 to length do
  begin
    plot_pixel(round(x), round(y));
    x := x + xincr;
    y := y + yincr;
  end;
end;
Bresenham Line Algorithm
The Bresenham line algorithm is an algorithm which determines which points in an n-dimensional raster should be plotted in order to form a close approximation to a straight line
between two given points. It is commonly used to draw lines on a computer screen, as it uses
only integer addition, subtraction and bit shifting, all of which are very cheap operations in
standard computer architectures. It is one of the earliest algorithms developed in the field of
computer graphics. A minor extension to the original algorithm also deals with drawing circles.
While algorithms such as Wu's algorithm are also frequently used in modern computer graphics
because they can support antialiasing, the speed and simplicity of Bresenham's line algorithm
mean that it is still important. The algorithm is used in hardware such as plotters and in the
graphics chips of modern graphics cards. It can also be found in many software graphics
libraries. Because the algorithm is very simple, it is often implemented in either the firmware or
the hardware of modern graphics cards.
Algorithm
The common conventions that pixel coordinates increase in the down and right directions and
that pixel centers have integer coordinates will be used. The endpoints of the line are the pixels at
(x0, y0) and (x1, y1), where the first coordinate of the pair is the column and the second is the
row.
The algorithm will be initially presented only for the octant in which the segment goes down and
to the right (x0≤x1 and y0≤y1), and its horizontal projection x1 − x0 is longer than the vertical
projection y1 − y0 (in other words, the line has a slope less than 1 and greater than 0.) In this
octant, for each column x between x0 and x1, there is exactly one row y (computed by the
algorithm) containing a pixel of the line, while each row between y0 and y1 may contain
multiple rasterized pixels.
Bresenham's algorithm chooses the integer y corresponding to the pixel center that is closest to
the ideal (fractional) y for the same x; on successive columns y can remain the same or increase
by 1. The general equation of the line through the endpoints is given by:
y − y0 = ((y1 − y0) / (x1 − x0)) · (x − x0)
Since we know the column, x, the pixel's row, y, is given by rounding this quantity to the nearest integer:
y = round( y0 + ((y1 − y0) / (x1 − x0)) · (x − x0) )
The slope (y1 − y0) / (x1 − x0) depends on the endpoint coordinates only and can be precomputed,
and the ideal y for successive integer values of x can be computed starting from y0 and
repeatedly adding the slope.
In practice, the algorithm can track, instead of possibly large y values, a small error value
between −0.5 and 0.5: the vertical distance between the rounded and the exact y values for the
current x. Each time x is increased, the error is increased by the slope; if it exceeds 0.5, the
rasterization y is increased by 1 (the line continues on the next lower row of the raster) and the
error is decremented by 1.0.
In the following pseudocode sample plot(x,y) plots a point and abs returns absolute value:
function line(x0, x1, y0, y1)
    int deltax := x1 - x0
    int deltay := y1 - y0
    real error := 0
    real deltaerr := deltay / deltax    // Assume deltax != 0 (line is not vertical)
    int y := y0
    for x from x0 to x1
        plot(x,y)
        error := error + deltaerr
        if abs(error) ≥ 0.5 then
            y := y + 1
            error := error - 1.0
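As a concrete illustration of the integer-only form described above, here is a minimal C sketch for the first octant (x0 ≤ x1 and 0 ≤ slope ≤ 1), with the error term kept scaled by 2·deltax so that no floating-point arithmetic is needed; plot_pixel is an assumed helper supplied by the surrounding graphics library.

void plot_pixel(int x, int y);     /* assumed to be provided by the host graphics library */

/* Integer-only Bresenham line for the first octant:
   x0 <= x1 and 0 <= (y1 - y0) <= (x1 - x0).          */
void bresenham_octant1(int x0, int y0, int x1, int y1)
{
    int dx = x1 - x0;
    int dy = y1 - y0;
    int d  = 2 * dy - dx;          /* decision variable: the error scaled by 2*dx */
    int y  = y0;

    for (int x = x0; x <= x1; x++) {
        plot_pixel(x, y);
        if (d > 0) {               /* accumulated error exceeds half a pixel: step to the next row */
            y += 1;
            d -= 2 * dx;
        }
        d += 2 * dy;               /* always advance one column */
    }
}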
Circle Drawing Algorithm
The first thing we can notice to make our circle drawing algorithm more efficient is that circles
centred at (0, 0) have eight-way symmetry.
Procedure Circle_Points(x, y : Integer);
Begin
  Plot(x, y);
  Plot(y, x);
  Plot(y, -x);
  Plot(x, -y);
  Plot(-x, -y);
  Plot(-y, -x);
  Plot(-y, x);
  Plot(-x, y)
End;
The equation for a circle centred at the origin is:
x² + y² = r²
where r is the radius of the circle.
So, we can write a simple circle drawing algorithm by solving the equation for y at unit x intervals using:
y = ±√(r² − x²)
However, unsurprisingly, this is not a brilliant solution!
Firstly, the resulting circle has large gaps where the slope approaches the vertical.
Secondly, the calculations are not very efficient:
• the square (multiply) operations
• the square root operation – try really hard to avoid these!
We need a more efficient, more accurate solution.
Algorithm
Begin {Circle}
  x := r;
  y := 0;
  d := 1 - r;
  Repeat
    Circle_Points(x, y);
    y := y + 1;
    If d < 0 Then
      d := d + 2*y + 1
    Else Begin
      x := x - 1;
      d := d + 2*(y - x) + 1
    End
  Until x < y
End; {Circle}
Polar Coordinates
The polar coordinate system is a two-dimensional coordinate system in which each point on a
plane is determined by a distance from a fixed point and an angle from a fixed direction.
The fixed point (analogous to the origin of a Cartesian system) is called the pole, and the ray
from the pole with the fixed direction is the polar axis. The distance from the pole is called the
radial coordinate or radius, and the angle is the angular coordinate or polar angle.
Using polar coordinates
x = r · cos θ
y = r · sin θ
From 0 to π/4 (45°), take small steps in θ and calculate x and y for each θ; eight-way symmetry then gives the rest of the circle.
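A minimal C sketch of this polar approach (circle_points is assumed to be an eight-way symmetric plot like the one above; the angle step 1/r is a common choice that keeps successive samples about one pixel apart):

#include <math.h>

void circle_points(int x, int y);            /* assumed eight-way symmetric plot */

void circle_polar(double r)
{
    const double pi = 3.14159265358979;
    if (r <= 0.0) return;                    /* nothing to draw */
    double dtheta = 1.0 / r;                 /* keeps successive samples about one pixel apart */
    for (double theta = 0.0; theta <= pi / 4.0; theta += dtheta) {
        int x = (int)(r * cos(theta) + 0.5); /* round to the nearest pixel */
        int y = (int)(r * sin(theta) + 0.5);
        circle_points(x, y);
    }
}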
Bresenham's Circle Algorithm
The midpoint circle algorithm is an algorithm used to determine the points needed for drawing a
circle. The algorithm is a variant of Bresenham's line algorithm, and is thus sometimes known as
Bresenham's circle algorithm, although not actually invented by Bresenham.
In the mid-point circle algorithm we use eight-way symmetry so only ever calculate the points
for the top right eighth of a circle, and then use symmetry to get the rest of the points.
Assume that we have just plotted point (xk, yk). The next point is a
choice between (xk+1, yk) and (xk+1, yk-1). We would like to choose
the point that is nearest to the actual circle. So how do we make this choice?
Let's re-jig the equation of the circle slightly to give us:
fcirc(x, y) = x² + y² − r²
The equation evaluates as follows:
fcirc(x, y) < 0 if (x, y) is inside the circle boundary
fcirc(x, y) = 0 if (x, y) is on the circle boundary
fcirc(x, y) > 0 if (x, y) is outside the circle boundary
By evaluating this function at the midpoint between the candidate pixels we can make our decision.
Assuming we have just plotted the pixel at (xk,yk) so we need to choose between (xk+1,yk) and
(xk+1,yk-1).
Our decision variable can be defined as:
Pk = fcirc(xk + 1, yk − 1/2)
   = (xk + 1)² + (yk − 1/2)² − r²
If pk < 0 the midpoint is inside the circle and the pixel at yk is closer to the circle. Otherwise the midpoint is outside and yk − 1 is closer.
To ensure things are as efficient as possible we can do all of our calculations incrementally. First consider:
Pk+1 = fcirc(xk+1 + 1, yk+1 − 1/2) = ((xk + 1) + 1)² + (yk+1 − 1/2)² − r²
or:
Pk+1 = Pk + 2(xk + 1) + ((yk+1)² − (yk)²) − (yk+1 − yk) + 1
where yk+1 is either yk or yk − 1 depending on the sign of pk.
The first decision variable is given as:
P0 = fcirc(1, r − 1/2) = 5/4 − r ≈ 1 − r
Then if pk < 0 the next decision variable is given as:
Pk+1 = Pk + 2xk+1 + 1
If pk > 0 then the decision variable is:
Pk+1 = Pk + 2xk+1 + 1 − 2yk+1
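Putting the incremental updates together, here is a minimal C sketch of the midpoint circle algorithm starting from (0, r), with circle_points assumed to plot the eight symmetric points as above:

void circle_points(int x, int y);  /* assumed: plots (x, y) and its seven symmetric points */

void midpoint_circle(int r)
{
    int x = 0;
    int y = r;
    int p = 1 - r;                 /* integer approximation of the initial value 5/4 - r */

    circle_points(x, y);
    while (x < y) {
        x++;
        if (p < 0)
            p += 2 * x + 1;        /* midpoint inside the circle: keep the same row */
        else {
            y--;
            p += 2 * (x - y) + 1;  /* midpoint outside: also step down one row */
        }
        circle_points(x, y);
    }
}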
Ellipse
Ellipse is the finite or bounded case of a conic section, the geometric shape that results from
cutting a circular conical or cylindrical surface with an oblique plane. It is also the locus of all
points of the plane whose distances to two fixed points add to the same constant. Ellipses also
arise as images of a circle or a sphere under parallel projection, and some cases of perspective
projection. Indeed, circles are special cases of ellipses. An ellipse is also the closed and bounded
case of an implicit curve of degree 2, and of a rational curve of degree 2. It is also the simplest
Lissajous figure, formed when the horizontal and vertical motions are sinusoids with the same
frequency.
The circle algorithm can be generalized to work for an ellipse but only four way symmetry can
be used.
Ellipse Properties
Given two fixed positions F1 and F2 the sum of the two distances from these points to any point
P on the ellipse (d1+d2) is constant.
Ellipse Equation
With the major and minor axes aligned with the coordinate axes the ellipse is said to be in standard position:
((x − xc) / rx)² + ((y − yc) / ry)² = 1
where (xc, yc) is the centre and rx, ry are the semi-axes along x and y.
One Approach
Take small steps in x from xc to xc + rx and calculate y:
y = yc ± ry · √(1 − ((x − xc) / rx)²)
Symmetry can be used
Only one quadrant needs to be calculated since symmetry gives the three other quadrants.
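A minimal C sketch of this direct approach with four-way symmetry (plot_pixel is an assumed helper; like the naive circle plot, this leaves gaps where the curve becomes steep, which is why incremental midpoint-style methods are usually preferred):

#include <math.h>

void plot_pixel(int x, int y);               /* assumed host routine */

void ellipse_direct(int xc, int yc, double rx, double ry)
{
    if (rx <= 0.0 || ry < 0.0) return;       /* degenerate ellipse */
    for (double x = 0.0; x <= rx; x += 1.0) {
        double y = ry * sqrt(1.0 - (x / rx) * (x / rx));
        int xi = (int)(x + 0.5), yi = (int)(y + 0.5);
        plot_pixel(xc + xi, yc + yi);        /* four-way symmetry about the centre */
        plot_pixel(xc - xi, yc + yi);
        plot_pixel(xc + xi, yc - yi);
        plot_pixel(xc - xi, yc - yi);
    }
}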
Filled Area Primitives
A picture component whose area is filled with some solid color or pattern is called a fill area.
Fill regions are usually planar surfaces, mainly polygons.
All fill areas are to be displayed with a specified solid color.
Most library routines require that a fill area be specified as a polygon.
Graphics routines can process polygons more efficiently than other kinds of fill shapes because polygon boundaries are described with linear equations.
Attributes of Graphics Primitives
Point, line, curve
Point attributes: these are color and size. Color components are set with RGB values or an index into a color table. For a raster system the point size is an integer multiple of the pixel size, so that a large point is displayed as a square block of pixels.
Line attributes: a straight line can be displayed with three basic attributes: color, width and style.
Curve attributes: colors, widths, dot-dash patterns and available pen or brush options.
Fill area attributes
Most graphics packages limit fill areas to polygons, often convex polygons. We can fill any specified region, including circles, ellipses and other objects with curved boundaries. The basic fill area attributes are fill style and color-blended fill regions.
Fill methods for areas with irregular boundaries
(1) Boundary fill algorithm: If the boundary of some region is specified in a single color, we can fill the interior of this region pixel by pixel until the boundary color is encountered. This is called the boundary fill algorithm, which is employed in interactive painting packages, where interior points are easily selected.
Algorithm
void boundary_fill4(int x, int y, int fill_color, int border_color)
{
    int interior_color;
    get_pixel(x, y, &interior_color);          /* read the current pixel color */
    if ((interior_color != border_color) && (interior_color != fill_color))
    {
        set_pixel(x, y, fill_color);           /* paint this pixel */
        boundary_fill4(x + 1, y, fill_color, border_color);
        boundary_fill4(x - 1, y, fill_color, border_color);
        boundary_fill4(x, y + 1, fill_color, border_color);
        boundary_fill4(x, y - 1, fill_color, border_color);
    }
}
(2) Flood-fill algorithm: If we want to fill an area that is not defined within a single color boundary, we can paint such areas by replacing a specified interior color instead of searching for a particular boundary color. This fill procedure is called the flood fill algorithm.
void flood_fill4(int x, int y, int fill_color, int interior_color)
{
    int color;
    get_pixel(x, y, &color);                   /* read the current pixel color */
    if (color == interior_color)
    {
        set_pixel(x, y, fill_color);
        flood_fill4(x + 1, y, fill_color, interior_color);
        flood_fill4(x - 1, y, fill_color, interior_color);
        flood_fill4(x, y + 1, fill_color, interior_color);
        flood_fill4(x, y - 1, fill_color, interior_color);
    }
}
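A practical note on the recursive fills above: for large regions the recursion can grow very deep. Below is a minimal sketch, not part of the original notes, of the same 4-connected flood fill driven by an explicit stack of pixel coordinates instead of recursion; get_pixel and set_pixel are assumed to be the same routines used above, and clipping against the screen boundary is omitted just as in the recursive version.

#include <stdlib.h>

void set_pixel(int x, int y, int color);     /* assumed host routines */
void get_pixel(int x, int y, int *color);

void flood_fill4_iterative(int x, int y, int fill_color, int interior_color)
{
    int capacity = 1024, top = 0;
    int (*stack)[2] = malloc(capacity * sizeof *stack);
    if (stack == NULL) return;

    stack[top][0] = x; stack[top][1] = y; top++;
    while (top > 0) {
        top--;
        int cx = stack[top][0], cy = stack[top][1];
        int color;
        get_pixel(cx, cy, &color);
        if (color != interior_color)
            continue;                        /* already filled, or not part of the region */
        set_pixel(cx, cy, fill_color);

        /* push the four neighbours, growing the stack when necessary */
        if (top + 4 > capacity) {
            int (*bigger)[2] = realloc(stack, 2 * capacity * sizeof *stack);
            if (bigger == NULL) { free(stack); return; }
            stack = bigger;
            capacity *= 2;
        }
        int nbrs[4][2] = { {cx + 1, cy}, {cx - 1, cy}, {cx, cy + 1}, {cx, cy - 1} };
        for (int i = 0; i < 4; i++) {
            stack[top][0] = nbrs[i][0];
            stack[top][1] = nbrs[i][1];
            top++;
        }
    }
    free(stack);
}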
ANTIALIASING
It is a software technique for diminishing jaggies, the staircase-like appearance of lines that should be smooth. Jaggies occur because the output device, the monitor or printer, does not have a high enough resolution to represent a smooth line.
Antialiasing is sometimes called oversampling. To improve image quality we use antialiasing. Scan conversion is essentially a systematic approach to mapping objects that are defined in continuous space to their discrete approximation.
The various forms of distortion that result from this operation are collectively referred to as the aliasing effects of scan conversion.
The distortion of information due to low-frequency sampling is called aliasing. We can improve the appearance of displayed raster lines by applying antialiasing methods.
Staircase: A common example of aliasing effects is the staircase or jagged appearance we see when scan converting a primitive such as a line or a circle.
Unequal Brightness: Another artifact that is less noticeable is the unequal brightness of lines of different orientation. A slanted line appears dimmer than a vertical or horizontal line, although all are presented at the same intensity level.
On a horizontal line the pixels are placed one unit apart, whereas those on the diagonal line are approximately 1.414 units apart. This difference in density produces the perceived difference in brightness.
Many aliasing artifacts, when they appear in a static image at a moderate resolution, are often tolerable and in many cases negligible. However, they can have a significant impact on our viewing experience when left untreated in a series of images that animate moving objects.
Although increasing image resolution is a straightforward way to decrease the size of many aliasing artifacts, we pay a heavy price in terms of system resources and the results are not always satisfactory. Some anti-aliasing techniques are designed to treat a particular type of artifact:
(1) Prefiltering & postfiltering: A prefiltering technique works on the true signal in continuous space to derive proper values for individual pixels.
A postfiltering technique takes discrete samples of the continuous signal and uses the samples to compute pixel values.
(2) Area Sampling: Area sampling is a prefiltering technique in which we superimpose a pixel grid pattern onto the continuous object definition. For each pixel area that intersects the object, we calculate the percentage of overlap by the object. This percentage determines the proportion of the overall intensity value of the corresponding pixel that is due to the object's contribution.
Supersampling: Supersampling is often regarded as a postfiltering technique since discrete samples are first taken and then used to calculate pixel values.
On the other hand, it can be viewed as an approximation to the area sampling method, since we are simply using a finite number of values in each pixel area to approximate the accurate analytical result.
Lowpass filtering: This is a postfiltering technique in which we reassign each pixel a new value that is a weighted average of its original value and the original values of its neighbors.
Pixel phasing: Pixel phasing is a hardware-based anti-aliasing technique. The graphics system in this case is capable of shifting individual pixels from their normal positions in the pixel grid by a fraction of the unit distance between pixels. By moving pixels closer to the true line or other contour, this technique is very effective in smoothing out the stair steps without reducing the sharpness of the edges.
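To make the supersampling idea concrete, here is a small hedged sketch (the helper names are assumptions, not the notes' own API): each pixel is subdivided into an n × n grid of subpixel samples, and the fraction of samples covered by the primitive weights the pixel's intensity.

/* Assumed helpers: inside_primitive() answers whether a continuous point is
   covered by the object being rasterized; set_pixel_intensity() writes a
   grey level in [0, 1]. Both are hypothetical names for this sketch.       */
int  inside_primitive(double x, double y);
void set_pixel_intensity(int x, int y, double intensity);

/* Supersample one pixel (px, py) with an n x n grid of subpixel samples. */
void supersample_pixel(int px, int py, int n)
{
    int covered = 0;
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            /* subpixel centre inside the unit square of pixel (px, py) */
            double sx = px + (i + 0.5) / n;
            double sy = py + (j + 0.5) / n;
            if (inside_primitive(sx, sy))
                covered++;
        }
    }
    /* the coverage fraction weights the pixel's intensity */
    set_pixel_intensity(px, py, (double)covered / (n * n));
}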
Transformations
Transformations are used to map from one space to another along the graphics pipeline. A transformation is a process of changing the position, orientation or size of an object, or some combination of these. We know that an image or picture is drawn using picture coordinates. When this picture is displayed on a display device, we need to convert these picture coordinates to the display device's coordinates. This task is done by transformation. There are two types of transformations:
• Geometric Transformations
• Coordinate Transformations
Geometric Transformation
In this, the object itself is moved relative to a stationary coordinate system. Transformations in geometry include translation, reflection, rotation and scaling; translation, reflection and rotation move the object without altering its shape, while scaling alters its size.
Let us impose a coordinate system on a plane. An object in the plane can be considered as a set of points. Every object point P has coordinates (X, Y), and the object is the sum of all its coordinate points, as shown in the figure:
Coordinate Transformations
A coordinate transformation is a conversion from one system to another, to describe the same
space. In this, object is held stationary while the coordinate system is moved relative to the
object.
Suppose that we have two coordinate systems in the plane. The first system is located at the origin O and has coordinate axes XY, while the second system is located at the origin O' and has coordinate axes X'Y'. Then each point in the plane has two coordinate descriptions:
1).(X,Y) w.r.t. XY coordinate system
2).(X',Y') w.r.t. X'Y' coordinate system
If we think of the second system X'Y' as arising from a transformation applied to the first system
XY, then we can say that a coordinate transformation has been applied as shown in figure:
Translation
Repositioning an object along a straight-line path from one coordinate location to another by adding translation distances, tx and ty, to the original coordinate position:
x' = x + tx
y' = y + ty
Matrix Form (using homogeneous coordinates):
[x']   [1 0 tx] [x]
[y'] = [0 1 ty] [y]
[1 ]   [0 0 1 ] [1]
Example: translating the point (2, 3) by tx = 4, ty = 1 gives (6, 4).
Rotation
Rotations are rigid-body transformations that move objects without deformation. Every point on an object is rotated through the same angle. If the figure is turned in an anti-clockwise direction, the rotation is considered positive, while a negative rotation turns the figure clockwise.
A rotation moves a point along a circular path centered at the origin (the pivot). It is a simple trigonometry problem to show that rotating P = [x y] counter-clockwise by θ (theta) radians produces a new point P' = [x' y'] given by
x' = x cosθ − y sinθ
y' = x sinθ + y cosθ
For example, pretend P = [1 1] and θ = π/2. Then P' = [−1 1], which you should agree correctly matches the description. We can express the rotation in matrix form:
[x']   [cosθ  −sinθ] [x]
[y'] = [sinθ   cosθ] [y]
The Inverse of a Rotation
The inverse of a rotation by θ radians can be created by rotating by −θ radians, but this is not the best way to view it. Consider the trigonometric identities cos(−θ) = cosθ and sin(−θ) = −sinθ: substituting these into the rotation matrix shows that the inverse of a rotation matrix is simply its transpose.
Scaling
Scaling alters the size of an object. Pretend you are given a point P = (x, y, z) which is an object vertex, and let (sx, sy, sz) be the scale factors in x, y, z respectively. Then the point can be scaled to a new point by the matrix
    [sx  0   0 ]
S = [ 0  sy  0 ]
    [ 0   0  sz]
In particular:
x' = sx · x,  y' = sy · y,  z' = sz · z
To scale (enlarge or shrink) the size of an object, each object vertex is multiplied by the scale matrix S as shown above.
Inverse of a Scale
As long as we do not scale by zero, a scale can always be inverted (undone) by the matrix:
       [1/sx   0     0  ]
S^-1 = [  0   1/sy   0  ]
       [  0    0    1/sz]
The product S·S^-1 = S^-1·S = I, the 3 × 3 identity matrix.
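To make the matrix forms above concrete, the following C sketch (the type and function names are my own, not from the notes) builds 3 × 3 homogeneous 2D transformation matrices for translation, rotation and scaling and applies them to a point; composing transformations is then just matrix multiplication.

#include <math.h>
#include <stdio.h>

typedef struct { double m[3][3]; } Mat3;     /* 3x3 homogeneous 2D transform */

static Mat3 identity(void)
{
    Mat3 r = {{{1,0,0},{0,1,0},{0,0,1}}};
    return r;
}

static Mat3 translate(double tx, double ty)
{
    Mat3 r = identity();
    r.m[0][2] = tx;
    r.m[1][2] = ty;
    return r;
}

static Mat3 rotate(double theta)             /* counter-clockwise, radians */
{
    Mat3 r = identity();
    r.m[0][0] =  cos(theta); r.m[0][1] = -sin(theta);
    r.m[1][0] =  sin(theta); r.m[1][1] =  cos(theta);
    return r;
}

static Mat3 scale(double sx, double sy)
{
    Mat3 r = identity();
    r.m[0][0] = sx;
    r.m[1][1] = sy;
    return r;
}

/* Apply M to the point (x, y) treated as the column vector [x y 1]^T. */
static void transform_point(Mat3 M, double x, double y, double *xo, double *yo)
{
    *xo = M.m[0][0] * x + M.m[0][1] * y + M.m[0][2];
    *yo = M.m[1][0] * x + M.m[1][1] * y + M.m[1][2];
}

int main(void)
{
    double x, y;
    transform_point(rotate(1.5707963267948966), 1.0, 1.0, &x, &y);   /* rotate by pi/2 */
    printf("rotated: (%.1f, %.1f)\n", x, y);       /* prints (-1.0, 1.0) */
    transform_point(translate(4, 1), 2.0, 3.0, &x, &y);
    printf("translated: (%.1f, %.1f)\n", x, y);    /* prints (6.0, 4.0)  */
    transform_point(scale(2, 2), 1.0, 1.0, &x, &y);
    printf("scaled: (%.1f, %.1f)\n", x, y);        /* prints (2.0, 2.0)  */
    return 0;
}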
Mirror Reflection
Mirror reflection produces a mirror image of an object.
In matrix form, reflection about the x-axis and about the y-axis are given respectively by:
[1  0]        [-1  0]
[0 -1]  and   [ 0  1]
Shearing
Shear transformation distorts the shape of an object by shifting X or Y co-ordinate values by an
amount proportional to the co-ordinate distance from a shear reference line.
An x-direction shear relative to the x-axis and a y-direction shear relative to the y-axis are produced with the transformation matrices (shx and shy are the shear parameters).
In matrix form:
x-direction shear:  [1  shx]        y-direction shear:  [ 1   0]
                    [0   1 ]                            [shy  1]
x-direction shear of unit cube
y-direction shear of unit cube
Clipping
Clipping refers to the removal of part of a scene. Internal clipping removes parts of a picture
outside a given region; external clipping removes parts inside a region.
Clipping avoid drawing parts of primitives outside window.
• Clipping refers to the removal of part of a scene.
• Window defines part of scene being viewed.
• Must draw geometric primitives only inside window.
• Internal clipping removes parts of a picture outside a given region.
• External clipping removes parts inside a region.
Types of Clipping
• Points Clipping
• Lines Clipping
• Polygons Clipping
• Text Clipping
• Etc.
(Figure: the scene before and after clipping.)
Points Clipping
Clipping a point (x, y) is easy, assuming that the clipping window is an axis-aligned rectangle defined by (xwmin, ywmin) and (xwmax, ywmax):
Keep point (x, y) if
xwmin ≤ x ≤ xwmax
ywmin ≤ y ≤ ywmax
otherwise clip the point.
Using normalized coordinates, we have (xwmin = -1, ywmin = -1) and (xwmax = 1, ywmax = 1).
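The point-clipping test is small enough to show in full; a C sketch under the window convention above:

/* Returns 1 if the point (x, y) lies inside the axis-aligned clip window,
   0 if it should be clipped.                                             */
int clip_point(double x, double y,
               double xwmin, double ywmin, double xwmax, double ywmax)
{
    return (xwmin <= x && x <= xwmax) &&
           (ywmin <= y && y <= ywmax);
}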
Line Clipping
Line clipping is the process of removing lines or portions of lines outside of an area of interest.
Typically, any line or part thereof which is outside of the viewing area is removed.
1). A line is always drawn if both end points lie inside the rectangle.
2). A line is not drawn if both end points lie to the left, right, above, or below the clipping rectangle.
3). If neither of the above holds, the line has to be clipped.
(Examples: finding the part of a line inside the clip window.)
There are two common algorithms for line clipping:
• Cohen-Sutherland Line Clipping
• Midpoint Subdivision Algorithm
Cohen Sutherland Line Clipping
This algorithm divides a 2D space into 9 parts, of which only the middle part (viewport) is
visible. These 9 regions can be uniquely identified using a 4 bit code, often called an outcode.
The 9 regions are: top-left, top-center, top-right, center-left, center, center-right, bottom-left,
bottom-center, and bottom-right.
If the beginning coordinate and ending coordinate both fall inside the viewport then the line is
automatically drawn in its entirety. If both fall in the same region outside the viewport, it is
disregarded and not drawn. If a line's coordinates fall in different regions, the line is divided into two, with a new coordinate in the middle. The algorithm is repeated for each section; one may be drawn completely, and the other will need to be divided again, until every remaining segment can be trivially accepted or rejected.
The algorithm includes, excludes or partially includes the line based on where the two endpoints
are:
1).Both endpoints are in the viewport (bitwise OR of endpoints == 0): trivial accept.
2).Both endpoints are on the same side of the rectangle, which is not visible (bitwise AND of
endpoints != 0): trivial reject.
3).Both endpoints are in different parts: In case of this non trivial situation the algorithm finds
one of the two points that are outside the viewport (there is at least one point outside). The
intersection of the outpoint and extended viewport border is then calculated (i.e. with the
parametric equation for the line) and this new point replaces the outpoint. The algorithm repeats
until a trivial accept or reject occurs.
An outcode is computed for each of the two points in the line. The first bit is set to 1 if the point
is above the viewport. The bits in the outcode represent: Top, Bottom, Right, Left. For example
the outcode 1010 represents a point that is top-right of the viewport. Note that the outcodes for
endpoints must be recalculated on each iteration after the clipping occurs.
4-bit codes for end points in different regions
Algorithm
For each line segment
    (1) compute clip codes c1 and c2 for the two endpoints
    (2) if both are 0000
            accept line segment
        else if c1 & c2 != 0
            discard line segment
        else  /* c1 & c2 = 0 */
            clip against left
            clip against right
            clip against bottom
            clip against top
            if anything remains
                accept clipped segment
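A C sketch of the outcode computation and the two trivial tests, using the bit order Top, Bottom, Right, Left from the notes (the y-up convention here is an assumption; with y growing downward the Top/Bottom tests swap):

/* Region outcode bits, in the order Top, Bottom, Right, Left. */
#define TOP    8   /* 1000 */
#define BOTTOM 4   /* 0100 */
#define RIGHT  2   /* 0010 */
#define LEFT   1   /* 0001 */

/* Compute the 4-bit Cohen-Sutherland outcode of a point against the
   clip window (xwmin, ywmin)-(xwmax, ywmax).                          */
int outcode(double x, double y,
            double xwmin, double ywmin, double xwmax, double ywmax)
{
    int code = 0;
    if (y > ywmax) code |= TOP;
    else if (y < ywmin) code |= BOTTOM;
    if (x > xwmax) code |= RIGHT;
    else if (x < xwmin) code |= LEFT;
    return code;
}

/* Trivial tests: both codes zero -> accept; bitwise AND nonzero -> reject;
   otherwise the segment must be clipped against the window edges.         */
int trivially_accept(int c1, int c2) { return (c1 | c2) == 0; }
int trivially_reject(int c1, int c2) { return (c1 & c2) != 0; }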
Midpoint Subdivision Algorithm
This algorithm is based on the bisection method. According to this method, we calculate the midpoint of a line using the endpoint values and then divide the line into two line segments. Each segment in the partially visible category is divided again into smaller segments and categorized. The bisection and categorization process continues until all the segments are in the visible or invisible category.
The midpoint coordinates (Xm, Ym) of a line segment P1(X1, Y1), P2(X2, Y2) are given by:
Xm = (X1 + X2) / 2
Ym = (Y1 + Y2) / 2
Algorithm
1). If every endpoint is in the visible area of the clipping window, then the process is complete; else continue to the next step.
2). If the line is not visible then the process is complete; else continue to the next step.
3). If the line is partially visible then divide the line into smaller line segments using the bisection method. Then go to step 1.
4). The bisection and categorization process continues until all the segments are in the visible or invisible category.
Polygon Clipping
A polygon is generally stored as a collection of vertices. Any clipping algorithm takes one
collection, and outputs a new collection. A clipped polygon, after all, is also a polygon. Notice
that the clipped polygon often will have more vertices than the unclipped one, but it can also
have the same number, or less. If the unclipped polygon lies completely outside the clipping
boundary, the clipped polygon even has zero vertices.
(Figures: the polygon before clipping, after clipping, and the case of an open polygon.)
Sutherland Hodgman Algorithm
Sutherland-Hodgman uses a divide-and-conquer strategy to attack the problem. First, it clips the
polygon against the right clipping boundary. The resulting, partly clipped polygon is then clipped
against the top boundary, and then the process is repeated for the two remaining boundaries. (Of
course, it also works in another order.) In a way, it is the most natural thing to do. If you had to
clip a paper polygon with a pair of scissors, you would probably proceed the same way.
Sutherland-Hodgman's divide-and-conquer strategy. To clip a polygon against a rectangular
boundary window, successively clip against each boundary.
Algorithm
1).Polygons can be clipped against each edge of the window one at a time. Windows/edge
intersections, if any, are easy to find since the X or Y coordinates are already known.
2).Vertices which are kept after clipping against one window edge are saved for clipping against
the remaining edges.
3).Note that the number of vertices usually changes and will often increases.
4).We are using the Divide and Conquer approach.
To clip against one boundary, the algorithm loops through all polygon vertices. At each step, it
considers two of them, called 'previous' and 'current.' First, it determines whether these vertices
are inside or outside the clipping boundary. This, of course, is simply a matter of comparing the
horizontal or vertical position to the boundary's position.
Then, it applies the following simple rules:
1).If 'previous' and 'current' are both inside: Output 'current.'
2).If 'previous' is inside, and 'current' is outside: Output the intersection point of the
corresponding edge and the clipping boundary.
3).If 'previous' and 'current' are both outside: Do nothing.
4).If 'previous' is outside, and 'current' is inside: Output the intersection point, and then output
'current.'
This way, you get a new polygon, clipped against one boundary, and ready to be clipped against
the next boundary. The method works, but has one disadvantage: To clip against all four
boundaries, you have to store the intermediate, partly clipped polygons. It's evident that this is a
costly operation.
The Sutherland-Hodgman polygon clipping algorithm clips polygons against convex clipping
windows. It does so by clipping the subject polygon against each clip edge producing
intermediate subject polygons. Although we have not done so, the Sutherland-Hodgman
algorithm easily extends to 3 dimensions. The Sutherland-Hodgman may produce connecting
lines that were not in the original polygon. When the subject polygon is concave (not convex)
these connecting lines may be undesirable artifacts.
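As an illustration of the clip-against-one-boundary step described above, here is a hedged C sketch specialized to the left window boundary x = xmin (the Point type and function names are my own; the out array must have room for up to twice as many vertices as the input):

typedef struct { double x, y; } Point;

/* Intersection of the edge from p to c with the vertical line x = xmin. */
static Point intersect_left(Point p, Point c, double xmin)
{
    Point r;
    double t = (xmin - p.x) / (c.x - p.x);
    r.x = xmin;
    r.y = p.y + t * (c.y - p.y);
    return r;
}

/* One Sutherland-Hodgman pass: clip the polygon in[0..n_in-1] against the
   left boundary x = xmin, writing the result to out[] and returning the
   new vertex count. The four rules of the notes appear in the branches.  */
int clip_against_left(const Point in[], int n_in, Point out[], double xmin)
{
    int n_out = 0;
    for (int i = 0; i < n_in; i++) {
        Point prev = in[(i + n_in - 1) % n_in];   /* 'previous' vertex */
        Point curr = in[i];                        /* 'current' vertex  */
        int prev_in = prev.x >= xmin;
        int curr_in = curr.x >= xmin;

        if (prev_in && curr_in)            /* rule 1: both inside         */
            out[n_out++] = curr;
        else if (prev_in && !curr_in)      /* rule 2: leaving the window  */
            out[n_out++] = intersect_left(prev, curr, xmin);
        else if (!prev_in && curr_in) {    /* rule 4: entering the window */
            out[n_out++] = intersect_left(prev, curr, xmin);
            out[n_out++] = curr;
        }
        /* rule 3: both outside -> output nothing */
    }
    return n_out;
}

Clipping against the right, bottom and top boundaries works the same way with the inside test and intersection changed accordingly, which is why the full algorithm is just four successive passes.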
Difference between Cohen-Sutherland algorithm & Sutherland-Hodgman's polygon-clipping
algorithm
The difference between polygon-clipping strategy for a polygon and the Cohen-Sutherland
algorithm for clipping a line: The polygon clipper clips against four edges in succession, whereas
the line clipper tests the outcode to see which edge is crossed, and clips only when necessary.
Window & Viewport
Window
The window is the rectangular region of the world-coordinate scene that is selected for display; it defines what we want to display.
Viewport
The viewport is the part of the display screen where the contents of the clipping window are mapped and drawn; it is a rectangular region of the screen in which the selected area is displayed.
Windowing/Viewing Transformation
Displaying an image of a picture involves mapping the coordinates of the points and lines that
form the picture into the appropriate coordinates on the device or workstation where the image is
to be displayed. This is done through the use of coordinate transformations known as viewing
transformations. To perform a viewing transformation we deal with window and viewport.
We use several different coordinate systems:
World Coordinate System (WCS)
The WCS describes the picture to be displayed with world coordinates.
Physical Device Coordinate System (PDCS)
The PDCS corresponds to the device where the image of the picture is to be displayed.
Normalized Device Coordinate System (NDCS)
In the NDCS, the display area of the virtual display device corresponds to the unit (1 × 1) square whose lower left corner is at the origin of the coordinate system.
The Viewing Transformation is given by
V = W.N
where
N : the normalization transformation maps the World Coordinate System to the Normalized Device Coordinate System.
W : the workstation transformation maps the Normalized Device Coordinate System to the Physical Device Coordinate System.
The viewing transformation involves scaling, so undesirable distortions may be introduced. For example, circles in the window may be displayed as ellipses and squares as rectangles. To avoid this distortion we use the concept of aspect ratio.
The aspect ratio of a window or viewport is:
a = (Xmax − Xmin) / (Ymax − Ymin)
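A minimal C sketch of the window-to-viewport mapping that the viewing transformation performs (the names are assumptions; note that the x and y scale factors differ unless the window and viewport aspect ratios match, which is exactly the distortion mentioned above):

/* Map a world point (xw, yw) in the window (xwmin, ywmin)-(xwmax, ywmax)
   to the viewport (xvmin, yvmin)-(xvmax, yvmax).                          */
void window_to_viewport(double xw, double yw,
                        double xwmin, double ywmin, double xwmax, double ywmax,
                        double xvmin, double yvmin, double xvmax, double yvmax,
                        double *xv, double *yv)
{
    double sx = (xvmax - xvmin) / (xwmax - xwmin);   /* horizontal scaling */
    double sy = (yvmax - yvmin) / (ywmax - ywmin);   /* vertical scaling   */
    *xv = xvmin + (xw - xwmin) * sx;
    *yv = yvmin + (yw - ywmin) * sy;
}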
PROJECTION TRANSFORMATION
Projection is defined as the technique of mapping any 3D object onto a 2D scene. There are two types of projection: (1) parallel projection (2) perspective projection.
Parallel projection: A parallel projection is a projection in which coordinate positions of an object are transferred to the view plane along parallel lines. A parallel projection preserves the relative proportions of the object. This is the method used in computer-aided drafting and design to produce scale drawings of 3D objects.
Perspective projection: It is a projection in which object positions are transformed to projection coordinates along lines that converge to a point behind the view plane. A perspective projection does not preserve the relative proportions of objects. Perspective views of a scene are more realistic because distant objects in the projected display are reduced in size.
MODULE-2
Visible surface detection methods
Back-Face Detection
In a solid object, there are surfaces which are facing the viewer (front faces) and there are
surfaces which are opposite to the viewer (back faces).
These back faces contribute to approximately half of the total number of surfaces. Since we
cannot see these surfaces anyway, to save processing time, we can remove them before the
clipping process with a simple test. Each surface has a normal vector. If this vector is pointing in the direction of the center of projection, it is a front face and can be seen by the viewer. If it is pointing away from the center of projection, it is a back face and cannot be seen by the viewer.
The test is very simple: if the z component of the normal vector is positive, then it is a back face. If the z component of the vector is negative, it is a front face. Note that this technique only caters well for non-overlapping convex polyhedra. For other cases, where there are concave polyhedra or overlapping objects, we still need to apply other methods to further determine where the obscured faces are partially or completely hidden by other objects (e.g. using the Depth-Buffer Method or the Depth-Sort Method).
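A small sketch of the back-face test in C (the Vec3 type is my own; the first function follows the z-component convention stated above, the second shows the more general dot-product form for an arbitrary viewing direction pointing from the viewer into the scene):

typedef struct { double x, y, z; } Vec3;

/* Back-face test under the convention of the notes:
   a positive z component of the surface normal marks a back face. */
int is_back_face(Vec3 n)
{
    return n.z > 0.0;
}

/* More generally, with a viewing direction v (from the eye into the scene),
   a face is a back face when its outward normal points the same way as v.  */
int is_back_face_general(Vec3 n, Vec3 v)
{
    double dot = n.x * v.x + n.y * v.y + n.z * v.z;
    return dot > 0.0;
}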
Depth-Buffer Method (Z-Buffer Method)
This approach compares surface depths at each pixel position on the projection plane. Object
depth is usually measured from the view plane along the z axis of a viewing system.
This method requires 2 buffers: one is the image buffer and the other is called the z-buffer (or the
depth buffer). Each of these buffers has the same resolution as the image to be
captured. As surfaces are processed, the image buffer is used to store the color values of each
pixel position and the z-buffer is used to store the depth values for each (x,y) position.
Algorithm:
1. Initially each pixel of the z-buffer is set to the maximum depth value (the depth of the back
clipping plane).
2. The image buffer is set to the background color.
3. Surfaces are rendered one at a time.
4. For the first surface, the depth value of each pixel is calculated.
5. If this depth value is smaller than the corresponding depth value in the z-buffer (ie. it is closer
to
the view point), both the depth value in the z-buffer and the color value in the image buffer are
replaced by the depth value and the color value of this surface calculated at the pixel position.
6. Repeat step 4 and 5 for the remaining surfaces.
7. After all the surfaces have been processed, each pixel of the image buffer represents the color
of a
visible surface at that pixel.
- This method requires an additional buffer (if compared with the Depth-Sort Method) and the
overheads involved in updating the buffer. So this method is less attractive in the cases where
only a few objects in the scene are to be rendered.
- Simple and does not require additional data structures.
- The z-value of a polygon can be calculated incrementally.
- No pre-sorting of polygons is needed.
- No object-object comparison is required.
- Can be applied to non-polygonal objects.
- Hardware implementations of the algorithm are available in some graphics workstation.
- For large images, the algorithm could be applied to, eg., the 4 quadrants of the image
separately,
so as to reduce the requirement of a large additional buffer.
Scan-Line Method
In this method, as each scan line is processed, all polygon surfaces intersecting that line are
examined
to determine which are visible. Across each scan line, depth calculations are made for each
overlapping surface to determine which is nearest to the view plane. When the visible surface has
been determined, the intensity value for that position is entered into the image buffer.
For each scan line do
Begin
    For each pixel (x,y) along the scan line do ------------ Step 1
    Begin
        z_buffer(x,y) = max_depth          { the depth of the back clipping plane }
        Image_buffer(x,y) = background_color
    End
    For each polygon in the scene do ----------- Step 2
    Begin
        For each pixel (x,y) along the scan line that is covered by the polygon do
        Begin
            2a. Compute the depth or z of the polygon at pixel location (x,y).
            2b. If z < z_buffer(x,y) then
                    Set z_buffer(x,y) = z
                    Set Image_buffer(x,y) = polygon's colour
        End
    End
End
Depth-Sort Method
1. Sort all surfaces according to their distances from the view point.
2. Render the surfaces to the image buffer one at a time starting from the farthest surface.
3. Surfaces close to the view point will replace those which are far away.
4. After all surfaces have been processed, the image buffer stores the final image.
The basic idea of this method is simple. When there are only a few objects in the scene, this
method
can be very fast. However, as the number of objects increases, the sorting process can become
very complex and time consuming.
Illumination models:
Polygon-Rendering Methods
Gouraud Shading
Gouraud shading (Henri Gouraud, 1971) is used to achieve smooth lighting on low-polygon surfaces using the Lambertian diffuse lighting model.
1. Calculates the surface normals for the polygons.
2. Normals are then averaged for all the polygons that meet at each vertex to
produce a vertex normal.
3. Lighting computations are then performed to produce intensities at vertices.
4. These intensities are interpolated along the edges of the polygons.
5. The polygon is filled by lines drawn across it that interpolate between the
previously calculated edge intensities.
Advantage - Gouraud shading is superior to flat shading, which requires significantly less processing than Gouraud but gives low-polygon models a sharp, faceted look.
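A fragment-sized sketch of steps 4-5 above: once the vertex intensities have been interpolated down the polygon edges to a scan line, the span between the two edge crossings is filled by linear interpolation (set_pixel_intensity is an assumed helper, not from the notes):

/* Linear interpolation helper used throughout Gouraud shading. */
static double lerp(double a, double b, double t)
{
    return a + t * (b - a);
}

void set_pixel_intensity(int x, int y, double intensity);   /* assumed helper */

/* Fill one scan line y between the edge crossings (x_left, i_left) and
   (x_right, i_right), interpolating the intensity at every pixel.       */
void gouraud_span(int y, int x_left, double i_left, int x_right, double i_right)
{
    for (int x = x_left; x <= x_right; x++) {
        double t = (x_right == x_left) ? 0.0
                 : (double)(x - x_left) / (x_right - x_left);
        set_pixel_intensity(x, y, lerp(i_left, i_right, t));
    }
}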
Phong Shading (Phong Interpolation Model)
• An improved version of Gouraud shading that provides a better approximation to the Phong shading model.
• Main problem with Gouraud shading: when a specular highlight occurs near the center of a large triangle, it will usually be missed entirely. Phong shading fixes the problem.
1. Calculate the surface normals at the vertices of polygons in a 3D computer
model.
2. Normals are then averaged for all the polygons that meet at each vertex.
3. These normals are interpolated along the edges of the polygons.
4. Lighting computations are then performed to produce intensities at positions
along scanlines.
Module-3
Multimedia
Multimedia refers to content that uses a combination of different content forms. Multimedia
includes a combination of text, audio, still images, animation, video, or interactivity content
forms.
Multimedia is usually recorded and played, displayed, or accessed by information content
processing devices, such as computerized and electronic devices, but can also be part of a live
performance. Multimedia devices are electronic media devices used to store and experience
multimedia content. Multimedia is distinguished from mixed media in fine art; by including
audio, for example, it has a broader scope. The term "rich media" is synonymous with interactive multimedia. Hypermedia can be considered one particular multimedia application.
Major characteristics of multimedia
Multimedia presentations may be viewed in person on stage, projected, transmitted, or played locally with a media player. A broadcast may be a live or recorded multimedia presentation. Broadcasts and recordings can be either analog or digital electronic media technology. Digital online multimedia may be downloaded or streamed. Streaming multimedia may be live or on-demand.
Multimedia games and simulations may be used in a physical environment with special effects,
with multiple users in an online network, or locally with an offline computer, game system, or
simulator.
The various formats of technological or digital multimedia may be intended to enhance the users' experience, for example to make it easier and faster to convey information, or, in entertainment or art, to transcend everyday experience.
A lasershow is a live multimedia performance.
Application
Multimedia finds its application in various areas including, but not limited to, advertisements,
art, education, entertainment, engineering, medicine, mathematics, business, scientific research
and spatial temporal applications. Several examples are as follows:
Creative industries
Creative industries use multimedia for a variety of purposes ranging from fine arts, to
entertainment, to commercial art, to journalism, to media and software services provided for any
of the industries listed below. An individual multimedia designer may cover the spectrum
throughout their career. Requests for their skills range from technical, to analytical, to creative.
Commercial uses
Much of the electronic old and new media used by commercial artists is multimedia. Exciting
presentations are used to grab and keep attention in advertising. Business to business, and
interoffice communications are often developed by creative services firms for advanced
multimedia presentations beyond simple slide shows to sell ideas or liven-up training.
Commercial multimedia developers may be hired to design for governmental services and
nonprofit services applications as well.
Entertainment and fine arts
In addition, multimedia is heavily used in the entertainment industry, especially to develop
special effects in movies and animations (VFX, 3D animation, etc.). Multimedia games are a
popular pastime and are software programs available either as CD-ROMs or online. Some video
games also use multimedia features. Multimedia applications that allow users to actively
participate instead of just sitting by as passive recipients of information are called Interactive
Multimedia. In the Arts there are multimedia artists, whose minds are able to blend techniques
using different media that in some way incorporates interaction with the viewer.
Education
In Education, multimedia is used to produce computer-based training courses (popularly called
CBTs) and reference books like encyclopedia and almanacs. A CBT lets the user go through a
series of presentations, text about a particular topic, and associated illustrations in various
information formats. Edutainment is the combination of education with entertainment, especially
multimedia entertainment.
The idea of media convergence is also becoming a major factor in education, particularly higher
education. Defined as separate technologies such as voice (and telephony features), data (and
productivity applications) and video that now share resources and interact with each other,
synergistically creating new efficiencies, media convergence is rapidly changing the curriculum
in universities all over the world. Likewise, it is changing the availability, or lack thereof, of jobs
requiring this savvy technological skill.
The English education in middle schools in China is well funded and assisted with various equipment. In spite of this, the original objective has not been achieved to the desired effect. The government, schools, families, and students spend a lot of time working on improving scores, but hardly gain practical skills. English education today has gone into a vicious circle. Educators need to consider how to perfect the education system to improve students' practical ability in English. Therefore an efficient way should be used to make the class vivid. Multimedia teaching
will bring students into a class where they can interact with the teacher and the subject.
Multimedia teaching is more intuitive than old ways; teachers can simulate situations in real life.
In many circumstances teachers do not have to be there, students will learn by themselves in the
class. More importantly, teachers will have more approaches to stimulating students' passion for learning.
Journalism
Newspaper companies all over are also trying to embrace the new phenomenon by implementing
its practices in their work. While some have been slow to come around, other major newspapers
like The New York Times, USA Today and The Washington Post are setting the precedent for the
positioning of the newspaper industry in a globalized world.
Multimedia reporters who are mobile (usually driving around a community with cameras, audio
and video recorders, and wifi-equipped laptop computers) are often referred to as Mojos, from
mobile journalist.
Engineering
Software engineers may use multimedia in Computer Simulations for anything from
entertainment to training such as military or industrial training. Multimedia for software
interfaces are often done as a collaboration between creative professionals and software
engineers.
Industry
In the Industrial sector, multimedia is used as a way to help present information to shareholders,
superiors and coworkers. Multimedia is also helpful for providing employee training, advertising
and selling products all over the world via virtually unlimited web-based technology.
Mathematical and scientific research
In mathematical and scientific research, multimedia is mainly used for modeling and simulation.
For example, a scientist can look at a molecular model of a particular substance and manipulate
it to arrive at a new substance. Representative research can be found in journals such as the
Journal of Multimedia.
Medicine
In Medicine, doctors can get trained by looking at a virtual surgery or they can simulate how the
human body is affected by diseases spread by viruses and bacteria and then develop techniques
to prevent it.
Document imaging
Document imaging is a technique that takes hard copy of an image/document and converts it into
a digital format (for example, scanners).
Disabilities
Ability Media allows those with disabilities to gain qualifications in the multimedia field so they
can pursue careers that give them access to a wide array of powerful communication forms.
Hypermedia and Multimedia
When someone turns on a computer, puts a CD (compact disc) in its CD drive, and listens to her
favorite music while she works on a paper, she is experiencing multimedia. Other examples of
multimedia usage include looking at pictures taken from a digital camera. In contrast, surfing the
World Wide Web, following links from one site to another, looking for all types of information,
is called experiencing hypermedia. The major difference between multimedia and hypermedia is
that the user is more actively involved in the hypermedia experience, whereas the multimedia
experience is more passive.
Hypermedia is an enhancement of hypertext, the non-sequential access of text documents, using
a multimedia environment and providing users the flexibility to select which document they want
to view next based on their current interests. The path followed to get from document to
document changes from user to user and is very dynamic. This "make your own adventure" type
of experience sets hypermedia apart.
Multimedia is defined as the integration of sound, animation, and digitized video with more
traditional types of data such as text. It is an application-oriented technology that is used in a
variety of ways, for example, to enhance presentations, and is based on the increasing capability
of computers to store, transmit, and present many types of information. Some examples of
multimedia applications are: business presentations, online newspapers, distance education, and
interactive gaming.
Hypermedia
Hypermedia tools focus on the interactive power of computers, which makes it easy for users to
explore a variety of paths through many information sources. As opposed to conventional
documents, such as books, that one normally reads one page after the other in the order set by the
author, hypermedia documents are very flexible and allow one to explore related documents in
any order and navigate through them in any direction.
The hypermedia model is fundamental to the structure of the World Wide Web, which is often
based on a relational database organization. In this model, documents are interconnected as in a
network, which facilitates extensive cross-referencing of related items. Users can browse
effectively through the data by following links connecting associated topics or keywords. Object-oriented and hypermedia models are becoming routine for managing very large multimedia systems such as digital libraries.
Multimedia Components
Text
The most basic type of additional information is that which tells us what text or texts we are
looking at. A computer file name may give us a clue to what the file contains, but in many cases
filenames can only provide us with a tiny amount of information. Information about the nature of
the text can often consist of much more than a title and an author. These information fields
provide the document with a whole document header which can be used by retrieval programs to
search and sort on particular variables. For example, we might only be interested in looking at
texts in a corpus that were written by women, so we could ask a computer program to retrieve
texts where the author's gender variable is equal to "FEMALE".
When text is correctly structured and formatted, it can be the most flexible way to present
content. To make distributed online learning accessible, developers of learning platforms must
provide a means to render digital text in alternative formats.
Specifically, it should be possible to render text as:
1).Visual information. Text can be displayed on computer screens or other electronic devices
(e.g. personal digital assistants, cell phones, e-book readers).
2).Audio information. Text can be translated into speech using recordings or via synthesized
speech provided by a computer.
3).Tactile information. Text can be displayed on refreshable Braille displays or printed using a
Braille embosser.
Audio
Audio elements can add to the general appeal of online learning materials while making them more accessible to print-impaired learners, such as those with visual impairments
or dyslexia. However, developers should provide alternatives to ensure that learners who are deaf
or hard-of-hearing are not disadvantaged.
Images
Images can provide essential information. But without text support, images are not accessible to users who are blind or have low vision. Developers must provide users with a way to access visual information. Providing text identification, or alternative text, will also benefit users of text-only browsers, such as mobile phones. In addition to providing alternative text, developers should ensure that images are scalable, so that users can enlarge them for better clarity.
Animations
Computer animation is the art of creating moving images via the use of computers. It is a
subfield of computer graphics and animation. Increasingly it is created by means of 3D computer
graphics, though 2D computer graphics are still widely used for stylistic, low bandwidth, and
faster real-time rendering needs. Sometimes the target of the animation is the computer itself, but
sometimes the target is another medium, such as film. It is also referred to as CGI (computer-generated imagery or computer-generated imaging), especially when used in films.
To create the illusion of movement, an image is displayed on the computer screen then quickly
replaced by a new image that is similar to the previous image, but shifted slightly. This technique
is identical to the illusion of movement in television and motion pictures.
Reference Books
1. Donald Hearn & M. Pauline Baker, "Computer Graphics with OpenGL", Third Edition, 2004, Pearson Education, Inc., New Delhi.
2. Ze-Nian Li and Mark S. Drew, "Fundamentals of Multimedia", First Edition, 2004, PHI Learning Pvt. Ltd., New Delhi.
3. Helena Wong, "CS3162 Introduction to Computer Graphics", 2000.
4. http://nptel.ac.in
5. http://www.w3professors.com