About
199 Publications
10,877 Reads
4,475 Citations
Publications (199)
Given a set of points, it is easy to compute a polynomial that passes through the points. The Lagrange polynomial (LP) of Section 10.2 is an example of such a polynomial. However, as the discussion in Section 8.8 (especially Exercise 8.16) illustrates, a curve based on a high-degree polynomial may wiggle wildly and its shape may be far from what th...
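As a brief illustration of the idea mentioned in this abstract (not code from the book itself), the following Python sketch evaluates the Lagrange polynomial through a small, hypothetical set of points; the point list and function name are assumptions made for the example.

```python
# Minimal sketch of Lagrange polynomial interpolation (illustrative only).
def lagrange(points, x):
    """Evaluate the Lagrange polynomial through `points` at `x`.

    points: list of (xi, yi) pairs with distinct xi values.
    """
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        # Basis polynomial L_i(x): equals 1 at xi and 0 at every other xj.
        basis = 1.0
        for j, (xj, _) in enumerate(points):
            if j != i:
                basis *= (x - xj) / (xi - xj)
        total += yi * basis
    return total

# The interpolating curve passes exactly through the given points.
pts = [(0.0, 1.0), (1.0, 3.0), (2.0, 2.0), (3.0, 5.0)]
print([round(lagrange(pts, x), 6) for x, _ in pts])  # reproduces the y values
print(lagrange(pts, 1.5))                            # value between the points
```

With many points (a high-degree polynomial), values between the nodes can swing far from the data, which is the wiggling behavior the abstract refers to.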
This is the second volume of A Manual of Computer Graphics. This textbook/reference is big because the discipline of computer graphics is big. There are simply many topics, techniques, and algorithms to discuss, explain, and illustrate by examples. Because of the large number of pages, the book has been produced in two volumes. However, this divisi...
We are surrounded by objects of every size, shape, and color, but we don’t see all of them. Nearby objects tend to obscure parts of distant objects. The visibility problem, the problem of deciding which elements of a rendered scene are visible by a given observer, and which are hidden, has to be solved by any three-dimensional graphics program or s...
Webster defines “animate” as “to give life to; to make alive.” This is precisely what we feel when we watch a well-made piece of animation, and this is the reason why traditional animation has always been popular and why computer animation is such a successful field. In fact, computer graphics has developed over the years in three stages. The first...
The term perspective refers to several techniques that create the illusion of depth (three dimensions) on a two-dimensional surface. Linear perspective is one of these methods. It can be defined as a method for correctly placing objects in a painting or a drawing so they appear closer to the observer or farther away from him. The keyword in this de...
Rendering is a general term for methods that display a realistic-looking three-dimensional solid object on a two-dimensional output device (normally screen or paper). Perhaps the simplest way to render an object is to display its surface as a wireframe. The next step in rendering is to display, as a wireframe, only those parts of the surface that w...
The function plotted in Figure 22.1 is a complicated wave. It oscillates rapidly and goes up and down unpredictably. Yet we know from experience that if we magnify such a wave and examine small parts of it in detail, we may eventually find parts that are smooth and behave like the curves of most common functions such as square root, logarithm, sine...
In addition to the parallel and perspective projections, other projections may be developed that are useful for special applications or that create ornamental or artistic effects.
Real-life methods for constructing curves and surfaces often start with points and vectors, which is why we start with a short discussion of the properties of these mathematical entities. The material in this section applies to both two-dimensional and three-dimensional points and vectors, but the examples are given in two dimensions.
In a raster-scan graphics system, two steps are necessary in order to display a geometric figure: (1) a scan-converting algorithm should be executed to select the best pixels (the ones closest to the ideal figure) and (2) the selected pixels should be turned on.
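As a generic example of step (1), and not the algorithm of this particular text, the sketch below uses Bresenham's line algorithm to select the pixels closest to an ideal line segment; endpoints and names are hypothetical.

```python
# Illustrative scan conversion of a line segment with Bresenham's algorithm.
def bresenham_line(x0, y0, x1, y1):
    """Return the pixels closest to the ideal line from (x0, y0) to (x1, y1)."""
    pixels = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        pixels.append((x0, y0))          # step 1: select the pixel
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return pixels

# Step 2 would then turn the selected pixels on in the frame buffer.
print(bresenham_line(0, 0, 7, 3))
```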
The first electronic computers were built in the 1940s, during and after World War II, and were used mostly for cryptanalysis (code breaking) and to compute firing tables. These are numeric applications, which require CPU time but use little input or output. Already in the 1950s, after using computers for just a few years, computer designers and us...
The curve and surface methods of the preceding chapters are based on points. Using polynomials, it is easy to construct a parametric curve segment (or surface patch) that passes through a given one-dimensional array or two-dimensional grid of points.
Bézier methods for curves and surfaces are popular, are commonly used in practical work, and are described here in detail. Two approaches to the design of a Bézier curve are described, one using Bernstein polynomials and the other based on the mediation operator. Both rectangular and triangular Bézier surface patches are discussed, with examples.
The surfaces described in this chapter are obtained by transforming a curve. They are not generated as interpolations or approximations of points or vectors and are consequently different from the surfaces described in previous chapters.
In the last few decades, the digital computer has risen to become an integral part of our lives. It influences every aspect of our existence from commerce to kitchens and from entertainment to education. A large part of that influence is graphic
The projections discussed in this book transform a scene from three dimensions to two dimensions. Projections are needed because computer graphics is about designing and constructing three-dimensional scenes, but graphics output devices are two-dimensional.
The concept of a transform is familiar to mathematicians. A transform is a standard mathematical tool that is employed to solve problems in many areas. The idea is to transform a mathematical quantity (a number, a vector, a function, or anything else) to another form, where it may look unfamiliar but may have useful properties.
Given a set of points, it is possible to construct a polynomial that when plotted passes through the points. When fully computed and displayed, such a polynomial becomes a curve that’s referred to as a polynomial interpolation of the points.
The 1960s were the golden age of computer graphics. This was the time when many of its basic methods, algorithms, and techniques were developed, tested, and improved. Two of the most important concepts that were identified and studied in those years were transformations and projections.
The chief topics discussed in this chapter are the nature of light, the nature of color, color and human vision, various color spaces (or models), additive and subtractive colors, complementary colors, and the CIE diagram.
The Bézier curve can be constructed either as a weighted sum of control points or by the process of scaffolding. These are two very different approaches that lead to the same result.
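A small sketch, assuming generic two-dimensional control points, showing that the two constructions mentioned here (Bernstein weighted sum and de Casteljau scaffolding) produce the same curve point; it is an illustration, not the book's own code.

```python
# Illustrative sketch: the two constructions of a Bézier curve agree.
from math import comb

def bezier_bernstein(ctrl, t):
    """Weighted sum of control points with Bernstein polynomial weights."""
    n = len(ctrl) - 1
    x = sum(comb(n, i) * (1 - t) ** (n - i) * t ** i * px for i, (px, _) in enumerate(ctrl))
    y = sum(comb(n, i) * (1 - t) ** (n - i) * t ** i * py for i, (_, py) in enumerate(ctrl))
    return (x, y)

def bezier_scaffolding(ctrl, t):
    """Scaffolding (de Casteljau): repeated linear interpolation of the points."""
    pts = list(ctrl)
    while len(pts) > 1:
        pts = [((1 - t) * ax + t * bx, (1 - t) * ay + t * by)
               for (ax, ay), (bx, by) in zip(pts, pts[1:])]
    return pts[0]

ctrl = [(0, 0), (1, 2), (3, 3), (4, 0)]
print(bezier_bernstein(ctrl, 0.4))
print(bezier_scaffolding(ctrl, 0.4))   # same point, computed by scaffolding
```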
“An image is worth a thousand words” is a well-known saying. It reflects the truth because the human eye–brain system is a high-capacity, sophisticated data processor. We can “digest” a huge amount of information if we receive it visually, as an image, rather than as a list of numbers. This is the reason for the success of computer graphics. Howeve...
The concept of a transform was introduced in Section 24.1 and the rest of Chapter 24 discusses orthogonal transforms. The transforms dealt with in this chapter are different and are referred to as subband transforms, because they partition an image into various bands or regions that contain different features of the image.
In order to achieve realism, the many algorithms and techniques employed in computer graphics have to construct mathematical models of curved surfaces, models that are based on curves.
Standards are useful in many aspects of everyday life. A notable example is the automobile. Early cars had controls whose placement and function were dictated by engineers, often without much thought of the driver. As car technology matured, a consensus slowly emerged, and virtually all current passenger cars are driven in the same way and have the...
The following appendixes are an integral part of this book. They contain material necessary for a full understanding of the topics discussed elsewhere in the book.
We start this chapter with codes for the integers. This is followed by many types of variable-length codes that are based on diverse principles, have been developed by different approaches, and have various useful properties and features.
Sound recording and the movie camera were among the greatest inventions of Thomas Edison. They were later united when “talkies” were developed, and they are still used together in video recordings. This unification is one reason for the popularity of movies and video. With the rapid advances in computers in the 1980s and 1990s came multimedia applic...
Statistical data compression methods employ variable-length codes, with the shorter codes assigned to symbols or groups of symbols that appear more often in the data (have a higher probability of occurrence). Designers and implementors of variable-length codes have to deal with the two problems of (1) assigning codes that can be decoded unambiguousl...
Statistical compression methods use a statistical model of the data, which is why the quality of compression they achieve depends on how good that model is. Dictionary-based compression methods do not use a statistical model, nor do they use variable-length codes. Instead they select strings of symbols and encode each string as a token using a dictio...
A network vulnerability is an inherent weakness in the design, implementation, or use of a hardware component or a software routine. A vulnerability invites attacks and makes the network susceptible to threats. A threat is anything that can disrupt the operation of the network. A threat can even be accidental or an act of nature, but threats are m...
In this age of computers, the Internet, and massive data bases that never lose or forget anything, it is no wonder that we feel we are losing our privacy and we get very concerned about it. The reason for this loss can be found in the phrase “once something is released into the Internet, it can never be completely deleted.” We give away bits and pi...
The history and main features of several computer viruses and worms are described in this chapter. More examples can be found in Appendix C. Due to the prevalence of rogue software, there are many similar descriptions on the Internet. Notice that most of the examples are from the 1980s and 1990s, because this was the time when new, original, and ve...
Spyware is the general name of an entire range of nasty software that monitors the users’ activities, collects information such as keystrokes, screen images, and file directories, and either saves this information or sends it to a remote location without the knowledge or consent of the computer owner. Spyware has become one of the biggest headaches...
Identity theft is the crime of pretending to be someone else. The thief goes to the trouble of obtaining someone’s identity in order to gain financially from fraud, leaving the victim to sort out the resulting mess as best they can. Identity thieves use three main methods to obtain personal information:
A Trojan horse is a common type of rogue software. Such a program hides in a computer and has some malicious function. In contrast to viruses and worms, Trojans do not replicate. This chapter summarizes the main features of Trojans and also discusses how to modify a compiler in a devious way, to make it plant Trojans in programs that it compiles. T...
Billy left home when he was in his teens and went to seek his fortune in Australia. When he returned home 30 years later as a mature, successful man, his relatives came to meet him at the dock in Southampton. He later remarked on this meeting to a friend “after not having seen my mother for 30 years, I have recognized her instantly among my many au...
What normally comes to mind, when hearing about or discussing computer security, is either viruses or some of the many security issues that have to do with networks, such as loss of privacy, identity theft, or how to secure sensitive data sent on a network. Computer security, however, is a vast discipline that also includes mundane topics such as h...
Computer viruses are the most familiar type of rogue software. A virus is a computer program that hides inside another program in a computer or on a disk drive, that attempts to propagate itself to other computers, and that often includes some destructive function (payload). This chapter discusses the main features of viruses and what makes them di...
The discussion of rogue software in the preceding chapters illustrates how dangerous this menace is. A worm can appear out of nowhere and infect all the computers of an organization within minutes. Once deeply embedded, it starts sending tentacles outside, looking for more computers to infect, and may also look inside for sensitive information to s...
Text does not occupy much space in the computer. An average book, consisting of a million characters, can be stored uncompressed in about 1 Mbyte, because each character of text occupies one byte (the Colophon at the end of the book illustrates this with data collected from the book itself).
"A wonderful treasure chest of information; spanning a wide range of data compression methods, from simple text compression methods to the use of wavelets in image compression. It is unusual for a text on compression to cover the field so completely." - ACM Computing Reviews "Salomon's book is the most complete and up-to-date reference on the subje...
The dates of most of the important historical events are known, but not always very precisely. We know that Kublai Khan, grandson of Genghis Khan, founded the Yuan dynasty in 1280 (it lasted until 1368), but we don’t know precisely (i.e., the month, day, and hour) when this act took place. A notable exception to this state of affairs is the modern a...
The many codes included in this chapter have a common feature: they are robust. Any errors that creep into a string of such codewords can either be detected (or even corrected automatically) or have only limited effects and do not propagate indefinitely. The chapter starts with a discussion of the principles of error-control codes.
The first part of this chapter discusses digital images and general approaches to image compression. This is followed by a description of about 30 different compression methods. The author would like to start with the following observations: 1. Why were these particular methods included in the book, while others were ignored? The simple answer is t...
Data is compressed by reducing its redundancy, but this also makes the data less reliable, more prone to errors. Increasing the integrity of data, on the other hand, is done by adding check bits and parity bits, a process that increases the size of the data, thereby increasing redundancy. Data compression and data reliability are therefore opposite...
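A tiny sketch of the check-bit idea mentioned here, using a single even-parity bit (a generic illustration under assumed data, not the book's code): reliability is bought by adding bits, which is exactly the redundancy that compression tries to remove.

```python
# Illustrative even-parity check bit: reliability costs extra bits.
def add_parity(bits):
    """Append one bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def check_parity(bits):
    """True if no single-bit error is detected."""
    return sum(bits) % 2 == 0

word = [1, 0, 1, 1, 0, 0, 1]
coded = add_parity(word)           # 8 bits instead of 7: redundancy increases
print(coded, check_parity(coded))  # True
coded[2] ^= 1                      # flip one bit to simulate an error
print(coded, check_parity(coded))  # False: the error is detected
```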
Previous chapters discuss the main classes of compression methods: RLE, statistical methods, and dictionary-based methods. There are data compression methods that are not easy to classify and do not clearly belong in any of the classes discussed so far. A few such methods are described here.
The text's goal is to provide a clear presentation of both the principles of data compression and all the important compression methods currently in use. The 1,092-page book consists of eight chapters and covers some of the following topics: basic compression techniques; statistical methods for compression such as Huffman and arithmetic coding; dic...
How can a given data file be compressed? Compression amounts to eliminating the redundancy in the data, so the first step is to find the source of redundancy in each type of data. Once we understand what causes redundancy in a given type of data, we can propose an approach to eliminating the redundancy.
A digital image is a rectangular array of dots, or picture elements, arranged in m rows and n columns. The expression m×n is called the resolution of the image, and the dots are called pixels (except in the cases of fax images and video compression, where they are referred to as pels). The term “resolution” is often also used to indicate the number...
The Huffman algorithm is based on the probabilities of the individual data symbols. These probabilities become a statistical model of the data. As a result, the compression produced by this method depends on how good that model is. Dictionary-based compression methods are different. They do not use a statistical model of the data, nor do they emplo...
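To make the contrast concrete, here is a short, hedged sketch of a dictionary-based encoder in the style of LZ78 (a generic illustration, not the book's own code); it emits (dictionary index, next character) tokens instead of variable-length codes.

```python
# Illustrative LZ78-style dictionary compression (sketch only).
def lz78_encode(text):
    """Encode text as (dictionary index, next character) tokens."""
    dictionary = {"": 0}      # phrase -> index
    tokens, phrase = [], ""
    for ch in text:
        if phrase + ch in dictionary:
            phrase += ch      # keep extending the current phrase
        else:
            tokens.append((dictionary[phrase], ch))
            dictionary[phrase + ch] = len(dictionary)
            phrase = ""
    if phrase:
        # The leftover phrase is already in the dictionary; emit it via its prefix.
        tokens.append((dictionary[phrase[:-1]], phrase[-1]))
    return tokens

print(lz78_encode("sir sid eastman easily"))
```

No symbol probabilities are used; repeated strings simply become references to earlier dictionary entries.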
The discipline of data compression is vast. It is based on many approaches and techniques and it borrows many tools, ideas, and concepts from diverse scientific, engineering, and mathematical fields. The following are just a few examples:
Huffman coding is a popular method for compressing data with variable-length codes. Given a set of data symbols (an alphabet) and their frequencies of occurrence (or, equivalently, their probabilities), the method constructs a set of variable-length codewords with the shortest average length and assigns them to the symbols. Huffman coding serves as...
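A compact sketch of the construction described in this abstract, using Python's heapq to repeatedly merge the two least-frequent entries; the alphabet and frequencies are made up for the example, and this is an illustration rather than the text's own implementation.

```python
# Illustrative Huffman code construction (sketch only).
import heapq

def huffman_codes(freqs):
    """Build a prefix code from {symbol: frequency}; returns {symbol: bitstring}."""
    heap = [[f, i, {s: ""}] for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)                      # tie-breaker for equal frequencies
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in lo[2].items()}
        merged.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], counter, merged])
        counter += 1
    return heap[0][2]

codes = huffman_codes({"a": 0.4, "b": 0.3, "c": 0.2, "d": 0.1})
print(codes)   # more frequent symbols receive shorter codewords
```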
The Huffman algorithm is simple, efficient, and produces the best codes for the individual data symbols. The discussion in Chapter 2, however, shows that the only case where it produces ideal variable-length codes (codes whose average size equals the entropy) is when the symbols have probabilities of occurrence that are negative powers of 2 (i.e., n...
In the Introduction, it is mentioned that the electronic digital computer was originally conceived as a fast, reliable calculating machine. It did not take computer users long to realize that a computer can also store and process nonnumeric data. The term “multimedia,” which became popular in the 1990s, refers to the ability to digitize, store, and...
Most data compression methods that are based on variable-length codes employ the Huffman or Golomb codes. However, there are a large number of less-known codes that have useful properties - such as those containing certain bit patterns, or those that are robust - and these can be useful. This book brings this large set of codes to the attention of...
The discussion in this chapter starts with codes, prefix codes, and information theory concepts. This is followed by a description of basic codes such as variable-to-block codes, phased-in codes, and the celebrated Huffman code.
We start this chapter with codes for the integers. This is followed by many types of variable-length codes that are based on diverse principles, have been developed by different approaches, and have various useful properties and features.
The many codes included in this chapter have a common feature: they are robust. Any errors that creep into a string of such codes can either be detected (or even corrected automatically) or have only limited effects and do not propagate indefinitely. The chapter starts with a discussion of the principles of error-control codes.
Sound recording and the movie camera were among the greatest inventions of Thomas Edison. They were later united when “talkies” were developed, and they are still used together in video recordings. This unification is one reason for the popularity of movies and video. With the rapid advances in computers in the 1980s and 1990s came multimedia appli...
Previous chapters discuss the main classes of compression methods: RLE, statistical methods, and dictionary-based methods. There are data compression methods that are not easy to classify and do not clearly belong in any of the classes discussed so far. A few such methods are described here.
The methods discussed so far have one common feature: they assign fixed-size codes to the symbols (characters or pixels) they operate on. In contrast, statistical methods use variable-size codes, with the shorter codes assigned to symbols or groups of symbols that appear more often in the data (have a higher probability of occurrence). Designers an...
Text does not occupy much space in the computer. An average book, consisting of a million characters, can be stored uncompressed in about 1 Mbyte, because each character of text occupies one byte (the Colophon at the end of the book illustrates this with accurate data from the book itself).
Statistical compression methods use a statistical model of the data, which is why the quality of compression they achieve depends on how good that model is. Dictionary-based compression methods do not use a statistical model, nor do they use variable-size codes. Instead they select strings of symbols and encode each string as a token using a dictio...
Data is compressed by reducing its redundancy, but this also makes the data less reliable, more prone to errors. Increasing the integrity of data, on the other hand, is done by adding check bits and parity bits, a process that increases the size of the data, thereby increasing redundancy. Data compression and data reliability are therefore opposite...
"A wonderful treasure chest of information; spanning a wide range of data compression methods, from simple test compression methods to the use of wavelets in image compression. It is unusual for a text on compression to cover the field so completely." – ACM Computing Reviews
"Salomon’s book is the most complete and up-to-date reference on the subje...
Transformations and projections are used extensively in Computer Graphics, a field which is now a part of everyone's lives via feature films, advertisements in the media, the screens of PDAs, mobile phones, and other vehicles and outlets. Transformations and Projections in Computer Graphics provides a thorough background in these two important topi...
All aspects of computer security - from the firewall for a home PC to the most daunting designs for large distributed systems - are becoming increasingly important worldwide. However, the complexities of securing computing systems can often make the topic too intimidating or onerous for people who are relative novices. Foundations of Computer Security...
B-spline methods for curves and surfaces were first proposed in the 1940s but were seriously developed only in the 1970s, by several researchers, most notably R. Riesenfeld. They have been studied extensively, have been considerably extended since the 1970s, and much is currently known about them. The designation "B" stands for Basis, so the full n...
The Bézier curve can be constructed either as a weighted sum of control points or by the process of scaffolding. These are two very different approaches that lead to the same result. A third approach to curve and surface design, employing the process of refinement (also known as subdivision or corner cutting), is the topic of this chapter. Refineme...
The surfaces described in this chapter are obtained by transforming a curve. They are not generated as interpolations or approximations of points or vectors and are consequently different from the surfaces described in previous chapters. A reader who wishes a full understanding of this chapter should be familiar with the important three-dimensio...
Real-life methods for constructing curves and surfaces often start with points and vectors, which is why we start with a short discussion of the properties of these mathematical entities. The material in this section applies to both two-dimensional and three-dimensional points and vectors, while the examples are given in two dimensions.
Definition: A polynomial of degree n in x is the function $$P_n(x) = \sum_{i=0}^{n} a_i x^i = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n,$$ where the $a_i$ are the coefficients of the polynomial (in our case, they are real numbers). Note that there are n + 1 coefficients.
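A one-line way to evaluate such a polynomial from its n + 1 coefficients is Horner's rule; the sketch below is a generic illustration under assumed coefficients, not code from the publication.

```python
# Evaluating P_n(x) = a_0 + a_1 x + ... + a_n x^n with Horner's rule (illustrative).
def horner(coeffs, x):
    """coeffs = [a_0, a_1, ..., a_n]; note there are n + 1 of them."""
    result = 0.0
    for a in reversed(coeffs):
        result = result * x + a
    return result

print(horner([1, -2, 0, 3], 2.0))   # 1 - 2*2 + 3*2^3 = 21.0
```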
Bézier methods for curves and surfaces are popular, are commonly used in practical work, and are described here in detail. Two approaches to the design of a Bézier curve are described, one using Bernstein polynomials and the other using the mediation operator. Both rectangular and triangular Bézier surface patches are discussed, with examples.
The curve and surface methods of the preceding chapters are based on points. Using polynomials, it is easy to construct a parametric curve segment (or surface patch) that passes through a given one-dimensional array or two-dimensional grid of points.
Given a set of points, it is easy to compute a polynomial that passes through the points. The LP of Section 3.2 is an example of such a polynomial. However, as the discussion in Section 1.5 (especially exercise 1.20) illustrates, a curve based on a high-degree polynomial may wiggle wildly and its shape may be far from what the user has in mind. I...