Shipping: unavailable
List price: 329.00 PLN gross
Price available after logging in

Description: Computer Graphics - Kurt Akeley, Steven K. Feiner, David F. Sklar

Computer Graphics: Principles and Practice, Third Edition, remains the most authoritative introduction to the field. The first edition, the original "Foley and van Dam," helped to define computer graphics and how it could be taught. The second edition became an even more comprehensive resource for practitioners and students alike. This third edition has been completely rewritten to provide detailed and up-to-date coverage of key concepts, algorithms, technologies, and applications. The authors explain the principles, as well as the mathematics, underlying computer graphics, knowledge that is essential for successful work both now and in the future. Early chapters show how to create 2D and 3D pictures right away, supporting experimentation. Later chapters, covering a broad range of topics, demonstrate more sophisticated approaches. Sections on current computer graphics practice show how to apply given principles in common situations, such as how to approximate an ideal solution on available hardware, or how to represent a data structure more efficiently. Topics are reinforced by exercises, programming problems, and hands-on projects.

This revised edition features:
* New coverage of the rendering equation, GPU architecture considerations, and importance sampling in physically based rendering
* An emphasis on modern approaches, as in a new chapter on probability theory for use in Monte Carlo rendering
* Implementations of GPU shaders, software rendering, and graphics-intensive 3D interfaces
* 3D real-time graphics platforms, their design goals and trade-offs, including new mobile and browser platforms
* Programming and debugging approaches unique to graphics development

The text and hundreds of figures are presented in full color throughout the book. Programs are written in C++, C#, WPF, or pseudocode, whichever language is most effective for a given example. Source code and figures from the book, testbed programs, and additional content will be available from the authors' website (cgpp.net) or the publisher's website (informit.com/title/9780321399526). Instructor resources will be available from the publisher. The wealth of information in this book makes it the essential resource for anyone working in or studying any aspect of computer graphics.

Contents:

Preface
About the Authors

Chapter 1: Introduction
Graphics is a broad field; to understand it, you need information from perception, physics, mathematics, and engineering. Building a graphics application entails user-interface work, some amount of modeling (i.e., making a representation of a shape), and rendering (the making of pictures of shapes). Rendering is often done via a "pipeline" of operations; one can use this pipeline without understanding every detail to make many useful programs. But if we want to render things accurately, we need to start from a physical understanding of light. Knowing just a few properties of light prepares us to make a first approximate renderer.
1.1 An Introduction to Computer Graphics; 1.2 A Brief History; 1.3 An Illuminating Example; 1.4 Goals, Resources, and Appropriate Abstractions; 1.5 Some Numbers and Orders of Magnitude in Graphics; 1.6 The Graphics Pipeline; 1.7 Relationship of Graphics to Art, Design, and Perception; 1.8 Basic Graphics Systems; 1.9 Polygon Drawing As a Black Box; 1.10 Interaction in Graphics Systems; 1.11 Different Kinds of Graphics Applications; 1.12 Different Kinds of Graphics Packages; 1.13 Building Blocks for Realistic Rendering: A Brief Overview; 1.14 Learning Computer Graphics
Chapter 2: Introduction to 2D Graphics Using WPF
A graphics platform acts as the intermediary between the application and the underlying graphics hardware, providing a layer of abstraction to shield the programmer from the details of driving the graphics processor. As CPUs and graphics peripherals have increased in speed and memory capabilities, the feature sets of graphics platforms have evolved to harness new hardware features and to shoulder more of the application development burden. After a brief overview of the evolution of 2D platforms, we explore a modern package (Windows Presentation Foundation), showing how to construct an animated 2D scene by creating and manipulating a simple hierarchical model. WPF's declarative XML-based syntax, and the basic techniques of scene specification, will carry over to the presentation of WPF's 3D support in Chapter 6.
2.1 Introduction; 2.2 Overview of the 2D Graphics Pipeline; 2.3 The Evolution of 2D Graphics Platforms; 2.4 Specifying a 2D Scene Using WPF; 2.5 Dynamics in 2D Graphics Using WPF; 2.6 Supporting a Variety of Form Factors; 2.7 Discussion and Further Reading

Chapter 3: An Ancient Renderer Made Modern
We describe a software implementation of an idea shown by Dürer. Doing so lets us create a perspective rendering of a cube, and introduces the notions of transforming meshes by transforming vertices, clipping, and multiple coordinate systems. We also encounter the need for visible surface determination and for lighting computations.
3.1 A Dürer Woodcut; 3.2 Visibility; 3.3 Implementation; 3.4 The Program; 3.5 Limitations; 3.6 Discussion and Further Reading; 3.7 Exercises

Chapter 4: A 2D Graphics Test Bed
We want you to rapidly test new ideas as you learn them. For most ideas in graphics, even 3D graphics, a simple 2D program suffices. We describe a test bed, a simple program that's easy to modify to experiment with new ideas, and show how it can be used to study corner cutting on polygons. A similar 3D program is available on the book's website.
4.1 Introduction; 4.2 Details of the Test Bed; 4.3 The C# Code; 4.4 Animation; 4.5 Interaction; 4.6 An Application of the Test Bed; 4.7 Discussion; 4.8 Exercises
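Chapter 4 uses the test bed to study corner cutting on polygons. As a rough illustration of the idea, here is a minimal sketch of one round of Chaikin-style corner cutting on a closed polygon (the Point2 struct and cutCorners function are hypothetical, not the book's C# test-bed code):

```cpp
#include <vector>

struct Point2 { double x, y; };

// One round of Chaikin corner cutting on a closed polygon: each edge (p, q)
// contributes the two points 3/4 p + 1/4 q and 1/4 p + 3/4 q.
std::vector<Point2> cutCorners(const std::vector<Point2>& poly) {
    std::vector<Point2> result;
    const size_t n = poly.size();
    for (size_t i = 0; i < n; ++i) {
        const Point2& p = poly[i];
        const Point2& q = poly[(i + 1) % n];   // next vertex, wrapping around
        result.push_back({0.75 * p.x + 0.25 * q.x, 0.75 * p.y + 0.25 * q.y});
        result.push_back({0.25 * p.x + 0.75 * q.x, 0.25 * p.y + 0.75 * q.y});
    }
    return result;
}
```

Applying cutCorners repeatedly rounds off the corners and converges toward a smooth subdivision curve, the connection to splines and subdivision that Chapter 22 picks up later.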
Chapter 5: An Introduction to Human Visual Perception
The human visual system is the ultimate "consumer" of most imagery produced by graphics. As such, it provides design constraints and goals for graphics systems. We introduce the visual system and some of its characteristics, and relate them to engineering decisions in graphics. The visual system is both tolerant of bad data (which is why the visual system can make sense of a child's stick-figure drawing), and at the same time remarkably sensitive. Understanding both aspects helps us better design graphics algorithms and systems. We discuss basic visual processing, constancy, and continuation, and how different kinds of visual cues help our brains form hypotheses about the world. We discuss primarily static perception of shape, leaving discussion of the perception of motion to Chapter 35, and of the perception of color to Chapter 28.
5.1 Introduction; 5.2 The Visual System; 5.3 The Eye; 5.4 Constancy and Its Influences; 5.5 Continuation; 5.6 Shadows; 5.7 Discussion and Further Reading; 5.8 Exercises

Chapter 6: Introduction to Fixed-Function 3D Graphics and Hierarchical Modeling
The process of constructing a 3D scene to be rendered using the classic fixed-function graphics pipeline is composed of distinct steps such as specifying the geometry of components, applying surface materials to components, combining components to form complex objects, and placing lights and cameras. WPF provides an environment suitable for learning about and experimenting with this classic pipeline. We first present the essentials of 3D scene construction, and then further extend the discussion to introduce hierarchical modeling.
6.1 Introduction; 6.2 Introducing Mesh and Lighting Specification; 6.3 Curved-Surface Representation and Rendering; 6.4 Surface Texture in WPF; 6.5 The WPF Reflectance Model; 6.6 Hierarchical Modeling Using a Scene Graph; 6.7 Discussion

Chapter 7: Essential Mathematics and the Geometry of 2-Space and 3-Space
We review basic facts about equations of lines and planes, areas, convexity, and parameterization. We discuss inside-outside testing for points in polygons. We describe barycentric coordinates, and present the notational conventions that are used throughout the book, including the notation for functions. We present a graphics-centric view of vectors, and introduce the notion of covectors.
7.1 Introduction; 7.2 Notation; 7.3 Sets; 7.4 Functions; 7.5 Coordinates; 7.6 Operations on Coordinates; 7.7 Intersections of Lines; 7.8 Intersections, More Generally; 7.9 Triangles; 7.10 Polygons; 7.11 Discussion; 7.12 Exercises

Chapter 8: A Simple Way to Describe Shape in 2D and 3D
The triangle mesh is a fundamental structure in graphics, widely used for representing shape. We describe 1D meshes (polylines) in 2D and generalize to 2D meshes in 3D. We discuss several representations for triangle meshes, simple operations on meshes such as computing the boundary, and determining whether a mesh is oriented.
8.1 Introduction; 8.2 "Meshes" in 2D: Polylines; 8.3 Meshes in 3D; 8.4 Discussion and Further Reading; 8.5 Exercises
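Chapter 8 mentions computing the boundary of a triangle mesh. Assuming a simple indexed-triangle representation (the Mesh struct and boundaryEdges helper below are hypothetical, not the book's code), the boundary can be found by counting how many triangles share each edge:

```cpp
#include <array>
#include <map>
#include <utility>
#include <vector>

// An indexed triangle mesh: vertex positions plus triples of vertex indices.
struct Mesh {
    std::vector<std::array<double, 3>> positions;
    std::vector<std::array<int, 3>> triangles;
};

// Boundary edges are those used by exactly one triangle.
std::vector<std::pair<int, int>> boundaryEdges(const Mesh& m) {
    std::map<std::pair<int, int>, int> edgeCount;   // undirected edge -> #uses
    for (const auto& t : m.triangles) {
        for (int i = 0; i < 3; ++i) {
            int a = t[i], b = t[(i + 1) % 3];
            if (a > b) std::swap(a, b);             // canonical vertex order
            ++edgeCount[{a, b}];
        }
    }
    std::vector<std::pair<int, int>> boundary;
    for (const auto& [edge, count] : edgeCount)
        if (count == 1) boundary.push_back(edge);
    return boundary;
}
```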
Chapter 9: Functions on Meshes
A real-valued function defined at the vertices of a mesh can be extended linearly across each face by barycentric interpolation to define a function on the entire mesh. Such extensions are used in texture mapping, for instance. By considering what happens when a single vertex value is 1, and all others are 0, we see that all our piecewise-linear extensions are combinations of certain basic piecewise-linear mesh functions; replacing these basis functions with other, smoother functions can lead to smoother interpolation of values.
9.1 Introduction; 9.2 Code for Barycentric Interpolation; 9.3 Limitations of Piecewise Linear Extension; 9.4 Smoother Extensions; 9.5 Functions Multiply Defined at Vertices; 9.6 Application: Texture Mapping; 9.7 Discussion; 9.8 Exercises

Chapter 10: Transformations in Two Dimensions
Linear and affine transformations are the building blocks of graphics. They occur in modeling, in rendering, in animation, and in just about every other context imaginable. They are the natural tools for transforming objects represented as meshes, because they preserve the mesh structure perfectly. We introduce linear and affine transformations in the plane, because most of the interesting phenomena are present there, the exception being the behavior of rotations in three dimensions, which we discuss in Chapter 11. We also discuss the relationship of transformations to matrices, the use of homogeneous coordinates, the uses of hierarchies of transformations in modeling, and the idea of coordinate "frames."
10.1 Introduction; 10.2 Five Examples; 10.3 Important Facts about Transformations; 10.4 Translation; 10.5 Points and Vectors Again; 10.6 Why Use 3 × 3 Matrices Instead of a Matrix and a Vector?; 10.7 Windowing Transformations; 10.8 Building 3D Transformations; 10.9 Another Example of Building a 2D Transformation; 10.10 Coordinate Frames; 10.11 Application: Rendering from a Scene Graph; 10.12 Transforming Vectors and Covectors; 10.13 More General Transformations; 10.14 Transformations versus Interpolation; 10.15 Discussion and Further Reading; 10.16 Exercises

Chapter 11: Transformations in Three Dimensions
Transformations in 3-space are analogous to those in the plane, except for rotations: In the plane, we can swap the order in which we perform two rotations about the origin without altering the result; in 3-space, we generally cannot. We discuss the group of rotations in 3-space, the use of quaternions to represent rotations, interpolating between quaternions, and a more general technique for interpolating among any sequence of transformations, provided they are "close enough" to one another. Some of these techniques are applied to user-interface designs in Chapter 21.
11.1 Introduction; 11.2 Rotations; 11.3 Comparing Representations; 11.4 Rotations versus Rotation Specifications; 11.5 Interpolating Matrix Transformations; 11.6 Virtual Trackball and Arcball; 11.7 Discussion and Further Reading; 11.8 Exercises
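To make Chapter 11's use of quaternions concrete, here is a minimal sketch (hypothetical helpers, not the book's library) of building a unit quaternion from an axis and angle and using it to rotate a vector; composing rotations is quaternion multiplication, and because that product is not commutative, the order of 3D rotations matters, as the chapter notes:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };
struct Quat { double w, x, y, z; };            // w + xi + yj + zk, assumed unit length

// Unit quaternion encoding a rotation by 'angle' radians about a unit-length axis.
Quat fromAxisAngle(const Vec3& axis, double angle) {
    double s = std::sin(angle / 2), c = std::cos(angle / 2);
    return {c, s * axis.x, s * axis.y, s * axis.z};
}

Quat multiply(const Quat& a, const Quat& b) {  // Hamilton product
    return {a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z,
            a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
            a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
            a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w};
}

// Rotate v by q: treat v as the pure quaternion (0, v) and compute q v q^-1.
Vec3 rotate(const Quat& q, const Vec3& v) {
    Quat conj = {q.w, -q.x, -q.y, -q.z};       // inverse of a unit quaternion
    Quat r = multiply(multiply(q, {0, v.x, v.y, v.z}), conj);
    return {r.x, r.y, r.z};
}
```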
Chapter 12: A 2D and 3D Transformation Library for Graphics
Because we represent so many things in graphics with arrays of three floating-point numbers (RGB colors, locations in 3-space, vectors in 3-space, covectors in 3-space, etc.), it's very easy to make conceptual mistakes in code, performing operations (like adding the coordinates of two points) that don't make sense. We present a sample mathematics library that you can use to avoid such problems. While such a library may have no place in high-performance graphics, where the overhead of type checking would be unreasonable, it can be very useful in the development of programs in their early stages.
12.1 Introduction; 12.2 Points and Vectors; 12.3 Transformations; 12.4 Specification of Transformations; 12.5 Implementation; 12.6 Three Dimensions; 12.7 Associated Transformations; 12.8 Other Structures; 12.9 Other Approaches; 12.10 Discussion; 12.11 Exercises

Chapter 13: Camera Specifications and Transformations
To convert a model of a 3D scene to a 2D image seen from a particular point of view, we have to specify the view precisely. The rendering process turns out to be particularly simple if the camera is at the origin, looking along a coordinate axis, and if the field of view is 90 degrees in each direction. We therefore transform the general problem to the more specific one. We discuss how the virtual camera is specified, and how we transform any rendering problem to one in which the camera is in a standard position with standard characteristics. We also discuss the specification of parallel (as opposed to perspective) views.
13.1 Introduction; 13.2 A 2D Example; 13.3 Perspective Camera Specification; 13.4 Building Transformations from a View Specification; 13.5 Camera Transformations and the Rasterizing Renderer Pipeline; 13.6 Perspective and z-values; 13.7 Camera Transformations and the Modeling Hierarchy; 13.8 Orthographic Cameras; 13.9 Discussion and Further Reading; 13.10 Exercises

Chapter 14: Standard Approximations and Representations
The real world contains too much detail to simulate efficiently from first principles of physics and geometry. Models make graphics computationally tractable but introduce restrictions and errors. We explore some pervasive approximations and their limitations. In many cases, we have a choice between competing models with different properties.
14.1 Introduction; 14.2 Evaluating Representations; 14.3 Real Numbers; 14.4 Building Blocks of Ray Optics; 14.5 Large-Scale Object Geometry; 14.6 Distant Objects; 14.7 Volumetric Models; 14.8 Scene Graphs; 14.9 Material Models; 14.10 Translucency and Blending; 14.11 Luminaire Models; 14.12 Discussion; 14.13 Exercises

Chapter 15: Ray Casting and Rasterization
A 3D renderer identifies the surface that covers each pixel of an image, and then executes some shading routine to compute the value of the pixel. We introduce a set of coverage algorithms and some straw-man shading routines, and revisit the graphics pipeline abstraction. These are practical design points arising from general principles of geometry and processor architectures. For coverage, we derive the ray-casting and rasterization algorithms and then build the complete source code for a renderer on top of it. This requires graphics-specific debugging techniques such as visualizing intermediate results. Architecture-aware optimizations dramatically increase the performance of these programs, albeit by limiting abstraction. Alternatively, we can move abstractions above the pipeline to enable dedicated graphics hardware. APIs abstracting graphics processing units (GPUs) enable efficient rasterization implementations. We port our renderer to the programmable shading framework common to such APIs.
15.1 Introduction; 15.2 High-Level Design Overview; 15.3 Implementation Platform; 15.4 A Ray-Casting Renderer; 15.5 Intermezzo; 15.6 Rasterization; 15.7 Rendering with a Rasterization API; 15.8 Performance and Optimization; 15.9 Discussion; 15.10 Exercises
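Chapter 15's ray-casting coverage test asks, for the ray through each pixel, which surface it hits first. A minimal sketch of the geometric core for one primitive type, a sphere (hypothetical helper, not the book's renderer code):

```cpp
#include <cmath>
#include <optional>

struct Vec3 { double x, y, z; };
inline Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
inline double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Smallest non-negative t with |origin + t*dir - center| = radius, if any.
// Solves the quadratic (d.d) t^2 + 2 (d.oc) t + (oc.oc - r^2) = 0.
std::optional<double> intersectSphere(Vec3 origin, Vec3 dir,
                                      Vec3 center, double radius) {
    Vec3 oc = origin - center;
    double a = dot(dir, dir);
    double b = 2.0 * dot(dir, oc);
    double c = dot(oc, oc) - radius * radius;
    double disc = b * b - 4 * a * c;
    if (disc < 0) return std::nullopt;          // ray misses the sphere
    double sqrtDisc = std::sqrt(disc);
    double t = (-b - sqrtDisc) / (2 * a);       // nearer root first
    if (t < 0) t = (-b + sqrtDisc) / (2 * a);   // origin may be inside the sphere
    if (t < 0) return std::nullopt;             // sphere is entirely behind the ray
    return t;
}
```

A ray caster loops over pixels, builds the ray through each one, runs such a test against every primitive, and shades the closest hit.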
Chapter 16: Survey of Real-Time 3D Graphics Platforms
There is great diversity in the feature sets and design goals among 3D graphics platforms. Some are thin layers that bring the application as close to the hardware as possible for optimum performance and control; others provide a thick layer of data structures for the storage and manipulation of complex scenes; and at the top of the power scale are the game-development environments that additionally provide advanced features like physics and joint/skin simulation. Platforms supporting games render with the highest possible speed to ensure interactivity, while those used by the special effects industry sacrifice speed for the utmost in image quality. We present a broad overview of modern 3D platforms with an emphasis on the design goals behind the variations.
16.1 Introduction; 16.2 The Programmer's Model: OpenGL Compatibility (Fixed-Function) Profile; 16.3 The Programmer's Model: OpenGL Programmable Pipeline; 16.4 Architectures of Graphics Applications; 16.5 3D on Other Platforms; 16.6 Discussion

Chapter 17: Image Representation and Manipulation
Much of graphics produces images as output. We describe how images are stored, what information they can contain, and what they can represent, along with the importance of knowing the precise meaning of the pixels in an image file. We show how to composite images (i.e., blend, overlay, and otherwise merge them) using coverage maps, and how to simply represent images at multiple scales with MIP mapping.
17.1 Introduction; 17.2 What Is an Image?; 17.3 Image File Formats; 17.4 Image Compositing; 17.5 Other Image Types; 17.6 MIP Maps; 17.7 Discussion and Further Reading; 17.8 Exercises

Chapter 18: Images and Signal Processing
The pattern of light arriving at a camera sensor can be thought of as a function defined on a 2D rectangle, the value at each point being the light energy density arriving there. The resultant image is an array of values, each one arrived at by some sort of averaging of the input function. The relationship between these two functions (one defined on a continuous 2D rectangle, the other defined on a rectangular grid of points) is a deep one. We study the relationship with the tools of Fourier analysis, which lets us understand what parts of the incoming signal can be accurately captured by the discrete signal. This understanding helps us avoid a wide range of image problems, including "jaggies" (ragged edges). It's also the basis for understanding other phenomena in graphics, such as moiré patterns in textures.
18.1 Introduction; 18.2 Historical Motivation; 18.3 Convolution; 18.4 Properties of Convolution; 18.5 Convolution-like Computations; 18.6 Reconstruction; 18.7 Function Classes; 18.8 Sampling; 18.9 Mathematical Considerations; 18.10 The Fourier Transform: Definitions; 18.11 The Fourier Transform of a Function on an Interval; 18.12 Generalizations to Larger Intervals and All of R; 18.13 Examples of Fourier Transforms; 18.14 An Approximation of Sampling; 18.15 Examples Involving Limits; 18.16 The Inverse Fourier Transform; 18.17 Properties of the Fourier Transform; 18.18 Applications; 18.19 Reconstruction and Band Limiting; 18.20 Aliasing Revisited; 18.21 Discussion and Further Reading; 18.22 Exercises
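Chapter 18's central tool is convolution. A minimal sketch of discrete 1D convolution on a row of samples, with out-of-range samples treated as zero (a hypothetical helper, not the book's code):

```cpp
#include <vector>

// Discrete 1D convolution: out[i] = sum_j kernel[j] * signal[i - (j - center)],
// where 'center' is the index of the kernel tap that sits over sample i.
// Samples outside the signal are treated as zero.
std::vector<double> convolve(const std::vector<double>& signal,
                             const std::vector<double>& kernel, int center) {
    std::vector<double> out(signal.size(), 0.0);
    for (int i = 0; i < (int)signal.size(); ++i) {
        for (int j = 0; j < (int)kernel.size(); ++j) {
            int src = i - (j - center);              // signal sample weighted by tap j
            if (src >= 0 && src < (int)signal.size())
                out[i] += kernel[j] * signal[src];
        }
    }
    return out;
}
// Example: the box kernel {1/3, 1/3, 1/3} with center = 1 averages each sample
// with its neighbors, a crude low-pass filter applied before shrinking an image row.
```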
Chapter 19: Enlarging and Shrinking Images
We apply the ideas of the previous two chapters to a concrete example, enlarging and shrinking of images, to illustrate their use in practice. We see that when an image, conventionally represented, is shrunk, problems will arise unless certain high-frequency information is removed before the shrinking process.
19.1 Introduction; 19.2 Enlarging an Image; 19.3 Scaling Down an Image; 19.4 Making the Algorithms Practical; 19.5 Finite-Support Approximations; 19.6 Other Image Operations and Efficiency; 19.7 Discussion and Further Reading; 19.8 Exercises

Chapter 20: Textures and Texture Mapping
Texturing, and its variants, add visual richness to models without introducing geometric complexity. We discuss basic texturing and its implementation in software, and some of its variants, like bump mapping and displacement mapping, and the use of 1D and 3D textures. We also discuss the creation of texture correspondences (assigning texture coordinates to points on a mesh) and of the texture images themselves, through techniques as varied as "painting the model" and probabilistic texture synthesis algorithms.
20.1 Introduction; 20.2 Variations of Texturing; 20.3 Building Tangent Vectors from a Parameterization; 20.4 Codomains for Texture Maps; 20.5 Assigning Texture Coordinates; 20.6 Application Examples; 20.7 Sampling, Aliasing, Filtering, and Reconstruction; 20.8 Texture Synthesis; 20.9 Data-Driven Texture Synthesis; 20.10 Discussion and Further Reading; 20.11 Exercises

Chapter 21: Interaction Techniques
Certain interaction techniques use a substantial amount of the mathematics of transformations, and therefore are more suitable for a book like ours than one that concentrates on the design of the interaction itself, and the human factors associated with that design. We illustrate these ideas with three 3D manipulators (the arcball, trackball, and Unicam) and with a multitouch interface for manipulating images.
21.1 Introduction; 21.2 User Interfaces and Computer Graphics; 21.3 Multitouch Interaction for 2D Manipulation; 21.4 Mouse-Based Object Manipulation in 3D; 21.5 Mouse-Based Camera Manipulation: Unicam; 21.6 Choosing the Best Interface; 21.7 Some Interface Examples; 21.8 Discussion and Further Reading; 21.9 Exercises

Chapter 22: Splines and Subdivision Curves
Splines are, informally, curves that pass through or near a sequence of "control points." They're used to describe shapes, and to control the motion of objects in animations, among other things. Splines make sense not only in the plane, but also in 3-space and in 1-space, where they provide a means of interpolating a sequence of values with various degrees of continuity. Splines, as a modeling tool in graphics, have been in part supplanted by subdivision curves (which we saw in the form of corner-cutting curves in Chapter 4) and subdivision surfaces. The two classes, splines and subdivision, are closely related. We demonstrate this for curves in this chapter; a similar approach works for surfaces.
22.1 Introduction; 22.2 Basic Polynomial Curves; 22.3 Fitting a Curve Segment between Two Curves: The Hermite Curve; 22.4 Gluing Together Curves and the Catmull-Rom Spline; 22.5 Cubic B-splines; 22.6 Subdivision Curves; 22.7 Discussion and Further Reading; 22.8 Exercises
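Chapter 22 discusses the Catmull-Rom spline, which interpolates a sequence of control points, with each curve segment determined by four consecutive points. A minimal sketch of evaluating one segment with the standard cubic weights (a hypothetical helper, not the book's code):

```cpp
struct Point2 { double x, y; };

// Point on the Catmull-Rom segment between p1 and p2 for t in [0, 1]:
//   0.5 * ( 2*p1 + (p2 - p0)*t + (2*p0 - 5*p1 + 4*p2 - p3)*t^2
//           + (-p0 + 3*p1 - 3*p2 + p3)*t^3 )
// At t = 0 the result is p1; at t = 1 it is p2.
Point2 catmullRom(Point2 p0, Point2 p1, Point2 p2, Point2 p3, double t) {
    auto blend = [&](double a0, double a1, double a2, double a3) {
        double t2 = t * t, t3 = t2 * t;
        return 0.5 * (2 * a1 + (a2 - a0) * t
                      + (2 * a0 - 5 * a1 + 4 * a2 - a3) * t2
                      + (-a0 + 3 * a1 - 3 * a2 + a3) * t3);
    };
    return {blend(p0.x, p1.x, p2.x, p3.x), blend(p0.y, p1.y, p2.y, p3.y)};
}
```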
Chapter 23: Splines and Subdivision Surfaces
Spline surfaces and subdivision surfaces are natural generalizations of spline and subdivision curves. Surfaces are built from rectangular patches, and when these meet four at a vertex, the generalization is reasonably straightforward. At vertices where the degree is not four, certain challenges arise, and dealing with these "exceptional vertices" requires care. Just as in the case of curves, subdivision surfaces, away from exceptional vertices, turn out to be identical to spline surfaces. We discuss spline patches, Catmull-Clark subdivision, other subdivision approaches, and the problems of exceptional points.
23.1 Introduction; 23.2 Bézier Patches; 23.3 Catmull-Clark Subdivision Surfaces; 23.4 Modeling with Subdivision Surfaces; 23.5 Discussion and Further Reading

Chapter 24: Implicit Representations of Shape
Implicit curves are defined as the level set of some function on the plane; on a weather map, the isotherm lines constitute implicit curves. By choosing particular functions, we can make the shapes of these curves controllable. The same idea applies in space to define implicit surfaces. In each case, it's not too difficult to convert an implicit representation to a mesh representation that approximates the surface. But the implicit representation itself has many advantages. Finding a ray-shape intersection with an implicit surface reduces to root finding, for instance, and it's easy to combine implicit shapes with operators that result in new shapes without sharp corners.
24.1 Introduction; 24.2 Implicit Curves; 24.3 Implicit Surfaces; 24.4 Representing Implicit Functions; 24.5 Other Representations of Implicit Functions; 24.6 Conversion to Polyhedral Meshes; 24.7 Conversion from Polyhedral Meshes to Implicits; 24.8 Texturing Implicit Models; 24.9 Ray Tracing Implicit Surfaces; 24.10 Implicit Shapes in Animation; 24.11 Discussion and Further Reading; 24.12 Exercises

Chapter 25: Meshes
Meshes are a dominant structure in today's graphics. They serve as approximations to smooth curves and surfaces, and much mathematics from the smooth category can be transferred to work with meshes. Certain special classes of meshes (height field meshes and very regular meshes) support fast algorithms particularly well. We discuss level of detail in the context of meshes, where practical algorithms abound, but also in a larger context. We conclude with some applications.
25.1 Introduction; 25.2 Mesh Topology; 25.3 Mesh Geometry; 25.4 Level of Detail; 25.5 Mesh Applications 1: Marching Cubes, Mesh Repair, and Mesh Improvement; 25.6 Mesh Applications 2: Deformation Transfer and Triangle-Order Optimization; 25.7 Discussion and Further Reading; 25.8 Exercises
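Chapter 24 above notes that finding a ray-shape intersection with an implicit surface reduces to root finding. Assuming the ray is known to bracket the surface (the implicit function changes sign over the interval), a minimal bisection sketch (hypothetical helper, not the book's code):

```cpp
#include <functional>

struct Vec3 { double x, y, z; };

// Bisection search for t in [t0, t1] with F(origin + t*dir) = 0, assuming F
// changes sign between t0 and t1 (i.e., the ray is known to cross the level set).
double intersectImplicit(const std::function<double(Vec3)>& F,
                         Vec3 origin, Vec3 dir, double t0, double t1) {
    auto at = [&](double t) {
        return Vec3{origin.x + t * dir.x, origin.y + t * dir.y, origin.z + t * dir.z};
    };
    double f0 = F(at(t0));
    for (int i = 0; i < 60; ++i) {                       // 60 halvings: ample precision
        double tm = 0.5 * (t0 + t1);
        double fm = F(at(tm));
        if ((f0 < 0) == (fm < 0)) { t0 = tm; f0 = fm; }  // root lies in [tm, t1]
        else                      { t1 = tm; }           // root lies in [t0, tm]
    }
    return 0.5 * (t0 + t1);
}
// Example implicit function: F(p) = p.x*p.x + p.y*p.y + p.z*p.z - 1 defines the unit sphere.
```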
Chapter 26: Light
We discuss the basic physics of light, starting from blackbody radiation, and the relevance of this physics to computer graphics. In particular, we discuss both the wave and particle descriptions of light, polarization effects, and diffraction. We then discuss the measurement of light, including the various units of measure, and the continuum assumption implicit in these measurements. We focus on the radiance, from which all other radiometric terms can be derived through integration, and which is constant along rays in empty space. Because of the dependence on integration, we discuss solid angles and integration over these. Because the radiance field in most scenes is too complex to express in simple algebraic terms, integrals of radiance are almost always computed stochastically, and so we introduce stochastic integration. Finally, we discuss reflectance and transmission, their measurement, and the challenges of computing integrals in which the integrands have substantial variation (like the specular and nonspecular parts of the reflection from a glossy surface).
26.1 Introduction; 26.2 The Physics of Light; 26.3 The Microscopic View; 26.4 The Wave Nature of Light; 26.5 Fresnel's Law and Polarization; 26.6 Modeling Light as a Continuous Flow; 26.7 Measuring Light; 26.8 Other Measurements; 26.9 The Derivative Approach; 26.10 Reflectance; 26.11 Discussion and Further Reading; 26.12 Exercises

Chapter 27: Materials and Scattering
The appearance of an object made of some material is determined by the interaction of that material with the light in the scene. The interaction (for fairly homogeneous materials) is described by the reflection and transmission distribution functions, at least for at-the-surface scattering. We present several different models for these, ranging from the purely empirical to those incorporating various degrees of physical realism, and observe their limitations as well. We briefly discuss scattering from volumetric media like smoke and fog, and the kind of subsurface scattering that takes place in media like skin and milk. Anticipating our use of these material models in rendering, we also discuss the software interface a material model must support to be used effectively.
27.1 Introduction; 27.2 Object-Level Scattering; 27.3 Surface Scattering; 27.4 Kinds of Scattering; 27.5 Empirical and Phenomenological Models for Scattering; 27.6 Measured Models; 27.7 Physical Models for Specular and Diffuse Reflection; 27.8 Physically Based Scattering Models; 27.9 Representation Choices; 27.10 Criteria for Evaluation; 27.11 Variations across Surfaces; 27.12 Suitability for Human Use; 27.13 More Complex Scattering; 27.14 Software Interface to Material Models; 27.15 Discussion and Further Reading; 27.16 Exercises
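Chapter 27 ends by discussing the software interface a material model must support. One plausible shape for such an interface, sketched here as a hypothetical C++ abstract class (the book defines its own), pairs BRDF evaluation with importance sampling of directions for later Monte Carlo use:

```cpp
#include <random>

struct Vec3 { double x, y, z; };
struct Spectrum { double r, g, b; };            // RGB stand-in for a spectral value

// A hypothetical material-model interface: evaluate the BRDF for a given pair
// of directions, and importance-sample an outgoing direction.
class BRDF {
public:
    virtual ~BRDF() = default;

    // Ratio of radiance reflected toward wOut to irradiance arriving from wIn,
    // at a surface point with unit normal n.
    virtual Spectrum evaluate(const Vec3& wIn, const Vec3& wOut,
                              const Vec3& n) const = 0;

    // Draw an outgoing direction, ideally with probability roughly proportional
    // to the BRDF, and report the probability density used, for Monte Carlo estimators.
    virtual Vec3 sample(const Vec3& wIn, const Vec3& n,
                        std::mt19937& rng, double& pdf) const = 0;
};
```

The sampling method matters because renderers in later chapters weight samples by 1/pdf, so a material that can sample roughly in proportion to its own reflectance keeps variance low.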
Chapter 28: Color
While color appears to be a physical property (that book is blue, that sun is yellow), it is, in fact, a perceptual phenomenon, one that's closely related to the spectral distribution of light, but by no means completely determined by it. We describe the perception of color and its relationship to the physiology of the eye. We introduce various systems for naming, representing, and selecting colors. We also discuss the perception of brightness, which is nonlinear as a function of light energy, and the consequences of this for the efficient representation of varying brightness levels, leading to the notion of gamma, an exponent used in compressing brightness data. We also discuss the gamuts (range of colors) of various devices, and the problems of color interpolation.
28.1 Introduction; 28.2 Spectral Distribution of Light; 28.3 The Phenomenon of Color Perception and the Physiology of the Eye; 28.4 The Perception of Color; 28.5 Color Description; 28.6 Conventional Color Wisdom; 28.7 Color Perception Strengths and Weaknesses; 28.8 Standard Description of Colors; 28.9 Perceptual Color Spaces; 28.10 Intermezzo; 28.11 White; 28.12 Encoding of Intensity, Exponents, and Gamma Correction; 28.13 Describing Color; 28.14 CMY and CMYK Color; 28.15 The YIQ Color Model; 28.16 Video Standards; 28.17 HSV and HLS; 28.18 Interpolating Color; 28.19 Using Color in Computer Graphics; 28.20 Discussion and Further Reading; 28.21 Exercises

Chapter 29: Light Transport
Using the formal descriptions of radiance and scattering, we derive the rendering equation, an integral equation characterizing the radiance field, given a description of the illumination, geometry, and materials in the scene.
29.1 Introduction; 29.2 Light Transport; 29.3 A Peek Ahead; 29.4 The Rendering Equation for General Scattering; 29.5 Scattering, Revisited; 29.6 A Worked Example; 29.7 Solving the Rendering Equation; 29.8 The Classification of Light-Transport Paths; 29.9 Discussion; 29.10 Exercise

Chapter 30: Probability and Monte Carlo Integration
Probabilistic methods are at the heart of modern rendering techniques, especially methods for estimating integrals, because solving the rendering equation involves computing an integral that's impossible to evaluate exactly in any but the simplest scenes. We review basic discrete probability, generalize to continuum probability, and use this to derive the single-sample estimate for an integral and the importance-weighted single-sample estimate, which we'll use in the next two chapters.
30.1 Introduction; 30.2 Numerical Integration; 30.3 Random Variables and Randomized Algorithms; 30.4 Continuum Probability, Continued; 30.5 Importance Sampling and Integration; 30.6 Mixed Probabilities; 30.7 Discussion and Further Reading; 30.8 Exercises
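Chapter 30 derives the importance-weighted single-sample estimate. In code, the estimator averages f(X)/p(X) for samples X drawn with density p; the sketch below (hypothetical helpers, not the book's code) shows the n-sample version, with n = 1 giving the single-sample estimate:

```cpp
#include <functional>

// Importance-weighted Monte Carlo estimate of an integral: average f(X)/p(X)
// over samples X drawn with probability density p. It is unbiased as long as
// p(x) > 0 wherever f(x) != 0. With n = 1 this is the single-sample estimate.
double estimateIntegral(const std::function<double(double)>& f,
                        const std::function<double(double)>& p,     // density of the sampler
                        const std::function<double()>& drawSample,  // draws X ~ p
                        int n) {
    double sum = 0.0;
    for (int i = 0; i < n; ++i) {
        double x = drawSample();
        sum += f(x) / p(x);
    }
    return sum / n;
}
// With uniform samples on [a, b], p(x) = 1 / (b - a), and the estimator reduces
// to (b - a) times the average of the sampled f values.
```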
Chapter 31: Computing Solutions to the Rendering Equation: Theoretical Approaches
The rendering equation can be approximately solved by many methods, including ray tracing (an approximation to the series solution), radiosity (an approximation arising from a finite-element approach), Metropolis light transport, and photon mapping, not to mention basic polygonal renderers using direct-lighting-plus-ambient approximations. Each method has strengths and weaknesses that can be analyzed by considering the nature of the materials in the scene, by examining different classes of light paths from luminaires to detectors, and by uncovering various kinds of approximation errors implicit in the methods.
31.1 Introduction; 31.2 Approximate Solutions of Equations; 31.3 Method 1: Approximating the Equation; 31.4 Method 2: Restricting the Domain; 31.5 Method 3: Using Statistical Estimators; 31.6 Method 4: Bisection; 31.7 Other Approaches; 31.8 The Rendering Equation, Revisited; 31.9 What Do We Need to Compute?; 31.10 The Discretization Approach: Radiosity; 31.11 Separation of Transport Paths; 31.12 Series Solution of the Rendering Equation; 31.13 Alternative Formulations of Light Transport; 31.14 Approximations of the Series Solution; 31.15 Approximating Scattering: Spherical Harmonics; 31.16 Introduction to Monte Carlo Approaches; 31.17 Tracing Paths; 31.18 Path Tracing and Markov Chains; 31.19 Photon Mapping; 31.20 Discussion and Further Reading; 31.21 Exercises

Chapter 32: Rendering in Practice
We describe the implementation of a path tracer, which exhibits many of the complexities associated with ray-tracing-like renderers that attempt to estimate radiance by estimating integrals associated with the rendering equation, and a photon mapper, which quickly converges to a biased but consistent and plausible result.
32.1 Introduction; 32.2 Representations; 32.3 Surface Representations and Representing BSDFs Locally; 32.4 Representation of Light; 32.5 A Basic Path Tracer; 32.6 Photon Mapping; 32.7 Generalizations; 32.8 Rendering and Debugging; 32.9 Discussion and Further Reading; 32.10 Exercises

Chapter 33: Shaders
On modern graphics cards, we can execute small (and not-so-small) programs that operate on model data to produce pictures. In the simplest form, these are vertex shaders and fragment shaders, the first of which can do processing based on the geometry of the scene (typically the vertex coordinates), and the second of which can process fragments, which correspond to pieces of polygons that will appear in a single pixel. To illustrate the more basic use of shaders we describe how to implement basic Phong shading, environment mapping, and a simple nonphotorealistic renderer.
33.1 Introduction; 33.2 The Graphics Pipeline in Several Forms; 33.3 Historical Development; 33.4 A Simple Graphics Program with Shaders; 33.5 A Phong Shader; 33.6 Environment Mapping; 33.7 Two Versions of Toon Shading; 33.8 Basic XToon Shading; 33.9 Discussion and Further Reading; 33.10 Exercises

Chapter 34: Expressive Rendering
Expressive rendering is the name we give to renderings that do not aim for photorealism, but rather aim to produce imagery that communicates with the viewer, conveying what the creator finds important, and suppressing what's unimportant. We summarize the theoretical foundations of expressive rendering, particularly various kinds of abstraction, and discuss the relationship of the "message" of a rendering and its style. We illustrate with a few expressive rendering techniques.
34.1 Introduction; 34.2 The Challenges of Expressive Rendering; 34.3 Marks and Strokes; 34.4 Perception and Salient Features; 34.5 Geometric Curve Extraction; 34.6 Abstraction; 34.7 Discussion and Further Reading
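Relating to Chapter 33 above: the arithmetic a basic Phong fragment shader performs per pixel can be sketched on the CPU as follows (hypothetical helpers in C++, not the book's shader code, which targets the GPU):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { double x, y, z; };
inline Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
inline Vec3 operator*(double s, Vec3 a) { return {s * a.x, s * a.y, s * a.z}; }
inline double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
inline Vec3 normalize(Vec3 v) {
    double len = std::sqrt(dot(v, v));
    return {v.x / len, v.y / len, v.z / len};
}

// Classic Phong shading for one light: ambient + diffuse + specular, where the
// specular term uses the reflection of the light direction about the normal.
Vec3 phong(Vec3 n, Vec3 toLight, Vec3 toEye,
           Vec3 ambient, Vec3 diffuse, Vec3 specular, double shininess) {
    n = normalize(n); toLight = normalize(toLight); toEye = normalize(toEye);
    double nl = std::max(0.0, dot(n, toLight));
    Vec3 r = 2.0 * dot(n, toLight) * n + (-1.0 * toLight);  // reflect toLight about n
    double spec = nl > 0 ? std::pow(std::max(0.0, dot(r, toEye)), shininess) : 0.0;
    return ambient + nl * diffuse + spec * specular;         // per-channel color
}
```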
Chapter 35: Motion
An animation is a sequence of rendered frames that gives the perception of smooth motion when displayed quickly. The algorithms to control the underlying 3D object motion generally interpolate between key poses using splines, or simulate the laws of physics by numerically integrating velocity and acceleration. Whereas rendering primarily is concerned with surfaces, animation algorithms require a model with additional properties like articulation and mass. Yet these models still simplify the real world, accepting limitations to achieve computational efficiency. The hardest problems in animation involve artificial intelligence for planning realistic character motion, which is beyond the scope of this chapter.
35.1 Introduction; 35.2 Motivating Examples; 35.3 Considerations for Rendering; 35.4 Representations; 35.5 Pose Interpolation; 35.6 Dynamics; 35.7 Remarks on Stability in Dynamics; 35.8 Discussion

Chapter 36: Visibility Determination
Efficient determination of the subset of a scene that affects the final image is critical to the performance of a renderer. The first approximation of this process is conservative determination of surfaces visible to the eye. This problem has been addressed by algorithms with radically different space, quality, and time bounds. The preferred algorithms vary over time with the cost and performance of hardware architectures. Because analogous problems arise in collision detection, selection, global illumination, and document layout, even visibility algorithms that are currently out of favor for primary rays may be preferred in other applications.
36.1 Introduction; 36.2 Ray Casting; 36.3 The Depth Buffer; 36.4 List-Priority Algorithms; 36.5 Frustum Culling and Clipping; 36.6 Backface Culling; 36.7 Hierarchical Occlusion Culling; 36.8 Sector-based Conservative Visibility; 36.9 Partial Coverage; 36.10 Discussion and Further Reading; 36.11 Exercise

Chapter 37: Spatial Data Structures
Spatial data structures like bounding volume hierarchies provide intersection queries and set operations on geometry embedded in a metric space. Intersection queries are necessary for light transport, interaction, and dynamics simulation. These structures are classic data structures like hash tables, trees, and graphs extended with the constraints of 3D geometry.
37.1 Introduction; 37.2 Programmatic Interfaces; 37.3 Characterizing Data Structures; 37.4 Overview of kd Structures; 37.5 List; 37.6 Trees; 37.7 Grid; 37.8 Discussion and Further Reading

Chapter 38: Modern Graphics Hardware
We describe the structure of modern graphics cards, their design, and some of the engineering tradeoffs that influence this design.
38.1 Introduction; 38.2 NVIDIA GeForce 9800 GTX; 38.3 Architecture and Implementation; 38.4 Parallelism; 38.5 Programmability; 38.6 Texture, Memory, and Latency; 38.7 Locality; 38.8 Organizational Alternatives; 38.9 GPUs as Compute Engines; 38.10 Discussion and Further Reading; 38.11 Exercises

List of Principles
Bibliography
Index
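Relating to Chapter 37's intersection queries: bounding volume hierarchies rest on a cheap ray-versus-axis-aligned-box test. A minimal "slab" test sketch (hypothetical helper, not the book's code):

```cpp
#include <algorithm>
#include <utility>

struct Vec3 { double x, y, z; };

// "Slab" test: a ray hits an axis-aligned box iff the parameter intervals in
// which it lies between the box's min/max planes overlap on all three axes.
bool rayHitsBox(Vec3 origin, Vec3 invDir,       // invDir = 1 / direction, per component
                Vec3 boxMin, Vec3 boxMax, double tMax) {
    double t0 = 0.0, t1 = tMax;
    const double o[3]   = {origin.x, origin.y, origin.z};
    const double inv[3] = {invDir.x, invDir.y, invDir.z};
    const double lo[3]  = {boxMin.x, boxMin.y, boxMin.z};
    const double hi[3]  = {boxMax.x, boxMax.y, boxMax.z};
    for (int axis = 0; axis < 3; ++axis) {
        double tNear = (lo[axis] - o[axis]) * inv[axis];
        double tFar  = (hi[axis] - o[axis]) * inv[axis];
        if (tNear > tFar) std::swap(tNear, tFar);   // direction may be negative
        t0 = std::max(t0, tNear);
        t1 = std::min(t1, tFar);
        if (t0 > t1) return false;                  // intervals do not overlap
    }
    return true;
}
```

A BVH traversal calls such a test at each node and only descends into children whose boxes the ray actually hits.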


Details: Computer Graphics - Kurt Akeley, Steven K. Feiner, David F. Sklar

Title: Computer Graphics
Author: Kurt Akeley, Steven K. Feiner, David F. Sklar
Publisher: Addison Wesley Publishing Company
ISBN: 9780321399526
Year of publication: 2009
Number of pages: 1264
Binding: Hardcover
Weight: 2.27 kg


Reviews: Computer Graphics - Kurt Akeley, Steven K. Feiner, David F. Sklar
