Continuing with the Very Powerful Rendering Model: Java2D


The Shape Interface

class java.awt.geom.GeneralPath

Represents a geometric path constructed from straight line segments, quadratic curves, and cubic curves.

The append(Shape s, boolean connect) method provides a way to append one shape to another, optionally connecting their paths with a line segment.

GeneralPath maintains a current coordinate at all times; when a line segment is added, the current coordinate is the starting point of that segment. To add a line segment, use its lineTo() method, passing it two floats representing the destination coordinate. Use its moveTo() and quadTo() methods to add a point or a quadratic curve to the path, as sketched below.
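A minimal sketch of these calls (the class name and the coordinate values are arbitrary, illustrative choices):

import java.awt.geom.Ellipse2D;
import java.awt.geom.GeneralPath;

public class PathSketch {
    public static GeneralPath buildPath() {
        GeneralPath path = new GeneralPath();
        path.moveTo(10f, 10f);             // establish the current coordinate
        path.lineTo(60f, 10f);             // straight segment; current point becomes (60, 10)
        path.quadTo(85f, 40f, 60f, 70f);   // quadratic curve via control point (85, 40)
        path.closePath();                  // close back to the starting point
        // Append another Shape; 'true' connects the two paths with a line segment
        path.append(new Ellipse2D.Float(100f, 10f, 40f, 40f), true);
        return path;
    }
}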

Affine Transform class java.awt.geom.AffineTransform

Encapsulates a general affine transformation between two coordinate systems. The transformation is represented by a 3x3 matrix with an implied last row of [ 0 0 1 ], mapping each x and y in the bounding rectangle of a Shape to a new x' and y' according to the following:

x' = m00*x + m01*y + m02
y' = m10*x + m11*y + m12

The mxx values are the entries of the first two rows of the 3x3 matrix. In other words:

x' = (scaleX * x) + (shearX * y) + offsetX
y' = (shearY * x) + (scaleY * y) + offsetY

These transformations preserve lines and parallelism. We use them to perform scaling, shearing, and translation.
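As a quick check of this mapping (a sketch; the class name and the numeric values are arbitrary), we can transform a point and compare with the formula by hand:

import java.awt.geom.AffineTransform;
import java.awt.geom.Point2D;

public class MatrixCheck {
    public static void main(String[] args) {
        // m00=2 (scaleX), m10=0 (shearY), m01=1 (shearX), m11=3 (scaleY), m02=5 (offsetX), m12=7 (offsetY)
        AffineTransform at = new AffineTransform(2.0, 0.0, 1.0, 3.0, 5.0, 7.0);
        Point2D p = at.transform(new Point2D.Double(10.0, 20.0), null);
        // x' = 2*10 + 1*20 + 5 = 45,  y' = 0*10 + 3*20 + 7 = 67
        System.out.println(p);  // prints Point2D.Double[45.0, 67.0]
    }
}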

Affine Transform class java.awt.geom.AffineTransform (cont’d)

To construct an AffineTransform, we use either the double or the float version of the following constructor:

AffineTransform(m00, m10, m01, m11, m02, m12)

• Note the order of the parameters: it corresponds directly to the columns of the matrix described above.
• Rotation also preserves parallelism. Given an angle of rotation θ in radians:

x' = x*cos(θ) - y*sin(θ) + offsetX
y' = x*sin(θ) + y*cos(θ) + offsetY

Note that radians = degrees * π / 180.
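For instance (a sketch; the class name and the 30-degree angle are arbitrary), a pure rotation can be built either from the raw constructor, respecting the column order, or from the factory method:

import java.awt.geom.AffineTransform;

public class RotationSketch {
    public static void main(String[] args) {
        double theta = 30.0 * Math.PI / 180.0;           // degrees * π/180 = radians
        // Parameters follow the matrix columns: m00, m10, m01, m11, m02, m12
        AffineTransform manual = new AffineTransform(
                 Math.cos(theta),  Math.sin(theta),      // first column  ( cos, sin)
                -Math.sin(theta),  Math.cos(theta),      // second column (-sin, cos)
                 0.0, 0.0);                              // translation column
        AffineTransform factory = AffineTransform.getRotateInstance(theta);
        System.out.println(manual);
        System.out.println(factory);   // both describe the same rotation
    }
}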

• The Java2D graphics context maintains a transform attribute, just as it maintains a color and font attribute.
• Whenever we draw or fill a shape, this operation will be performed according to the current state of the transform attribute.

We can create an instance of AffineTransform by specifying the first two rows of the matrix as described above.
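A minimal sketch of how the transform attribute affects drawing (the component class, angle, and rectangle are illustrative assumptions, not part of the example later in these notes):

import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.geom.AffineTransform;
import java.awt.geom.Rectangle2D;
import javax.swing.JComponent;

public class TransformedPanel extends JComponent {
    @Override
    protected void paintComponent(Graphics g) {
        Graphics2D g2 = (Graphics2D) g;
        AffineTransform rotate = AffineTransform.getRotateInstance(Math.PI / 6, 100, 100);
        g2.transform(rotate);                              // compose with the current transform attribute
        g2.fill(new Rectangle2D.Double(75, 75, 50, 50));   // filled according to that transform
    }
}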

Affine Transform class java.awt.geom.AffineTransform (cont’d)

Alternatively, static methods can be used to create specific types of transformations:

getRotateInstance(), getScaleInstance(), getShearInstance(), getTranslateInstance().

The concatenate() method can be used to concatenate multiple transformations successively.

We can also compose specific transformations with an existing AffineTransform using its rotate(), scale(), shear(), and translate() methods.

Create and Build up AffineTransform Instances

AffineTransform a = new AffineTransform(); // the no-argument constructor creates an identity transform
a.translate(x, y);
a.rotate(theta);
a.rotate(theta, x, y);
a.scale(x, y);
a.shear(x, y);

Create and Build up AffineTransform Instances (cont’d)

   A single AffineTransform instance can be built up to represent a whole sequence of transformations.

Getting an Instance

AffineTransform a;
a = AffineTransform.getTranslateInstance(x, y);
a = AffineTransform.getRotateInstance(theta);
a = AffineTransform.getRotateInstance(theta, x, y);
a = AffineTransform.getScaleInstance(x, y);
a = AffineTransform.getShearInstance(x, y);

Each of these creates an AffineTransform representing the appropriate transformation.

The concatenate method

Via concatenate():

AffineTransform t, r;
t = AffineTransform.getTranslateInstance(10.0, 0.0);
r = AffineTransform.getRotateInstance(45.0 * Math.PI / 180.0);
t.concatenate(r);   // this modifies t
return t;

OR via the transformation methods:

AffineTransform a;
a = new AffineTransform();
a.translate(10.0, 0.0);
a.rotate(45.0 * Math.PI / 180.0);
return a;
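As a quick check (a sketch reusing the numbers above; the test point is arbitrary), both approaches map a point to the same place:

import java.awt.geom.AffineTransform;
import java.awt.geom.Point2D;

public class ComposeCheck {
    public static void main(String[] args) {
        AffineTransform t = AffineTransform.getTranslateInstance(10.0, 0.0);
        AffineTransform r = AffineTransform.getRotateInstance(45.0 * Math.PI / 180.0);
        t.concatenate(r);                          // t is now "rotate, then translate"

        AffineTransform a = new AffineTransform();
        a.translate(10.0, 0.0);
        a.rotate(45.0 * Math.PI / 180.0);          // same composition, built up in place

        Point2D p = new Point2D.Double(1.0, 0.0);
        System.out.println(t.transform(p, null));  // both print the same transformed point
        System.out.println(a.transform(p, null));
    }
}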

An Example: A demonstration of various image processing filters

import java.awt.*;
import java.awt.geom.*;
import java.awt.image.*;
import java.awt.color.*;

public class ImageOps implements GraphicsExample {
    static final int WIDTH = 600, HEIGHT = 675;              // Size of our example
    public String getName() { return "Image Processing"; }   // From GraphicsExample
    public int getWidth() { return WIDTH; }                  // From GraphicsExample
    public int getHeight() { return HEIGHT; }                // From GraphicsExample

    Image image;

    /** This constructor loads the image we will manipulate */
    public ImageOps() {
        java.net.URL imageurl = this.getClass().getResource("cover.gif");
        image = new javax.swing.ImageIcon(imageurl).getImage();
    }

    // These arrays of bytes are used by the LookupOp image filters below
    static byte[] brightenTable = new byte[256];
    static byte[] thresholdTable = new byte[256];

    static { // Initialize the arrays
        for (int i = 0; i < 256; i++) {
            brightenTable[i] = (byte)(Math.sqrt(i/255.0)*255);
            thresholdTable[i] = (byte)((i < 225) ? 0 : i);
        }
    }

    // This AffineTransform is used by one of the image filters below
    static AffineTransform mirrorTransform;
    static { // Create and initialize the AffineTransform
        mirrorTransform = AffineTransform.getTranslateInstance(127, 0);
        mirrorTransform.scale(-1.0, 1.0);  // flip horizontally
    }

    // These are the labels we'll display for each of the filtered images
    static String[] filterNames = new String[] {
        "Original", "Gray Scale", "Negative",
        "Brighten (linear)", "Brighten (sqrt)", "Threshold",
        "Blur", "Sharpen", "Edge Detect",
        "Mirror", "Rotate (center)", "Rotate (lower left)"
    };

    // The following BufferedImageOp image filter objects perform
    // different types of image processing operations.

    static BufferedImageOp[] filters = new BufferedImageOp[] {
        // 1) No filter here. We'll display the original image
        null,
        // 2) Convert to Grayscale color space
        new ColorConvertOp(ColorSpace.getInstance(ColorSpace.CS_GRAY), null),
        // 3) Image negative. Multiply each color value by -1.0 and add 255
        new RescaleOp(-1.0f, 255f, null),
        // 4) Brighten using a linear formula that increases all color values
        new RescaleOp(1.25f, 0, null),
        // 5) Brighten using the lookup table defined above
        new LookupOp(new ByteLookupTable(0, brightenTable), null),
        // 6) Threshold using the lookup table defined above
        new LookupOp(new ByteLookupTable(0, thresholdTable), null),
        // 7) Blur by "convolving" the image with a matrix
        new ConvolveOp(new Kernel(3, 3, new float[] {
            .1111f, .1111f, .1111f,
            .1111f, .1111f, .1111f,
            .1111f, .1111f, .1111f })),
        // 8) Sharpen by using a different matrix
        new ConvolveOp(new Kernel(3, 3, new float[] {
             0.0f, -0.75f,  0.0f,
            -0.75f,  4.0f, -0.75f,
             0.0f, -0.75f,  0.0f })),
        // 9) Edge detect using yet another matrix
        new ConvolveOp(new Kernel(3, 3, new float[] {
             0.0f, -0.75f,  0.0f,
            -0.75f,  3.0f, -0.75f,
             0.0f, -0.75f,  0.0f })),
        // 10) Compute a mirror image using the transform defined above
        new AffineTransformOp(mirrorTransform, AffineTransformOp.TYPE_BILINEAR),
        // 11) Rotate the image 180 degrees about its center point
        new AffineTransformOp(AffineTransform.getRotateInstance(Math.PI, 64, 95),
                              AffineTransformOp.TYPE_NEAREST_NEIGHBOR),
        // 12) Rotate the image 15 degrees about the bottom left corner
        new AffineTransformOp(AffineTransform.getRotateInstance(Math.PI/12, 0, 190),
                              AffineTransformOp.TYPE_NEAREST_NEIGHBOR),
    };

    // Draw the example
    public void draw(Graphics2D g, Component c) {
        // Create a BufferedImage big enough to hold the Image loaded in the constructor.
        // Then copy that image into the new BufferedImage object so that we can process it.

        BufferedImage bimage = new BufferedImage(image.getWidth(c), image.getHeight(c),
                                                 BufferedImage.TYPE_INT_RGB);
        Graphics2D ig = bimage.createGraphics();
        ig.drawImage(image, 0, 0, c);                    // copy the image

        // Set some default graphics attributes
        g.setFont(new Font("SansSerif", Font.BOLD, 12)); // 12pt bold text
        g.setColor(Color.green);                         // Draw in green
        g.translate(10, 10);                             // Set some margins

        // Loop through the filters
        for (int i = 0; i < filters.length; i++) {
            // If the filter is null, draw the original image; otherwise,
            // draw the image as processed by the filter
            if (filters[i] == null) g.drawImage(bimage, 0, 0, c);
            else g.drawImage(filters[i].filter(bimage, null), 0, 0, c);
            g.drawString(filterNames[i], 0, 205);        // Label the image
            g.translate(137, 0);                         // Move over
            if (i % 4 == 3) g.translate(-137*4, 215);    // Move down after 4
        }
    }
}

java.awt.image Interface BufferedImageOp

public interface BufferedImageOp

Describes single-input/single-output operations performed on BufferedImage objects.

It is implemented by AffineTransformOp, ConvolveOp, ColorConvertOp, RescaleOp, and LookupOp. These objects can be passed into a BufferedImageFilter to operate on a BufferedImage in the ImageProducer/ImageFilter/ImageConsumer paradigm. Classes that implement this interface must specify whether or not they allow in-place filtering: filter operations where the source object is equal to the destination object. This interface cannot be used to describe more sophisticated operations, such as those that take multiple sources. Note that this restriction also means that the values of the destination pixels prior to the operation are not used as input to the filter operation.
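To illustrate the producer/consumer point (a sketch, not part of the example above; the class name, method, and brighten factor are assumptions), any BufferedImageOp can be wrapped in a BufferedImageFilter:

import java.awt.Image;
import java.awt.Toolkit;
import java.awt.image.BufferedImageFilter;
import java.awt.image.FilteredImageSource;
import java.awt.image.RescaleOp;

public class FilterBridge {
    // Applies a BufferedImageOp to a plain Image via the ImageProducer/ImageConsumer pipeline
    public static Image brighten(Image source) {
        BufferedImageFilter filter = new BufferedImageFilter(new RescaleOp(1.25f, 0f, null));
        return Toolkit.getDefaultToolkit().createImage(
                new FilteredImageSource(source.getSource(), filter));
    }
}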

The ImageOps Module

The ImageOps module contains a number of ready-made image processing operations.

This module is somewhat experimental, and most operators only work on RGB images.

Java 2D's image processing model is based on BufferedImageOps. The source image is a BufferedImage:

short[] threshold = new short[256];
for (int i = 0; i < 256; i++)
    threshold[i] = (i < 128) ? (short)0 : (short)255;

// Instantiate the image operation of our choice.
// LookupOp is one of the image operations included in Java 2D;
// it implements the BufferedImageOp interface.
BufferedImageOp thresholdOp = new LookupOp(new ShortLookupTable(0, threshold), null);

// Call the operation's filter() method with the source image.
// The source is processed and the destination image is returned.
BufferedImage destination = thresholdOp.filter(source, null);

/* If we've already created a BufferedImage that will hold the
 * destination image, we can pass it as the second parameter to
 * filter(). If we pass null, a new destination BufferedImage is created. */

Filtering

public BufferedImage filter(BufferedImage src, BufferedImage dest)

src: the BufferedImage to be filtered
dest: the BufferedImage in which to store the results

Returns the filtered BufferedImage. Throws IllegalArgumentException if the source and/or destination image is not compatible with the types of images allowed by the class implementing this filter.

    Performs a single-input/single-output operation on a BufferedImage.

If the color models for the two images do not match, a color conversion into the destination color model is performed. If the destination image is null, a BufferedImage with an appropriate ColorModel is created.

An IllegalArgumentException may be thrown if the source and/or destination image is incompatible with the types of images allowed by the class implementing this filter.
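A brief sketch of the two destination options (the gray-scale op and the class name here are assumptions chosen only for illustration):

import java.awt.color.ColorSpace;
import java.awt.image.BufferedImage;
import java.awt.image.BufferedImageOp;
import java.awt.image.ColorConvertOp;

public class FilterDest {
    public static void demo(BufferedImage source) {
        BufferedImageOp grayOp = new ColorConvertOp(ColorSpace.getInstance(ColorSpace.CS_GRAY), null);
        // 1) Pass null: a destination with an appropriate ColorModel is created for us
        BufferedImage created = grayOp.filter(source, null);
        // 2) Pass an existing BufferedImage: the result is converted into dest's color model
        BufferedImage dest = new BufferedImage(source.getWidth(), source.getHeight(),
                                               BufferedImage.TYPE_INT_RGB);
        grayOp.filter(source, dest);   // result stored (and also returned) in dest
    }
}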

Convolution Operation

• Combines the colors of a source pixel and its neighbors to determine the color of a destination pixel.
• The combination is specified using a kernel: a linear operator that determines the proportion of each source pixel color used to calculate the destination pixel color.
• The kernel is overlaid on the image like a template and the convolution is performed one pixel at a time. As each pixel is convolved, the template is moved to the next pixel in the source image and the convolution process is repeated.

A source copy of the image is used for input values for the convolution, and all output values are saved into a destination copy of the image. Once the convolution operation is complete, the destination image is returned. The center of the kernel can be thought of as overlaying the source pixel being convolved.
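As a worked sketch of a single step (the 3x3 kernel and the gray values below are made-up illustrative numbers):

public class ConvolveOnePixel {
    public static void main(String[] args) {
        float[] kernel = {          // 3x3 blur kernel: equal weights summing to 1
            1/9f, 1/9f, 1/9f,
            1/9f, 1/9f, 1/9f,
            1/9f, 1/9f, 1/9f };
        int[] neighborhood = {      // source pixel (center = 90) and its 8 neighbors
            10, 20, 30,
            40, 90, 50,
            60, 70, 80 };
        float dest = 0;
        for (int i = 0; i < 9; i++)
            dest += kernel[i] * neighborhood[i];   // weighted sum over the overlaid template
        System.out.println(dest);   // ~50.0: the destination value for the center pixel
    }
}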

Convolution Operation Examples

The following code creates a ConvolveOp that combines equal amounts of each source pixel and its neighbors. This technique results in a blurring effect.

float ninth = 1.0f / 9.0f;
float[] blurKernel = {
    ninth, ninth, ninth,
    ninth, ninth, ninth,
    ninth, ninth, ninth };
BufferedImageOp blur = new ConvolveOp(new Kernel(3, 3, blurKernel));

Convolution Operation Examples (cont’d)

Another common convolution kernel emphasizes the edges in the image. This operation is commonly called edge detection. Unlike the other kernels, this kernel's coefficients do not add up to 1.

float[] edgeKernel = {
     0.0f, -1.0f,  0.0f,
    -1.0f,  4.0f, -1.0f,
     0.0f, -1.0f,  0.0f };
BufferedImageOp edge = new ConvolveOp(new Kernel(3, 3, edgeKernel));

How does the edge detection kernel operate in an area that is entirely one color?

Each pixel will end up with no color (black) because the color of surrounding pixels cancels out the source pixel's color. Bright pixels surrounded by dark pixels will remain bright. Notice how much darker the processed image is in comparison with the original. This happens because the elements of the edge detection kernel don't add up to 1.
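For example, in a region where every pixel has the same value v, the weighted sum under the kernel above is 4*v + 4*(-1)*v = 0, so the destination pixel comes out black; a nonzero (bright) result survives only where the center pixel differs from its neighbors.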

 

Convolution Operation Examples (cont’d)

A simple variation on edge detection is the sharpening kernel. The source image is added into an edge detection kernel as follows:

      0.0 -1.0 0.0

0.0 -1.0 0.0

0.0 0.0 0.0

0.0 -1.0 0.0

-1.0 4.0 -1.0 + 0.0 1.0 0.0 = -1.0 5.0 -1.0

0.0 0.0 0.0

0.0 -1.0 0.0

The sharpening kernel is only one possible kernel that sharpens images.

float[] sharpKernel = {
     0.0f, -1.0f,  0.0f,
    -1.0f,  5.0f, -1.0f,
     0.0f, -1.0f,  0.0f };
BufferedImageOp sharpen = new ConvolveOp(
    new Kernel(3, 3, sharpKernel),
    ConvolveOp.EDGE_NO_OP, null);

The convolution operation takes a source pixel's neighbors into account, but source pixels at the edges of the image don't have neighbors on one side. The ConvolveOp class includes constants that specify what the behavior should be at the edges. The EDGE_ZERO_FILL constant specifies that the edges of the destination image are set to 0.

The EDGE_NO_OP constant specifies that source pixels along the edge of the image are copied to the destination without being modified. If you don't specify an edge behavior when constructing a ConvolveOp, EDGE_ZERO_FILL is used.