# Forward and Backward Mapping for Computer Vision | by Javier Martínez Ojeda | May, 2023

The forward mapping process consists of the simple image transformation process discussed in the introduction and in the previous article: it iterates over all the pixels of the image and applies the corresponding transformation to each pixel individually. However, the cases in which the new position of a transformed pixel falls outside the image domain, an example of which is shown below, must be taken into account.

Transformed image’s pixels fall outside the original image domain. Image by author
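As a small sketch of this situation (hypothetical helper names, using the same 45-degree rotation as the rest of the article), a pixel near the top edge can map to a negative coordinate, i.e. outside the image domain:

```python
import numpy as np

# 45-degree rotation matrix (no homogeneous coordinate needed here,
# since there is no translation component)
angle = np.pi / 4
rot = np.array([[np.cos(angle), -np.sin(angle)],
                [np.sin(angle),  np.cos(angle)]])

# Rotate the pixel at (x=0, y=10) around the origin
x, y = 0, 10
new_x, new_y = np.rint(rot @ np.array([x, y])).astype(int)
print(new_x, new_y)  # -> -7 7: new_x is negative, outside the image domain
```

Any pixel whose transformed coordinates fall outside the valid range is simply discarded during forward mapping.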

To carry out the forward mapping process, first define a function that receives the original coordinates of a pixel as parameters. This function applies a transformation to the original pixel coordinates and returns the new coordinates of the pixel after the transformation. The following code example shows the function for a rotation transformation.

```python
import numpy as np
from typing import Tuple


def apply_transformation(original_x: int, original_y: int) -> Tuple[int, int]:
    # Define the rotation matrix (45 degrees, in homogeneous coordinates)
    rotate_transformation = np.array([[np.cos(np.pi/4), -np.sin(np.pi/4), 0],
                                      [np.sin(np.pi/4),  np.cos(np.pi/4), 0],
                                      [0, 0, 1]])
    # Apply the transformation after setting the homogeneous coordinate
    # of the original vector to 1
    new_coordinates = rotate_transformation @ np.array([original_x, original_y, 1]).T
    # Round the new coordinates to the nearest pixel
    return int(np.rint(new_coordinates[0])), int(np.rint(new_coordinates[1]))
```
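As a quick sanity check (a hypothetical usage sketch, with an illustrative `rotate_pixel` helper equivalent to the function above but without the homogeneous coordinate), the rotation centre stays fixed while other pixels move along a 45-degree arc:

```python
import numpy as np

angle = np.pi / 4
rot = np.array([[np.cos(angle), -np.sin(angle)],
                [np.sin(angle),  np.cos(angle)]])

def rotate_pixel(x: int, y: int):
    # Rotate and round to the nearest pixel, as apply_transformation does
    new = rot @ np.array([x, y])
    return int(np.rint(new[0])), int(np.rint(new[1]))

print(rotate_pixel(0, 0))   # -> (0, 0): the origin is the rotation centre
print(rotate_pixel(10, 0))  # -> (7, 7): 10/sqrt(2) rounds to 7
```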

Once you have this function, you only need to iterate over each pixel of the image, apply the transformation, and check whether the new pixel coordinates are within the domain of the original image. If the new coordinates are within the domain, the pixel at the new coordinates of the new image takes the value that the pixel had in the original image. If it falls outside the domain, the pixel is omitted.

```python
def forward_mapping(original_image: np.ndarray) -> np.ndarray:
    # Create the new image with the same shape as the original one
    new_image = np.zeros_like(original_image)
    for original_y in range(original_image.shape[0]):
        for original_x in range(original_image.shape[1]):
            # Apply the rotation to the original pixel's coordinates
            new_x, new_y = apply_transformation(original_x, original_y)
            # Check whether the new coordinates fall inside the image's domain
            if 0 <= new_x < new_image.shape[0] and 0 <= new_y < new_image.shape[1]:
                new_image[new_x, new_y, :] = original_image[original_x, original_y, :]
    return new_image
```

The result of applying a rotation transformation with forward mapping can be seen in the image below, where the original image is on the left and the transformed image on the right. It is important to note that for this image the origin of coordinates is in the upper left corner, so the image rotates anti-clockwise around that point.

Results of applying Forward Mapping. Left image extracted from MNIST Dataset. Full image by author

Regarding the result of the transformation, it can be seen that the transformed image does not have the full-black background of the original one, but instead shows many white stripes. This happens, as mentioned in the introduction, because the pixels of the original image do not always map to all the pixels of the new image. Since the new coordinates are computed by rounding to the nearest pixel, many intermediate pixels never receive a value. Because the new image is initialized with all pixels blank, the pixels that were never assigned a value during the transformation remain blank, producing those white stripes in the transformed image.
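These holes can be made concrete with a small sketch (hypothetical, using a plain 28×28 occupancy grid instead of a real MNIST image): forward-map every source pixel through the 45-degree rotation and count the in-bounds target cells that are never hit.

```python
import numpy as np

angle = np.pi / 4
rot = np.array([[np.cos(angle), -np.sin(angle)],
                [np.sin(angle),  np.cos(angle)]])

size = 28
hit = np.zeros((size, size), dtype=bool)
for y in range(size):
    for x in range(size):
        # Forward-map the source pixel and round to the nearest target pixel
        nx, ny = np.rint(rot @ np.array([x, y])).astype(int)
        if 0 <= nx < size and 0 <= ny < size:
            hit[nx, ny] = True

holes = np.count_nonzero(~hit)
print(f"{holes} of {size * size} target pixels never receive a value")
```

Every unhit cell keeps its initial value, which is exactly what produces the stripes in the rotated image.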

In addition, there is another notable problem: overlaps. This problem occurs when two pixels of the original image are transformed to the same pixel of the new image. With the code used in this article, if two pixels of the original image map to the same pixel of the new image, the new pixel takes the value of the last original pixel that was transformed, overwriting the value that was already set by the first one.
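A tiny sketch (hypothetical coordinates, with an illustrative `rotate_pixel` helper) shows two distinct source pixels colliding on the same target pixel after rounding:

```python
import numpy as np

angle = np.pi / 4
rot = np.array([[np.cos(angle), -np.sin(angle)],
                [np.sin(angle),  np.cos(angle)]])

def rotate_pixel(x: int, y: int):
    # Rotate and round to the nearest pixel
    new = rot @ np.array([x, y])
    return int(np.rint(new[0])), int(np.rint(new[1]))

# 1/sqrt(2) ~ 0.71 and 2/sqrt(2) ~ 1.41 both round to 1,
# so these two source pixels collide on the same target pixel
print(rotate_pixel(1, 0))  # -> (1, 1)
print(rotate_pixel(2, 0))  # -> (1, 1): an overlap, the second overwrites the first
```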
