**Backface Culling & Z-Buffering**: Backface culling discards polygons facing away from the camera, and the Z-buffer ensures only the closest polygon's pixels are drawn, solving the depth-ordering problem that simply doesn't exist in 2D graphics.
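A minimal sketch of both ideas in plain Python (illustrative only — real GPUs do this in fixed-function hardware, and the grid size, colors, and function names here are invented for the example):

```python
# Tiny framebuffer with a per-pixel depth buffer (Z-buffer).
WIDTH, HEIGHT = 4, 4
FAR = float("inf")

z_buffer = [[FAR] * WIDTH for _ in range(HEIGHT)]   # start "infinitely far"
frame = [[None] * WIDTH for _ in range(HEIGHT)]

def draw_fragment(x, y, depth, color):
    """Write a pixel only if it is nearer than what's already stored."""
    if depth < z_buffer[y][x]:
        z_buffer[y][x] = depth
        frame[y][x] = color

def is_backface(v0, v1, v2):
    """Screen-space winding test: with counter-clockwise vertices taken as
    front-facing, a negative signed area means the polygon faces away."""
    ax, ay = v1[0] - v0[0], v1[1] - v0[1]
    bx, by = v2[0] - v0[0], v2[1] - v0[1]
    return ax * by - ay * bx < 0

draw_fragment(1, 1, 5.0, "red")    # far polygon drawn first
draw_fragment(1, 1, 2.0, "blue")   # nearer polygon overwrites it
draw_fragment(1, 1, 9.0, "green")  # farther polygon is discarded
print(frame[1][1])                 # blue
```

Note the draw order doesn't matter: the depth comparison, not submission order, decides what survives.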
**Texture Mapping**: Instead of each polygon being a single flat color, images are mapped onto the polygon mesh by assigning each vertex a coordinate in a texture image, like applying decals to a model kit.
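A sketch of the core mechanic, assuming the common convention of texture coordinates in [0, 1] (the 2×2 texture, color names, and helper names are invented for illustration; real renderers interpolate with perspective correction and filter between texels):

```python
# Hypothetical 2x2 texture image, stored as rows of texel colors.
texture = [
    ["red",  "green"],
    ["blue", "white"],
]
TEX_W, TEX_H = 2, 2

def sample_nearest(u, v):
    """Map a (u, v) coordinate in [0, 1] to the nearest texel."""
    x = min(int(u * TEX_W), TEX_W - 1)
    y = min(int(v * TEX_H), TEX_H - 1)
    return texture[y][x]

def lerp_uv(uv0, uv1, t):
    """Interpolate the texture coordinates of two vertices."""
    return (uv0[0] + (uv1[0] - uv0[0]) * t,
            uv0[1] + (uv1[1] - uv0[1]) * t)

# Vertices carry UVs; a pixel 75% of the way between them gets an
# interpolated coordinate, which is then looked up in the texture.
u, v = lerp_uv((0.0, 0.0), (1.0, 0.0), 0.75)
print(sample_nearest(u, v))  # green
```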
**GPU Parallelism**: While a CPU core executes instructions sequentially on individual data values, a GPU runs the same instruction on thousands of data points simultaneously, making it ideal for rasterizing polygons or checking Z-buffer values en masse. This massively parallel design also makes GPUs useful for machine learning and scientific simulations.
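The contrast can be modeled with NumPy, whose whole-array operations mirror the "one instruction, many data points" style (this only illustrates the programming model — NumPy runs on the CPU, not the GPU):

```python
import numpy as np

# CPU-style: one multiply per element, executed in sequence.
def scale_sequential(values, s):
    out = []
    for v in values:          # one element at a time
        out.append(v * s)
    return out

# GPU-style model: a single operation applied to every element at once
# from the programmer's point of view.
depths = np.array([0.2, 0.9, 0.5, 0.7])
scaled = depths * 2.0         # same instruction across all lanes
closer = depths < 0.6         # e.g. a Z-buffer comparison en masse
print(scaled.tolist())        # [0.4, 1.8, 1.0, 1.4]
print(closer.tolist())        # [True, False, True, False]
```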
**Shaders**: Modern GPUs allow developers to inject custom programs called shaders into the rendering pipeline: vertex shaders compute a screen position for each vertex, and fragment shaders can alter each pixel's color for effects like stylized coloring.
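A software sketch of the two shader stages in Python rather than a real shading language like GLSL (the function names, the scale-only "transform", and the posterize effect are all invented for illustration):

```python
def vertex_shader(position, scale=0.5):
    """Runs once per vertex: transform a model-space position toward
    screen space (here just a uniform scale, standing in for a full
    model-view-projection transform)."""
    x, y, z = position
    return (x * scale, y * scale, z)

def fragment_shader(color):
    """Runs once per pixel: a stylized 'posterize' effect that snaps
    each RGB channel to a handful of discrete levels."""
    quantize = lambda c: round(c * 4) / 4
    r, g, b = color
    return (quantize(r), quantize(g), quantize(b))

print(vertex_shader((2.0, -1.0, 0.5)))    # (1.0, -0.5, 0.5)
print(fragment_shader((0.3, 0.62, 0.9)))  # (0.25, 0.5, 1.0)
```

On a real GPU, each function would run in parallel across all vertices and all pixels, tying this back to the parallelism described above.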
Leverage the GPU's architecture — executing one instruction across thousands of data points simultaneously — for non-graphics tasks like machine learning, scientific simulation, or password hashing, not just polygon rasterization.