On disk or on a server, it's considered a file, such as a PNG. Once that file is loaded and decoded into an array of pixels in memory, it's an image. To render the image, those pixels need to be copied into video memory, at which point they're a texture.
In WebGL, for example, you'd first create an HTML image object from the PNG, then a WebGL texture object from the image.
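A minimal sketch of those two steps, assuming a page with a `<canvas>` element and a reachable `photo.png` (both placeholder names; this runs in a browser, not Node):

```javascript
// Grab a WebGL context from a canvas on the page (assumed to exist).
const gl = document.querySelector('canvas').getContext('webgl');

const img = new Image();
img.onload = () => {
  // At this point the PNG has been decoded: the pixels live in CPU
  // memory as an image. texImage2D copies them into video memory,
  // turning them into a texture.
  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, img);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
};
img.src = 'photo.png'; // file -> image: the decode happens on load
```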
If you delve into WebGL you can get pixels back out into an image. Basically, render the texture to a framebuffer (or the canvas) and call gl.readPixels to get an array of the pixels. Then create an HTML canvas of the same size, get an ImageData buffer from it, and copy the pixel array into that. Finally, canvas.toDataURL() will give you a data URL (i.e. a base64-encoded PNG) that you can load into an image or download. That's basically how taking a screenshot works.
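The readback path can be sketched like this (browser-only; assumes `gl` is an existing WebGL context whose drawing buffer already contains the rendered texture):

```javascript
const width = gl.drawingBufferWidth;
const height = gl.drawingBufferHeight;

// 1. Copy the pixels from video memory back into a CPU-side array.
const pixels = new Uint8Array(width * height * 4); // RGBA, 4 bytes/pixel
gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);

// 2. Create a 2D canvas of the same size and copy the pixels into it.
const canvas2d = document.createElement('canvas');
canvas2d.width = width;
canvas2d.height = height;
const ctx = canvas2d.getContext('2d');
const imageData = ctx.createImageData(width, height);
imageData.data.set(pixels);
ctx.putImageData(imageData, 0, 0);

// 3. Encode the canvas contents as a PNG data URL.
const dataUrl = canvas2d.toDataURL('image/png');
```

One caveat: gl.readPixels returns rows bottom-to-top while canvas ImageData is top-to-bottom, so a real implementation would flip the rows vertically before (or while) copying.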