I've read up on it, and as far as I understand it, that's still incorrect - I don't think C2 is blitting at all. I'm almost 100% positive it's billboarding instead (drawing flat, texture-mapped, camera-facing polygons). Even the link you posted describes rendering as the process of converting a scene file into a raster image, with the drawing of the pixels themselves being part of that process. I've rendered stuff in plenty of 3D packages, and none of them have ever wasted time rendering offscreen pixels - that would be a completely pointless waste of time, and I showed in my example above that C2 doesn't do it either.
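To make the distinction concrete: a blit copies pixel data on the CPU, while a billboard is just a textured quad the GPU rasterizes. Here's a minimal sketch (my own names, not C2's actual internals) of what submitting a sprite as a quad amounts to:

```js
// A minimal sketch (my names, not C2's internals) of why a sprite draw is
// "billboarding" rather than blitting: the engine submits four vertices
// for a textured quad and the GPU fills in the pixels, instead of the CPU
// copying pixel data directly.
function spriteQuad(x, y, width, height, angle) {
  const hw = width / 2, hh = height / 2;
  const cos = Math.cos(angle), sin = Math.sin(angle);
  // Rotate each corner of the quad around the sprite's center.
  const corner = (cx, cy) => ({
    x: x + cx * cos - cy * sin,
    y: y + cx * sin + cy * cos,
  });
  // These four vertices (plus texture coordinates) are all the GPU needs;
  // rasterizing the texture across them is the card's job.
  return [corner(-hw, -hh), corner(hw, -hh), corner(hw, hh), corner(-hw, hh)];
}

console.log(spriteQuad(100, 100, 64, 64, Math.PI / 4));
```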
It's my understanding that first, a scene 'file' is generated containing the vertex information for every object in the scene (Ashley reuses the data generated for collision polygons as an optimization). The screen is then drawn from that data, rendering the onscreen parts of each object back to front. This can cause overdraw - the same pixel being drawn multiple times when it's overlapped by multiple objects. The vertex information for offscreen objects is still there, but it isn't used for the actual drawing of the image itself, because the graphics card only draws what's inside the screen.
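A toy version of that back-to-front pass, with made-up object/viewport shapes; the overlapsViewport() test here just stands in for the clipping the rasterizer does in hardware, it's not something you'd write yourself:

```js
// Drawing back to front means later quads can overwrite earlier pixels -
// that's overdraw. Offscreen quads produce no pixels at all.
const viewport = { left: 0, top: 0, right: 640, bottom: 480 };

function overlapsViewport(o) {
  return o.x < viewport.right && o.x + o.w > viewport.left &&
         o.y < viewport.bottom && o.y + o.h > viewport.top;
}

function drawScene(objects, drawQuad) {
  const backToFront = objects.slice().sort((a, b) => a.z - b.z);
  for (const o of backToFront) {
    // Every object's vertex data exists, but only quads touching the
    // viewport contribute pixels to the final image.
    if (overlapsViewport(o)) drawQuad(o);
  }
}

drawScene(
  [{ x: 700, y: 0, w: 64, h: 64, z: 0 },   // fully offscreen: never drawn
   { x: 10, y: 10, w: 64, h: 64, z: 1 }],  // onscreen: drawn
  o => console.log('drawing quad at', o.x, o.y)
);
```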
I could easily believe that the graphics driver cuts the undrawn vertex information down to just the objects on screen, which would make sense, but the drawn image itself? No. Cards have limited pixel fill rates, and drawing offscreen pixels would be terribly wasteful. Also, checking whether geometry is onscreen is a calculation the driver can almost certainly do far faster automatically than anything you could do to optimize it in JavaScript, while still keeping the objects available to process in code/events.
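Some rough back-of-envelope arithmetic on the fill-rate point (all numbers invented for illustration):

```js
// Fill rate is a hard per-frame budget, so pixels spent offscreen are
// pixels stolen from what the player actually sees.
const fillRate = 4e9;                 // assumed: pixels filled per second
const fps = 60;
const pixelBudget = fillRate / fps;   // pixels available per frame
const screenPixels = 1920 * 1080;
console.log('overdraw budget:',
            (pixelBudget / screenPixels).toFixed(1),
            'full-screen passes per frame');
// A layout rendered 10x larger than the screen would burn ~10 of those
// passes on pixels nobody can see - which is why rasterizers clip instead.
```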
In practice, it doesn't matter even if either of us has it wrong. Not even Minecraft draws things out into infinity - it, like games all the way back to the NES and earlier, creates/generates and destroys/stores objects based on distance, because even the most optimized assembly possible wouldn't be enough to overcome the fact that computers have limits. If someone wants to have a million objects in a layout, like Bobofet does, then they should implement a dynamic system to load and destroy objects as they get close to / far from the screen, the same way everybody else does it.
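A minimal sketch of that kind of dynamic system; "records" is a hypothetical array of saved object data, "live" is a Map of currently-created instances, and create/destroy are whatever your engine uses:

```js
const ACTIVATE_DIST = 1000; // assumed radius around the camera, in pixels

function updateStreaming(records, live, camX, camY, create, destroy) {
  for (const rec of records) {
    const dist = Math.hypot(rec.x - camX, rec.y - camY);
    const inst = live.get(rec.id);
    if (dist < ACTIVATE_DIST && !inst) {
      live.set(rec.id, create(rec));   // close to the screen: create it
    } else if (dist >= ACTIVATE_DIST && inst) {
      rec.x = inst.x;                  // far away: save its state back...
      rec.y = inst.y;
      destroy(inst);                   // ...and destroy the instance
      live.delete(rec.id);
    }
  }
}
```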
Bobofet - if you're using a grid like Minecraft, saving/loading the instances to/from an array would work great for that (rough sketch below). You might also want to do a distance check on the tiles before testing collisions with them, to cut down the number of collision checks. Finally, tiled objects render faster than lots of small sprites covering the same space, or, as Tokinsom suggested, paste the tiles to a canvas and then load the canvas image into a sprite.
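Here's the rough idea in plain JavaScript; TILE and the helper names are my assumptions, not a C2 API:

```js
// The grid array stores just enough to recreate a tile, and nearbyTiles()
// is the cheap distance filter to run before the actual collision checks.
const TILE = 32; // assumed tile size in pixels

function saveTile(grid, inst) {
  const col = Math.floor(inst.x / TILE), row = Math.floor(inst.y / TILE);
  (grid[row] = grid[row] || [])[col] = inst.frame; // store only the frame
}

function loadTile(grid, row, col, createSprite) {
  const frame = grid[row] && grid[row][col];
  if (frame !== undefined) createSprite(col * TILE, row * TILE, frame);
}

function nearbyTiles(tiles, px, py, radius) {
  // Cheap filter so the expensive collision test only runs on close tiles.
  return tiles.filter(t => Math.hypot(t.x - px, t.y - py) < radius);
}
```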
Rexrainbow was actually talking about possibly making a plugin that would load/create or save/destroy instances based on distance automatically - I recommend bugging him for it, because I want that plugin too. :)