rabidsheep - everything I've heard from people in the know indicates that is incorrect. GPUs are extremely optimized, as chip makers try to squeeze every bit of performance out of them that they can. There's no reason at all for a card to draw offscreen pixels, since it's a complete waste of resources (unless you're intentionally doing something tricky, like what Ashley does to get text rendering in WebGL by drawing the text to an offscreen canvas and then converting it into a texture that WebGL can use).
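For anyone curious what that offscreen-canvas trick looks like, here's a rough sketch of the general idea in TypeScript (my own illustration, not Ashley's actual code): draw the text onto a 2D canvas that never appears on the page, then hand that canvas to WebGL as a texture source.

```ts
// Sketch of the "text to texture" trick: the 2D canvas is never shown
// on screen, it just acts as a pixel source for WebGL.
function makeTextTexture(gl: WebGLRenderingContext, text: string): WebGLTexture {
  // Offscreen 2D canvas (never attached to the DOM)
  const canvas = document.createElement("canvas");
  canvas.width = 256;
  canvas.height = 64;
  const ctx = canvas.getContext("2d")!;
  ctx.font = "32px sans-serif";
  ctx.fillStyle = "white";
  ctx.fillText(text, 8, 40);

  // Upload the canvas pixels as a WebGL texture
  const texture = gl.createTexture()!;
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, canvas);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  return texture;
}
```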
You can demonstrate this by making a layout and a sprite 1,000,000,000 pixels wide and tall. If offscreen pixels were actually drawn, that would require the graphics card to process 1,000,000,000,000,000,000 pixels per frame (which no current card could handle), yet I still get 60 fps.
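To put a number on how impossible that would be, here's the back-of-envelope arithmetic (the fill-rate figure is just a generous ballpark I'm assuming, not a measured spec for any particular card):

```ts
// Back-of-envelope: what would drawing every offscreen pixel actually cost?
const spriteWidth = 1_000_000_000;                   // 1e9 px wide
const spriteHeight = 1_000_000_000;                  // 1e9 px tall
const pixelsPerFrame = spriteWidth * spriteHeight;   // 1e18 pixels per frame

// Assumed, generous fill rate for a high-end card: ~100 gigapixels/second.
const assumedFillRate = 1e11;

// Time to rasterize one frame if every pixel really were drawn:
const secondsPerFrame = pixelsPerFrame / assumedFillRate; // ~1e7 s, roughly 115 days
console.log({ pixelsPerFrame, secondsPerFrame });
```

That it runs at 60 fps instead of one frame per few months tells you the offscreen pixels simply aren't being drawn.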
Unless I'm misunderstanding your terminology somehow; I'm not sure how you consider rendering to be different from blitting to the screen.
If you think they're still being rendered because you notice that simply having tens of thousands of offscreen objects in the layout affects performance, that's not down to rendering. It's because every object has a little bit of code-side overhead even if it's doing nothing at all: it still gets processed by collision checks, variable checks, and so on.
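Here's a crude sketch of why that overhead exists, using a generic game-loop shape (this is illustrative only, not Construct's real internals, and all the function names are hypothetical): the per-object work runs whether or not the object is on screen, so the cost scales with object count rather than with visible pixels.

```ts
// Generic engine tick (illustrative): per-object logic always runs,
// only the final draw is skipped for offscreen objects.
interface Viewport { x: number; y: number; w: number; h: number }

interface GameObject {
  x: number;
  y: number;
  variables: Record<string, number>;
  onScreen(viewport: Viewport): boolean;
}

// Stubs standing in for the real engine work (hypothetical names):
function runBehaviors(_o: GameObject) { /* movement, timers, ... */ }
function checkCollisions(_o: GameObject) { /* update collision cells */ }
function evaluateConditions(_o: GameObject) { /* event/variable checks */ }
function draw(_o: GameObject) { /* issue the GPU draw call */ }

function tick(objects: GameObject[], viewport: Viewport) {
  for (const obj of objects) {
    runBehaviors(obj);        // runs for every object, on screen or not
    checkCollisions(obj);     // still processed even when offscreen
    evaluateConditions(obj);  // still processed even when offscreen
  }
  for (const obj of objects) {
    if (obj.onScreen(viewport)) draw(obj); // only visible objects hit the GPU
  }
}
```

So the slowdown you see with huge offscreen object counts is CPU-side logic overhead, not the GPU drawing pixels nobody can see.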