It's not possible to control VRAM in the way you describe. The runtime has no idea which textures you're going to need shortly. It can only guess, and when it gets it wrong, the game becomes noticeably choppy as it transfers textures to the GPU (a slow process). And if there are X megabytes of textures to show on-screen, you need X megabytes of VRAM, regardless of whatever limit you set.
I haven't fully designed how the system will work yet; I'm still working on it, since it's important to get something useful that works well before I start coding. But it'll probably take the form of:
- Layout settings (load and free the layout's textures on an application or layout lifetime)
- System actions (dump all VRAM, dump currently unused VRAM, or cache certain objects for customisation)
- Object settings (load and free object textures over the layout or object lifetime), although this is a less effective way to do it (see the rough sketch after this list)
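To make the lifetime idea a bit more concrete, here's a very rough C++ sketch of how lifetime-scoped texture management could look on the runtime side. Everything here (the `TextureCache` class, the `Lifetime` enum, the placeholder GPU calls) is hypothetical and only illustrates tying loads and frees to application, layout or object lifetimes, not an actual implementation:

```cpp
#include <map>
#include <string>
#include <vector>

// Hypothetical lifetime scopes matching the settings described above.
enum class Lifetime { Application, Layout, Object };

struct Texture { unsigned gpuHandle = 0; size_t sizeBytes = 0; };

class TextureCache {
public:
    // Upload a texture to VRAM and remember which lifetime owns it.
    void Load(const std::string& name, Lifetime scope) {
        if (textures.count(name)) return;       // already resident
        Texture t = UploadToGPU(name);          // slow: CPU -> GPU transfer
        textures[name] = t;
        owners[scope].push_back(name);
    }

    // Free everything owned by a given lifetime, e.g. when a layout ends.
    void FreeScope(Lifetime scope) {
        for (const auto& name : owners[scope]) {
            ReleaseGPU(textures[name]);
            textures.erase(name);
        }
        owners[scope].clear();
    }

    // "Dump all VRAM" style system action.
    void FreeAll() {
        for (auto& [name, tex] : textures) ReleaseGPU(tex);
        textures.clear();
        owners.clear();
    }

private:
    Texture UploadToGPU(const std::string&) { return {}; }  // placeholder
    void ReleaseGPU(Texture&) {}                            // placeholder
    std::map<std::string, Texture> textures;
    std::map<Lifetime, std::vector<std::string>> owners;
};
```

In the editor this would surface as settings and actions rather than code, but the underlying bookkeeping would be along these lines.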
If you have any other suggestions for how it could be configured, I'm listening (now's the time to get your ideas in). However, in short, the user will have to decide how to load and unload VRAM. The runtime can't second-guess what your game is going to do, nor can it magically lower the minimum requirements to less than what they really are.