Speaking of physics, I'm reminded of PhysX and how that didn't pan out. The upside, though, is that Ageia was bought by nVidia, so the same technology now runs on nVidia hardware without needing a separate card.
I think nVidia made a dumb mistake there. Why would anyone make hardware-accelerated physics an important part of their game if it was only going to run on either ATI or nVidia architectures? They should have licensed it to ATI, or made the implementation free and earned their money by licensing it to game developers. Instead they doomed it to being a minor decorative feature rather than something that could have drastically enhanced gameplay. And did any non-casual games ever use the fluid simulation? I'm sure someone will eventually build an OpenCL or DirectCompute physics engine, or an implementation of Havok on top of them; a rough sketch of what such a kernel looks like follows below.
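To make that concrete, here is a minimal sketch of a GPU physics integration kernel. The post mentions OpenCL and DirectCompute, but this sketch uses CUDA purely for illustration; the Particle struct, gravity constant, particle count and timestep are all made-up placeholders, not anything from PhysX or Havok.

```cuda
// Minimal sketch of a GPU particle-integration kernel (CUDA used for
// illustration; an OpenCL or DirectCompute version would be structurally
// the same). Struct layout, gravity value and timestep are arbitrary.
#include <cuda_runtime.h>

struct Particle {
    float3 pos;
    float3 vel;
};

__global__ void integrate(Particle* p, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    const float3 gravity = make_float3(0.0f, -9.81f, 0.0f);

    // Semi-implicit Euler: update velocity first, then position.
    p[i].vel.x += gravity.x * dt;
    p[i].vel.y += gravity.y * dt;
    p[i].vel.z += gravity.z * dt;

    p[i].pos.x += p[i].vel.x * dt;
    p[i].pos.y += p[i].vel.y * dt;
    p[i].pos.z += p[i].vel.z * dt;
}

int main()
{
    const int n = 1 << 20;               // one million particles
    Particle* d_p = nullptr;
    cudaMalloc(&d_p, n * sizeof(Particle));
    cudaMemset(d_p, 0, n * sizeof(Particle));

    const float dt = 1.0f / 60.0f;       // one 60 Hz frame
    integrate<<<(n + 255) / 256, 256>>>(d_p, n, dt);
    cudaDeviceSynchronize();

    cudaFree(d_p);
    return 0;
}
```

The point is just that per-particle work maps naturally onto one thread per particle, which is why this kind of engine fits any compute API rather than needing vendor-specific hardware.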
Here's a very good article by Santiago Orgaz, the maker of xNormal, about sparse voxel octrees:
http://santyhammer.blogspot.com/2009/03/future-of-realtime-graphics.html
Inside, there is a good YouTube video from id Software showing their future tech in games.
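For anyone who hasn't run into the term: a sparse voxel octree only stores the occupied parts of space, so empty regions cost nothing. Purely as a rough illustration (this is not id's or xNormal's actual format, and the field names are invented), a node layout and child lookup might look something like this:

```cuda
// Illustrative sparse-voxel-octree node: an 8-bit mask says which octants
// have children, and children are packed contiguously in a node array.
#include <cstdint>
#include <cstdio>

struct SvoNode {
    uint32_t childMask;   // bit k set => octant k has a child
    uint32_t firstChild;  // index of this node's first child in the array
};

// Portable popcount over the low 8 bits (works in host and device code).
__host__ __device__ inline int popcount8(uint32_t x)
{
    int c = 0;
    for (int b = 0; b < 8; ++b) c += (x >> b) & 1u;
    return c;
}

// Index of the child for octant k, or -1 if that octant is empty.
// Children are packed, so we count how many occupied octants precede k.
__host__ __device__ inline int childIndex(const SvoNode& n, int k)
{
    if (!(n.childMask & (1u << k))) return -1;
    return (int)n.firstChild + popcount8(n.childMask & ((1u << k) - 1u));
}

int main()
{
    // Root node with children only in octants 0, 3 and 5.
    SvoNode root{ (1u << 0) | (1u << 3) | (1u << 5), 1 };
    printf("octant 3 -> child %d, octant 4 -> child %d\n",
           childIndex(root, 3), childIndex(root, 4));
    return 0;
}
```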
And here is a John Carmack interview about id Tech 6:
http://www.pcper.com/article.php?aid=532
Carmack mentioned CUDA a lot, which I thought was strange, seeing as he uses OpenGL for his engines. I'd like to see a more recent interview to find out whether he's using OpenCL with id Tech 5 or Tech 6.
I doubt that graphics cards will ever be truly replaced.
There was a really good interview with Tim Sweeney (of Epic Games/Unreal Engine fame) where he talked about what would be possible graphically with a combined CPU/GPU solution, but the only links I found to that interview are broken now. Googling "Tim Sweeney interview CPU GPU" turns up a lot of other interesting interviews on the subject.
Looking at the long-term future, the next 10 years or so, my hope and expectation is that there will be a real convergence between the CPU, the GPU, and non-traditional architectures like the PhysX chip from Ageia and the Cell technology from Sony. You really want all of those to evolve into a large-scale multicore CPU that has a lot of the non-traditional computing power a GPU has now. A GPU processes a huge number of pixels in parallel using relatively simple control flow; CPUs are extremely good at random-access logic, lots of branching, handling caches, and things like that. I think, really, essentially, graphics and computing need to evolve together to the point where the future renderers I hope for and expect will look a lot more like a software renderer from previous generations than the fixed-function rasterizer pipeline and the stuff we have currently. I think GPUs will ultimately end up being... you know, when we look at this 10 years from now, we will look back at GPUs as being kind of a temporary fixed-function hardware solution to a problem that was ultimately just general computing.
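As a toy illustration of that "software renderer on general-purpose parallel hardware" idea, here's a minimal CUDA kernel that shades a framebuffer with one thread per pixel and no fixed-function rasterizer involved. The 640x480 resolution and the gradient "shader" are arbitrary placeholders, not anything from an actual engine.

```cuda
// Toy "software renderer" pass: every pixel is shaded by an arbitrary
// per-pixel program running as plain compute, with no rasterizer state.
#include <cuda_runtime.h>

__global__ void shadePixels(uchar4* fb, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    // Any per-pixel program could run here instead of this gradient:
    // ray casting, voxel traversal, REYES-style micropolygons, etc.
    unsigned char r = (unsigned char)(255 * x / width);
    unsigned char g = (unsigned char)(255 * y / height);
    fb[y * width + x] = make_uchar4(r, g, 128, 255);
}

int main()
{
    const int w = 640, h = 480;
    uchar4* d_fb = nullptr;
    cudaMalloc(&d_fb, w * h * sizeof(uchar4));

    dim3 block(16, 16);
    dim3 grid((w + block.x - 1) / block.x, (h + block.y - 1) / block.y);
    shadePixels<<<grid, block>>>(d_fb, w, h);
    cudaDeviceSynchronize();

    cudaFree(d_fb);
    return 0;
}
```

That loop-over-pixels structure is basically what old software renderers did on the CPU, which is the convergence being described: the "GPU part" becomes just another general-purpose program.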