The method was something I came up with after 9/11, when, among others, cnn.dk (one of the sites the caching was designed for) went down due to insane load spikes.
As Karateka correctly states, outputting pure HTML can increase performance by up to 3000%, especially because you can then use e.g. a kernel web server to speed things up even more (zero-copy etc.).
In short, the method worked like this:
The webserver served all pages as static pages (i.e. rewrite URL -> static URL), except pages that had to be dynamic (like "welcome <username>"). A backend system then watched the database for changes and regenerated the relevant static pages whenever a change occurred that affected a given page.
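The URL-rewriting side of this could look something like the following Apache mod_rewrite sketch (the original setup isn't described, so paths and filenames here are made up): serve a pre-generated static copy when one exists, and only fall through to the dynamic backend when it doesn't.

```apache
# If a pre-generated static copy of the requested page exists on disk...
RewriteCond %{DOCUMENT_ROOT}/static%{REQUEST_URI}.html -f
# ...serve it directly, never touching the dynamic code.
RewriteRule ^(.*)$ /static/$1.html [L]
# Otherwise fall through to the (hypothetical) dynamic front controller.
RewriteRule ^(.*)$ /index.php [L]
```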
This removed the normal "caching delay" of caching systems, as every page was regenerated immediately once the database change had occurred (we used triggers, as it was Oracle).
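A minimal sketch of the backend regenerator, assuming the Oracle trigger writes the ids of affected pages into a "dirty pages" queue that the backend drains. The queue, "database", and static store are plain Python structures here so the sketch is self-contained; in the real system they would be a DB table and files in the webserver's docroot, and the renderer would be the actual site-generating code.

```python
def render_page(page_id, articles):
    """Hypothetical renderer: turn a page's articles into static HTML."""
    body = "".join(f"<p>{a}</p>" for a in articles)
    return f"<html><body><h1>{page_id}</h1>{body}</body></html>"

def flush_dirty_pages(dirty_queue, db, static_store):
    """Regenerate every page the DB trigger marked as changed."""
    while dirty_queue:
        page_id = dirty_queue.pop()
        # Overwrite the static copy; the webserver serves it on the next hit.
        static_store[page_id] = render_page(page_id, db[page_id])

# Usage: an editor publishes a story, the trigger queues the front page.
db = {"frontpage": ["Story one", "Story two"]}
static_store = {}
dirty_queue = ["frontpage"]  # filled by the DB trigger in the real setup
flush_dirty_pages(dirty_queue, db, static_store)
```

The point of the queue is that the site-generating code never runs per-request; it runs once per change, so request load and regeneration load are decoupled.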
This also meant we had to find a way for blocks like "latest news" to work without having to regenerate every page on the system whenever that block was updated. The HTML designers found this was best done with iframes: put such blocks in a separate static file and let the browser "compile" the page.
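The iframe trick might look like this (file names are made up): the "latest news" block lives in its own static file, so when a story is published only that one file is regenerated, while the thousands of pages embedding it stay untouched.

```html
<!-- Any static page embeds the shared block but never contains it: -->
<iframe src="/static/blocks/latest-news.html"
        width="200" height="400" frameborder="0"></iframe>
```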
This approach meant the system could easily disable all non-static requests (or just give them low priority) when load was high, keeping the rest of the site online.
A system like that would be very cool for Postnuke, and the nice part was that we didn't have to optimize our site-generating code at all - we just inserted this caching system in front of it.
What do you think - would this be a good idea for Postnuke? If(!) it could be made to work with Postnuke, it could be very cool.
Klavs Klavsen - kl at vsen.dk - http://www.vsen.dk
Working with Unix is like wrestling a worthy opponent.
Working with windows is like attacking a small whining child who is carrying a .38.