This morning I posted a status message on Facebook about network delay.
The speed of light has a fixed upper limit. In fibre, this is about 200,000 km/s, and it's about the same for electricity through copper. This means that a signal sent over a cable that runs 2,754 km from Jammu to Kanyakumari would take about 14ms to get through.
So a round trip would take about 28ms, and that is for a single cycle. If it happens 250 times, the accumulated round-trip time is about 7 seconds. So you get a clear idea of how latency affects the HTTP request and response drama :-)
French ISPs have been advertising for years about being able to reach 24Mb/s with DSL lines. Why even bother about website performance if everyone supposedly has 24Mb/s available?
Bandwidth is how much data we can transfer at once. Latency is how long a byte of data takes to travel end to end (think the length of the road versus the speed of the cars). Here we are discussing the round-trip time: the latency to travel back and forth.
Latency (round trip time) depends mainly on the distance between you and your peers.
Take that distance, divide by the speed of light, divide again by 0.66 (light travels slower in a fiber), and multiply by two for the back and forth. Then add about 10 to 20ms for your hardware, your ISP's infrastructure, and the web server's hardware and network. That gives the minimal latency you may hope for (but will never reach):
Latency (round trip) = 2 x (distance) / (0.66 x speed of light) + 20ms
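As a quick sanity check, here is that formula as a small JavaScript function (a sketch; the constants are the ones quoted above, and real-world figures will be worse):

function minRoundTripMs(distanceKm) {
    var SPEED_OF_LIGHT_KM_S = 300000; // speed of light in vacuum, km/s
    var FIBER_FACTOR = 0.66;          // light in fiber travels at ~66% of that
    var OVERHEAD_MS = 20;             // hardware + ISP + server overhead from the text
    var oneWaySeconds = distanceKm / (FIBER_FACTOR * SPEED_OF_LIGHT_KM_S);
    return 2 * oneWaySeconds * 1000 + OVERHEAD_MS;
}
console.log(minRoundTripMs(2754)); // Jammu to Kanyakumari: about 48ms best case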
For example, in France (which has better figures than most countries, so expect worse than this elsewhere), the latency of typical DSL lines goes from 30ms (French websites and CDNs) to 60-70ms (big players in Europe). Expect 100 to 200ms for a US website with no relay in Europe.
3G phone networks usually add a tax of 100ms, sometimes more. VPNs, bad proxies, antivirus software, badly written portals and badly set up internal networks may also noticeably increase latency.
Big companies have their own private "serious" direct connections to the Internet. They also often have (at least in France) networks with filtering firewalls, complex architectures between the head office and branch offices, and sometimes overloaded switches and routers. Expect an added tax of 50ms to 250ms compared to a simple DSL line.
So 50ms is really small; latency isn't that important, is it?
Round-trip time is a primary concern. Mostly, your browser waits, and it waits because of latency.
When you make a request, you have to wait a few milliseconds for the server to generate the response, but also a few for your request and its response to travel back and forth. Every request you perform costs at least one round-trip time.
Suppose a website home page requires 250 requests. Microsoft Internet Explorer 7 has two parallel download queues, so that is 125 requests each. With a standard round-trip time of 60ms, we are assured of waiting at least 7.5 seconds before the page fully loads. Then we have to add the time needed to download and process the files themselves.
It's all about the TCP game?
TCP is the protocol we use to connect to a web server before sending it our request. It's like chatting on the phone: you never directly say what you called to say. You first say "hello", wait for your peer to say "hello", then ask an academic "what's up?" and wait for an answer (which you probably won't even listen to, but you will wait for it anyway). In Internet life, this courtesy is named TCP. TCP sends a "SYN" in place of "hello" and gets a "SYN-ACK" back as an answer. The more latency you have, the longer this initialization takes.
DNS before TCP?
That's not all. Before saying "hello" to your friend on the phone, you have to dial his phone number. For the Internet, that is the IP address. Either your browser performed a request to the same domain a few seconds before and can reuse the result, or it has to perform a DNS request. That request may be answered from your ISP's cache (cheap) or need to be sent to a distant server (expensive if the domain's name server is far away).
For each new TCP connection you will have to wait again, for a time that depends on latency: the latency to your ISP if the result is in its cache, the latency to the DNS server if not.
No, that's not the end of it. Meet the UDP protocol :-)
DNS usually runs over the UDP protocol. UDP is a simple "quick and cheap" request/response protocol, with no need to establish a connection beforehand as TCP has. However, when the response weighs over 512 bytes, the server may either send a larger response (the EDNS specification) or ask the client to retry over TCP. Large DNS responses used to be rare, but DNS now has a security extension (DNSSEC) that requires larger responses.
The problem is that many badly configured firewalls still block DNS responses of more than 512 bytes. A few others block the UDP fragmentation needed for responses of more than about 1.5KB (fragmentation spreads the response over multiple UDP packets, as each one is limited in size). In short: you may well see a UDP DNS request first, then a fallback to TCP.
If that happens, the client first asks over UDP, the server answers "please retry over TCP", the client opens a TCP connection (SYN + SYN-ACK) and then asks again. In place of one round-trip time, we now have three. At a 60ms round trip, that is 180ms gone before the HTTP request is even sent.
Finally?
A simple 10KB image will need 3 round trips. jQuery (77KB) will need 7 round trips. At 60 to 100ms per round trip, it is easy to understand that latency matters far more than anything else.
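To see where those counts come from, here is a rough slow-start sketch (my assumptions: an initial congestion window of 3 segments of ~1460 bytes, doubling every round trip, plus one round trip for the TCP handshake; real stacks vary):

function slowStartRoundTrips(payloadBytes) {
    var SEGMENT_BYTES = 1460; // typical TCP segment payload
    var cwnd = 3;             // assumed initial congestion window, in segments
    var sent = 0, trips = 0;
    while (sent < payloadBytes) {
        sent += cwnd * SEGMENT_BYTES; // one window of data per round trip
        cwnd *= 2;                    // slow start doubles the window each time
        trips++;
    }
    return trips + 1; // +1 for the TCP handshake
}
console.log(slowStartRoundTrips(10 * 1024)); // 3 round trips for a 10KB image
console.log(slowStartRoundTrips(77 * 1024)); // 6 here; the exact count depends on the initial window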
So we have seen how slow start and congestion control affect the throughput of a network connection. Each network round trip is limited by how long it takes photons or electrons to get through, and anything we can do to reduce the number of round trips should reduce total page download time, right? Well, it may not be that simple; we only really care about round trips that run end-to-end. Latency has been a problem whenever signals have had to be transmitted over a distance. Whether it is a rider on a horse or electrons running through metal, each has had its own problems with it.
“It's not about how to achieve your dreams, it's about how to lead your life, ... If you lead your life the right way, the karma will take care of itself, the dreams will come to you.” ― Randy Pausch, The Last Lecture
Sunday, December 26, 2010
Third-party tools that help in optimization
Of course, the first name is YUI Compressor. It minifies both JavaScript and CSS files. The YUI Compressor needs Java to work, so be sure to have a Java runtime installed.
The second name is OptiPNG, a PNG optimization tool you can run from the command line. Check out http://www.phpied.com/png-optimization-tools/ and http://optipng.sourceforge.net/
The third name is CSSEmbed, a tool to automatically embed images into CSS files as data URIs. It is a very small, simple tool that reads in a CSS file, identifies the images referenced within, converts them to data URIs, and outputs the resulting style sheet. The newly created style sheet is an exact duplicate of the original, complete with comments and indentation intact; the only difference is that all references to image files have been replaced with data URIs. Download it from http://github.com/nzakas/cssembed/
The fourth is JPEG optimization, using a tool like jpegtran. It covers the following tasks:
- Stripping metadata (which is sometimes bulky and useless for web display)
- Optimizing Huffman tables
- Converting a JPEG to progressive encoding
Read more on how to use jpegtran at http://www.phpied.com/installing-jpegtran-mac-unix-linux/
Another hack to render heavy HTML pages
When an HTML page is loaded, the browser needs to do a lot of work. It has to parse the HTML, build element collections (so that things like
getElementsByTagName() can work faster), match CSS rules, and so on. And then, finally, render all those elements; you may know this process as repaint. Repainting is one of the slowest processes in browsers. One quick solution: we don't need to show all 500KB of text at once. We can pick the first few sentences and push them to the screen so the user can start reading while the browser parses the rest of the page.
How do we do that?
To make all this large text invisible to the browser, all we have to do is comment it out:
<body>
<!--
<p>Well, LARGE HTML HERE ...</p>
-->
</body>
With the text content commented out, the page parses much more quickly.
So we have the commented text; what now? Actually, an HTML comment is not just a hidden chunk of code, it's a DOM node which can be easily accessed. Now we need to find this node and parse its content into a DOM tree:
var elems = document.body.childNodes;
for (var i = 0, il = elems.length; i < il; i++) {
    var el = elems[i];
    if (el.nodeType == 8) { // node type 8 is a comment
        var div = document.createElement('div');
        div.innerHTML = el.nodeValue;
        // the DIV now contains the parsed DOM elements, so we can work with them
        break;
    }
}
Since this plain-text parsing doesn't require the browser to do CSS matching, repainting and the other work it normally does while parsing a page, it also performs very fast.
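As a follow-up (my sketch, not part of the original trick): once the comment's content is parsed into the DIV, you can move its children into the page in small batches so the UI stays responsive:

function appendInChunks(source, target, chunkSize) {
    var moved = 0;
    while (source.firstChild && moved < chunkSize) {
        target.appendChild(source.firstChild); // moving nodes empties the source DIV
        moved++;
    }
    if (source.firstChild) {
        // yield to the browser, then continue with the next batch
        window.setTimeout(function() { appendInChunks(source, target, chunkSize); }, 0);
    }
}
appendInChunks(div, document.body, 10); // 'div' is the variable from the snippet above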
Thursday, December 23, 2010
A few more thoughts on script loaders in websites
Last week, JS guru Steve Souders (Google) released his ControlJS project. Its goal is to give developers the freedom to load JS files and execute them later on a page, in response to user actions.
At shiksha.com, we had already applied the same technique. We load heavy dynamic pages in an overlay (modal box) through AJAX, but initially we ran into one problem: if we load a page with AJAX and that page contains inline JS code, the JS code is not executed. So we used a technique/hack to solve the issue.
We parse all the inline JS and CSS that comes in script and style HTML tags and eval it later, once we get the AJAX success callback. Here is the code:
function ajax_parseJs(obj)
{
    var scriptTags = obj.getElementsByTagName('SCRIPT');
    var jsCode = '';
    for (var no = 0; no < scriptTags.length; no++) {
        if (scriptTags[no].src) {
            // external script: re-create it in <head> so the browser fetches and runs it
            var head = document.getElementsByTagName("head")[0];
            var scriptObj = document.createElement("script");
            scriptObj.setAttribute("type", "text/javascript");
            scriptObj.setAttribute("src", scriptTags[no].src);
            head.appendChild(scriptObj); // without this append the script never loads
        } else {
            // inline script: collect the code so we can execute it in one go
            if (navigator.userAgent.indexOf('Opera') >= 0) {
                jsCode = jsCode + scriptTags[no].text + '\n';
            } else {
                jsCode = jsCode + scriptTags[no].innerHTML;
            }
        }
    }
    if (jsCode) ajax_installScript(jsCode);
}
function evaluateCss(obj)
{
    var cssTags = obj.getElementsByTagName('STYLE');
    var head = document.getElementsByTagName('HEAD')[0];
    // the node list is live and appendChild moves each STYLE node out of obj,
    // so keep taking the first item instead of indexing through the list
    while (cssTags.length) {
        head.appendChild(cssTags[0]);
    }
}
function ajax_installScript(script)
{
    if (!script)
        return;
    if (window.execScript) {
        window.execScript(script); // IE
    } else if (window.jQuery && jQuery.browser.safari) { // Safari detection in jQuery
        window.setTimeout(script, 0);
    } else {
        window.setTimeout(script, 0);
    }
}
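For context, this is roughly how we wire those helpers into the AJAX success callback (a sketch; 'overlay' is a hypothetical container element that receives the fetched HTML):

function onAjaxSuccess(responseHtml) {
    var container = document.getElementById('overlay'); // hypothetical modal box
    container.innerHTML = responseHtml; // inline scripts inserted this way do NOT run
    evaluateCss(container);             // apply the styles that came with the page
    ajax_parseJs(container);            // extract and execute the scripts manually
}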
So I thought: can we do the same for script loading? I think it's not a big deal to load a script and execute it whenever the developer wants. Here is the code:
function loadScript(url, callback) {
    var script = document.createElement("script");
    script.type = "text/javascript";
    if (script.readyState) { // IE
        script.onreadystatechange = function() {
            if (script.readyState == "loaded" ||
                script.readyState == "complete") {
                script.onreadystatechange = null;
                callback();
            }
        };
    } else { // other browsers
        script.onload = function() {
            callback();
        };
    }
    script.src = url;
    document.getElementsByTagName("head")[0].appendChild(script);
}
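Usage is straightforward (a sketch; foo.js is a placeholder):

loadScript("foo.js", function() {
    // runs once foo.js has been fetched and executed
    console.log("foo.js is ready");
});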
var script = document.createElement("script");
script.type = "text/cache"; // fake type: the browser downloads the file but does not execute it
script.src = "foo.js";
script.onload = function() {
    // script has been loaded but not executed
};
document.body.insertBefore(script, document.body.firstChild);
// at some point later (note: execute() is the API this pattern proposes,
// not a standard DOM method)
script.execute();
Hope the above techniques are clear and you don't have any doubts. If you still have any query, write to me at tussion @ ymail dot com
Happy coding ... Enjoy XMAS holidays ...
Friday, December 17, 2010
W3C DOM vs. innerHTML: which is slower?
We can check by running test scripts.
<div id="writeroot" style="width:1px; height:1px; overflow:hidden;"></div>
<script>
function removeTable() {
    document.getElementById('writeroot').innerHTML = '';
}
</script>
W3CDOM 1: Create all elements as needed:-
removeTable();
var x = document.createElement('table');
var y = x.appendChild(document.createElement('tbody'));
for (var i = 0; i < 20; i++) {
    var z = y.appendChild(document.createElement('tr'));
    for (var j = 0; j < 20; j++) {
        var a = z.appendChild(document.createElement('td'));
        a.appendChild(document.createTextNode('*'));
    }
}
document.getElementById('writeroot').appendChild(x);
Result: 55% slower compared to the others.
W3CDOM 2: Create elements once, then clone:-
removeTable();
var x = document.createElement('table');
var y = x.appendChild(document.createElement('tbody'));
var tr = document.createElement('tr');
var td = document.createElement('td');
var ast = document.createTextNode('*');
for (var i = 0; i < 20; i++) {
    var z = y.appendChild(tr.cloneNode(true));
    for (var j = 0; j < 20; j++) {
        var a = z.appendChild(td.cloneNode(true));
        a.appendChild(ast.cloneNode(true));
    }
}
document.getElementById('writeroot').appendChild(x);
Result: 36% slower compared to the others.
tableMethods (insertRow/insertCell):-
removeTable();
var x = document.createElement('table');
var y = x.appendChild(document.createElement('tbody'));
for (var i = 0; i < 20; i++) {
    var z = y.insertRow(0);
    for (var j = 0; j < 20; j++) {
        var a = z.insertCell(0).appendChild(document.createTextNode('*'));
    }
}
document.getElementById('writeroot').appendChild(x);
Result: 50% slower compared to the others.
INNERHTML 1: concatenate one string:-
removeTable();
var string = '<table><tbody>';
for (var i = 0; i < 20; i++) {
    string += '<tr>';
    for (var j = 0; j < 20; j++) {
        string += '<td>*</td>';
    }
    string += '</tr>';
}
string += '</tbody></table>';
document.getElementById('writeroot').innerHTML = string;
Result: the fastest of all, by about 5%.
INNERHTML 2: push and join:-
removeTable();
var string = new Array();
string.push('<table><tbody>');
for (var i = 0; i < 20; i++) {
    string.push('<tr>');
    for (var j = 0; j < 20; j++) {
        string.push('<td>*</td>');
    }
    string.push('</tr>');
}
string.push('</tbody></table>');
var writestring = string.join('');
document.getElementById('writeroot').innerHTML = writestring;
Result: about 2% slower than the fastest test.
Actual results are as follows (lower numbers are faster):

Browser               innerHTML1  innerHTML2  W3CDOM1  W3CDOM2  tableMethods  Tests
Chrome 8.0.552            197        194        617      647        634        10
Chrome 9.0.597            175        180        349      362        398         5
Chrome 10.0.612           202        207        743      718        684         3
Firefox 3.6.11             93         90         81       71         79         1
Firefox 3.6.12            208        204        177      150        172         4
Firefox 3.6.13            115        112        105       86        106         3
Firefox 4.0b7 (beta)      786        696        508      409        378         7
IE 6.0                     20         84         18       19         10         8
IE 8.0                    240        234         43       47         47         9
iPhone 4.2.1               18         19         47       49         49         1
Opera 11.00               772        752        347      491        383         1
Safari 5.0.2              190        196        616      607        589         1
Safari 5.0.3              209        219        623      595        584        11
So finally innerHTML won: it is the fastest in most browsers tested (though the W3C DOM methods come out ahead in Firefox, IE 8 and Opera).
Thursday, December 16, 2010
What is the difference among Load, Utilization, Scalability, Throughput, Concurrency and Capacity?
Notation: X = throughput (tasks per unit of time), R = response time (time per task).
Load: how much work is incoming? Or, how big is the backlog?
Utilization: how much of a system's resources are used?
Scalability: what is the relationship between utilization and R?
Throughput: X; how many tasks can be done per unit of time?
Concurrency: how many tasks can we do at once?
Capacity: how big can X go without making other things unacceptable?
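One relation worth remembering (my addition, not from the original list): Little's law ties throughput, response time and concurrency together as N = X * R. A quick check in JavaScript:

var X = 200;   // throughput: tasks per second
var R = 0.05;  // response time: seconds per task
var N = X * R; // average concurrency: ~10 tasks in flight at any moment
console.log(N);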
Sunday, December 12, 2010
More about frontend optimization
People generally think that because they know Yahoo's 14 rules, it is easy to optimize page rendering, and that the one and only solution is to Ajaxify the page. But that doesn't mean every website will render fast on the web. As I have seen, and now strongly believe, every website needs its own unique solution to fix its speed issues, and the main problem is that people are unable to identify where the bottleneck is. Recently I looked at what FB did.
In Facebook's case, it's not easy to handle 500M users when the average time per user is more than 5 hours per month (Google and Yahoo see less than 2 hours). FB has a complex frontend infrastructure and runs two JS daemons to handle real-time updates and cache consistency. The main tasks are incremental updates, in-page writes and cross-page writes. Every state-changing operation is recorded and sent to the backend; when the backend detects a write operation, it sends a signal to the client to invalidate the cache. So a user usually browses FB across three versions of a page: 1. the cached version, 2. the state-changing version, 3. the restored version.
This is one frontend solution FB uses for caching; the other big ones are "BigPipe" and "Quickling", very advanced techniques that I still need to understand. But what I know so far is:
1. Use the delta between network latency and page rendering to do other work
2. Try to reduce the gap between domcontentloaded and window load
3. Use AJAX, but in a smart way: time-to-interact should be very small. Otherwise the page fully renders but is frozen, and the user can't interact while JavaScript is being fetched/parsed/executed
Performance is hard, so think twice before moving ahead. Now, coming to AJAX: everybody knows how to use AJAX, but very few people know the AJAX design pattern :P
An AJAX call goes through the following steps (a rough timing sketch follows the list):
1. Round-trip time: the amount of time between when the browser sends the request and when it receives the response
2. Parse time: next, the response returned from the server has to be parsed
3. JavaScript/CSS download time: each response can indicate that it needs more JavaScript/CSS before the content can be used (I trust you are smart enough to download a widget's JS and CSS lazily ;))
4. Render time: the amount of time it takes to actually change the display via innerHTML
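A rough way to see these phases is to timestamp an AJAX update (my sketch; url and container are placeholders, parse time here only covers reading the response text, and step 3 would add its own loadScript-style timing):

function timedAjaxUpdate(url, container) {
    var t0 = Date.now();
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url, true);
    xhr.onreadystatechange = function() {
        if (xhr.readyState == 4) {
            var t1 = Date.now();         // 1. round trip ends here
            var html = xhr.responseText; // 2. the response is read/parsed
            var t2 = Date.now();
            container.innerHTML = html;  // 4. the display is changed via innerHTML
            var t3 = Date.now();
            console.log('round trip:', (t1 - t0) + 'ms',
                        'parse:', (t2 - t1) + 'ms',
                        'render:', (t3 - t2) + 'ms');
        }
    };
    xhr.send(null);
}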
So I would like to know what solutions you have for these four issues. How do you optimize these areas?
Best of Luck !!! Happy Coding !!!
Friday, December 3, 2010
Serve Pre-Generated Static Files Instead Of Dynamic Pages
Well, I am still not sure, but I have read a few articles about how to scale site performance without much extra effort, and I think this is the cheapest technique I have seen so far. We know well that static files have the advantage of being very fast to serve: read from disk and display, simple and fast, especially when caching proxies are used. The issues are how to bulk-generate the initial files, how to serve them, and how to keep the changed files up to date, especially regenerating static pages when changes occur. When a new entity is added to the system, hundreds of pages could be impacted, which would require the affected static pages to be regenerated. It's a very pragmatic solution and rock solid in operation. See more detail at http://eventseer.net/p/thomas_brox_roest/whiteboardentry/13/
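A minimal sketch of the regeneration idea (my illustration in Node.js; pagesAffectedBy and renderPage are hypothetical stand-ins for a real dependency index and template engine):

var fs = require('fs');

function pagesAffectedBy(entityId) {
    // stand-in: look up which static pages reference this entity
    return ['entity-' + entityId + '.html'];
}

function renderPage(file) {
    // stand-in: run the real template engine here
    return '<html><body>Pre-generated content for ' + file + '</body></html>';
}

// When an entity changes, regenerate only the affected static files;
// the web server or caching proxy then serves them straight from disk.
function onEntityChanged(entityId) {
    pagesAffectedBy(entityId).forEach(function(file) {
        fs.writeFileSync(file, renderPage(file));
    });
}

onEntityChanged(42);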