This morning I posted a status message on Facebook about network delay.
The speed of light is a hard upper limit. In fibre it is about 200,000 km/s, and it's about the same for electricity through copper. This means that a signal sent over a cable running the 2,754 km from Jammu to Kanyakumari takes about 14 ms to get through.
So a round trip would take roughly 28 ms, and that is for a single exchange. If that exchange happens 250 times, the round trips alone add up to about 7 seconds. That should give a clear idea of how latency affects the HTTP request/response drama. :-)
French ISPs have been advertising for years that their DSL lines can reach 24 Mb/s. Why bother with website performance if everyone supposedly has 24 Mb/s available?
Bandwidth is how much data we can transfer at once; latency is how long a byte of data takes to travel end to end (think length of the road versus speed of the cars). Here we are discussing the round-trip time: the latency to travel there and back.
Latency (round trip time) depends mainly on the distance between you and your peers.
Take that distance, divide by the speed of light, then divide by 0.66 (light travels slower in fibre), and multiply by two for the round trip. Then add about 10 to 20 ms for your hardware, your ISP's infrastructure, and the web server's hardware and network. That is the minimal latency you can hope for (but will never reach):-
Latency (round trip) = 2 x (distance) / (0.66 x speed of light) + 20ms
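As a quick sanity check, this back-of-the-envelope formula is easy to put into code. A rough sketch (the 20 ms overhead is the same constant used above; real numbers will vary):

// Rough minimal round-trip latency estimate, per the formula above.
function minRoundTripMs(distanceKm) {
    var cKmPerMs = 300;                 // speed of light: ~300,000 km/s = 300 km/ms
    var fiberKmPerMs = 0.66 * cKmPerMs; // ~200 km/ms in fibre
    var overheadMs = 20;                // hardware, ISP and server overhead
    return 2 * distanceKm / fiberKmPerMs + overheadMs;
}
// Jammu to Kanyakumari, ~2754 km:
// 2 * 2754 / 198 + 20 ≈ 48 ms of unavoidable round-trip latency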
For example, in France (which has better figures than most other countries, so expect worse than these numbers) the latency of typical DSL lines goes from 30 ms (French websites and CDNs) to 60-70 ms (big players in Europe). Expect 100 to 200 ms for a US website with no relay in Europe.
3G phone networks usually add a tax of 100 ms, sometimes more. VPNs, bad proxies, antivirus software, badly written portals and badly set up internal networks may also noticeably increase latency.
Big companies have their own private "serious" direct connection to the Internet. They also often have (at least in France) networks with filtering firewalls, complex architecture between the head office and branch offices, and sometimes overloaded switches and routers. You can expect an added tax of 50 ms to 250 ms compared to a simple DSL line.
So 50 ms seems really small; latency isn't that important then, is it?
Round-trip time is a primary concern. Mostly, your browser waits; it waits because of latency.
When you make a request, you have to wait a few milliseconds for the server to generate the response, but also for the request and its response to travel back and forth. Each time you perform a request, you wait at least one round-trip time.
Suppose a website's home page requires 250 requests. Microsoft Internet Explorer 7 has two parallel download queues, so that's 125 requests each. With a standard round-trip time of 60 ms, we are guaranteed to wait at least 7.5 seconds before the page fully loads. Then we still have to add the time needed to download and process the files themselves.
Is it all a TCP game?
TCP is the protocol we use to connect to a web server before sending it our request. It's like a phone call: you never blurt out what you called to say, you first say "hello", wait for your peer to say "hello", then ask a polite "what's up?" and wait for an answer (which you probably won't even listen to, but you wait for it anyway). In the Internet's life, this courtesy is called TCP. TCP sends a "SYN" in place of "hello" and gets a "SYN-ACK" back as the answer. The more latency you have, the longer this initialization takes.
DNS before TCP ??
That's not all. Before saying "hello" to your friend on the phone, you have to dial his phone number. On the Internet, that's the IP address. Either your browser performed a request to the same domain a few seconds earlier and can reuse the result, or it has to perform a DNS request. This request may be answered from your ISP's cache (cheap) or need to be sent to a distant server (expensive if the domain name server is far away).
For each TCP connection, you will have to wait again, a time depending on the latency: latency to your ISP if you find a result on your ISP cache, latency to the DNS if not.
No, that's not the end of it: meet the UDP protocol :-)
DNS often uses the UDP protocol. UDP is a simple "quick and cheap" request/response protocol, with no need to establish a connection beforehand like TCP does. However, when a response weighs more than 512 bytes, the server may either send a larger response (per the EDNS specification) or ask the client to retry over TCP. Large DNS responses used to be rare, but DNS now has a security extension (DNSSEC) that requires larger responses.
The problem is that many badly configured firewalls still block DNS responses of more than 512 bytes. A few others block the UDP fragmentation needed for responses of more than 1.5 KB (UDP fragmentation is a way to send the response in multiple UDP packets, as each one is limited in size). In short: you may well see a UDP DNS request first, then a fallback to TCP.
If that happens, the client first asks over UDP, the server answers "please retry over TCP", the client opens a TCP connection (SYN + SYN-ACK) and then asks again. In place of one round-trip time, we now have three.
Finally ??
A simple 10 KB image will need 3 round trips. jQuery (77 KB) will need about 7. With a round-trip time of 60 to 100 ms, it is easy to see that latency matters far more than anything else.
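Those numbers come from TCP slow start: the server can only send a couple of packets per round trip at first, doubling each time. A rough sketch of the estimate (segment size and initial window are typical assumed values, and the TCP handshake is not counted):

// Roughly estimate how many data round trips a download of `sizeKB` needs.
function estimateRoundTrips(sizeKB) {
    var segments = Math.ceil(sizeKB / 1.4); // ~1.4 KB per TCP segment
    var cwnd = 2;                            // assumed initial congestion window, in segments
    var trips = 0;
    while (segments > 0) {
        segments -= cwnd;
        cwnd *= 2;                           // slow start doubles the window each round trip
        trips++;
    }
    return trips;                            // data round trips only
}
// estimateRoundTrips(10) -> 3, estimateRoundTrips(77) -> 5
// add DNS, the TCP handshake and the HTTP request itself on top of these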
So we have seen how slow start and congestion control affect the throughput of a network connection. Each network round trip is limited by how long it takes photons or electrons to get through, and anything we can do to reduce the number of round trips should reduce total page download time, right? Well, it may not be that simple; we only really care about round trips that run end-to-end. Latency has been a problem whenever signals have had to be transmitted over a distance, whether by a rider on a horse or by electrons running through metal; each medium has had its own problems with it.
“It's not about how to achieve your dreams, it's about how to lead your life, ... If you lead your life the right way, the karma will take care of itself, the dreams will come to you.” ― Randy Pausch, The Last Lecture
Sunday, December 26, 2010
Third-party tools that help in optimization
The first name is, of course, YUI Compressor. It minifies both JavaScript and CSS files. The YUI Compressor needs Java to work, so make sure you have a Java runtime installed.
Second is OptiPNG, a PNG optimization tool you can run from the command line. Check out http://www.phpied.com/png-optimization-tools/ and http://optipng.sourceforge.net/
Third is CSSEmbed, a tool to automatically embed images into CSS files as data URIs. It is a very small, simple tool that reads a CSS file, identifies the images referenced within, converts them to data URIs, and outputs the resulting style sheet. The newly created stylesheet is an exact duplicate of the original, complete with comments and indentation intact; the only difference is that all references to image files have been replaced with data URIs. Download it from http://github.com/nzakas/cssembed/
Fourth is JPEG optimization using a tool like jpegtran. It covers the following tasks:
- stripping metadata (metadata is sometimes bulky and useless for web display)
- optimizing Huffman tables
- converting a JPEG to progressive encoding
Read more about how to use jpegtran at http://www.phpied.com/installing-jpegtran-mac-unix-linux/
Another hack to render heavy HTML pages
When an HTML page is loaded, the browser needs to do a lot of work. It has to parse the HTML, build element collections (so things like getElementsByTagName() can work faster), match CSS rules, and so on. And then, finally, it renders all those elements; you may know this process as repaint. Repainting is one of the slowest processes in browsers. One quick solution: we don't need to show all 500 KB of text at once. We can pick the first few sentences and push them to the screen so the user can start reading while the browser parses the rest of the page.
How do we do that?
To make all this large text invisible to the browser, all we have to do is comment it out:
<body>
<!--
<p>Well, LARGE HTML HERE ...</p>
-->
</body>
With the text content commented out, the page parses much more quickly.
So we have the commented-out text; what now? An HTML comment is not just a hidden chunk of markup, it's a DOM node that can easily be accessed. Now we need to find this node and parse its content into a DOM tree:
var elems = document.body.childNodes;
for (var i = 0, il = elems.length; i < il; i++) {
    var el = elems[i];
    if (el.nodeType == 8) { // it's a comment node
        var div = document.createElement('div');
        div.innerHTML = el.nodeValue;
        // now the DIV contains the parsed DOM elements, so we can work with them
        break;
    }
}
Since such plain-text parsing doesn't require the browser to do CSS matching, repainting and the other work it normally does while parsing a page, it is also very fast.
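A small follow-up sketch (assuming the div variable from the snippet above is still in scope, and a placeholder container element with id "content") to move the parsed nodes into the visible page in chunks, so the UI stays responsive:

// Append the parsed nodes a few at a time, yielding to the browser in between.
var container = document.getElementById('content'); // placeholder target element
function appendChunk() {
    var count = 0;
    while (div.firstChild && count < 20) {  // 20 nodes per tick; tune as needed
        container.appendChild(div.firstChild);
        count++;
    }
    if (div.firstChild) {
        setTimeout(appendChunk, 10);        // let the browser repaint between chunks
    }
}
appendChunk();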
Thursday, December 23, 2010
A few more thoughts on script loaders in websites
Last week JS guru Steve Souders (Google) released his ControlJS project. The goal of the project is to give developers the freedom to load JS files and execute them later on the page, for example on a user action.
At shiksha.com we had already applied the same technique. We load heavy dynamic pages in an overlay (modal box) through AJAX, but initially we ran into one problem: if we load a page with AJAX and that page contains inline JS code, that JS code is not executed. So we used a technique (a hack, really) to solve the issue.
We parse all the inline JS and CSS that arrives in script and style tags and eval it later, once we get the AJAX success callback. Here is the code:
function ajax_parseJs(obj)
{
    var scriptTags = obj.getElementsByTagName('SCRIPT');
    var jsCode = '';
    for (var no = 0; no < scriptTags.length; no++) {
        if (scriptTags[no].src) {
            // external script: re-create it so the browser fetches and runs it
            var head = document.getElementsByTagName('head')[0];
            var scriptObj = document.createElement('script');
            scriptObj.setAttribute('type', 'text/javascript');
            scriptObj.setAttribute('src', scriptTags[no].src);
            head.appendChild(scriptObj);
        } else {
            // inline script: collect its text so we can eval it all at once later
            if (navigator.userAgent.indexOf('Opera') >= 0) {
                jsCode = jsCode + scriptTags[no].text + '\n';
            } else {
                jsCode = jsCode + scriptTags[no].innerHTML + '\n';
            }
        }
    }
    if (jsCode) ajax_installScript(jsCode);
}
function evaluateCss(obj)
{
    var cssTags = obj.getElementsByTagName('STYLE');
    var head = document.getElementsByTagName('HEAD')[0];
    // cssTags is a live collection: moving a node out of obj shrinks it,
    // so keep taking the first element until none are left
    while (cssTags.length > 0) {
        head.appendChild(cssTags[0]);
    }
}
function ajax_installScript(script)
{
    if (!script)
        return;
    if (window.execScript) {
        // IE: evaluate the string in the global scope
        window.execScript(script);
    } else {
        // other browsers: a string passed to setTimeout is also evaluated globally
        window.setTimeout(script, 0);
    }
}
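A minimal usage sketch (the URL and the #overlay container are placeholders; any AJAX helper would do, jQuery is shown because the code above already checks for it):

jQuery.ajax({
    url: '/widget.html',                 // placeholder URL of the overlay content
    success: function (html) {
        var overlay = document.getElementById('overlay'); // placeholder modal container
        overlay.innerHTML = html;
        evaluateCss(overlay);            // move <style> blocks into <head>
        ajax_parseJs(overlay);           // collect inline <script> blocks and eval them
    }
});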
So I wondered: can we do the same for script loading? It's not a big deal to load a script and execute it whenever the developer wants. Here is the code:
function loadScript(url, callback){
var script = document.createElement("script");
script.type = "text/javascript";
if (script.readyState){ //IE
script.onreadystatechange = function(){
if (script.readyState == "loaded" ||
script.readyState == "complete"){
script.onreadystatechange = null;
callback();
}
};
} else { //Others
script.onload = function(){
callback();
};
}
script.src = url;
document.getElementsByTagName("head")[0].appendChild(script);
}
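Usage is straightforward (the file name and callback body are placeholders):

loadScript('/js/widget.js', function () {
    // runs only after widget.js has finished loading and executing
    initWidget();  // hypothetical function defined by widget.js
});

ControlJS aims to go one step further and separate downloading from executing; conceptually the API looks like the snippet below (note that script.execute() is not a standard browser method).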
var script = document.createElement("script");
script.type = "text/cache";
script.src = "foo.js";
script.onload = function(){
//script has been loaded but not executed
};
document.body.insertBefore(script, document.body.firstChild);
//at some point later
script.execute();
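A rough do-it-yourself version of the same idea, assuming the script is served from the same origin so it can be fetched as text (a sketch, not production code):

// Download the script source now, execute it later on demand.
var cachedSource = null;
var xhr = new XMLHttpRequest();
xhr.open('GET', 'foo.js', true);
xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
        cachedSource = xhr.responseText;   // downloaded but not executed yet
    }
};
xhr.send(null);

// ...later, e.g. on a user action:
function executeCachedScript() {
    if (!cachedSource) { return; }
    var s = document.createElement('script');
    s.text = cachedSource;                 // runs as soon as it is appended
    document.getElementsByTagName('head')[0].appendChild(s);
}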
Hope the above techniques are clear and you don't have any doubts. If you still have a question, write to me at tussion @ ymail dot com
Happy coding ... Enjoy XMAS holidays ...
Friday, December 17, 2010
W3C DOM vs. innerHTML: which is slower?
We can check by running a test script.
<div id="writeroot" style="width:1px; height:1px; overflow:hidden;"></div>
<script>
function removeTable() {
document.getElementById('writeroot').innerHTML = '';
}
</script>
W3CDOM 1: Create all elements as needed:-
removeTable();
var x = document.createElement('table');
var y = x.appendChild(document.createElement('tbody'));
for (var i = 0; i < 20; i++) {
var z = y.appendChild(document.createElement('tr'));
for (var j = 0; j < 20; j++) {
var a = z.appendChild(document.createElement('td'));
a.appendChild(document.createTextNode('*'));
}
}
document.getElementById('writeroot').appendChild(x);
Result: about 55% slower than the others.
W3CDOM 2: Create elements once, then clone:-
removeTable();
var x = document.createElement('table');
var y = x.appendChild(document.createElement('tbody'));
var tr = document.createElement('tr');
var td = document.createElement('td');
var ast = document.createTextNode('*');
for (var i = 0; i < 20; i++) {
var z = y.appendChild(tr.cloneNode(true));
for (var j = 0; j < 20; j++) {
var a = z.appendChild(td.cloneNode(true));
a.appendChild(ast.cloneNode(true));
}
}
document.getElementById('writeroot').appendChild(x);
Result: about 36% slower than the others.
tableMethods:-
removeTable();
var x = document.createElement('table');
var y = x.appendChild(document.createElement('tbody'));
for (var i = 0; i < 20; i++) {
var z = y.insertRow(0);
for (var j = 0; j < 20; j++) {
var a = z.insertCell(0).appendChild(document.createTextNode('*'));
}
}
document.getElementById('writeroot').appendChild(x);
Result: about 50% slower than the others.
INNERHTML 1: concatenate one string:-
removeTable();
var string = '<table><tbody>';
for (var i = 0; i < 20; i++) {
string += '<tr>';
for (var j = 0; j < 20; j++) {
string += '<td>*</td>';
}
string += '</tr>';
}
string += '</tbody></table>';
document.getElementById('writeroot').innerHTML = string;
Result: the fastest of all, by about 5%.
INNERHTML 2: push and join:-
removeTable();
var string = new Array();
string.push('<table><tbody>');
for (var i = 0; i < 20; i++) {
string.push('<tr>');
for (var j = 0; j < 20; j++) {
string.push('<td>*</td>');
}
string.push('</tr>');
}
string.push('</tbody></table>');
var writestring = string.join('');
document.getElementById('writeroot').innerHTML = writestring;
Result: about 2% slower than the fastest test.
Actual results are as follows (lower is faster; the last column is the number of test runs):

Browser            innerHTML 1   innerHTML 2   W3CDOM 1   W3CDOM 2   tableMethods   Tests
Chrome 8.0.552     197           194           617        647        634            10
Chrome 9.0.597     175           180           349        362        398            5
Chrome 10.0.612    202           207           743        718        684            3
Firefox 3.6.11     93            90            81         71         79             1
Firefox 3.6.12     208           204           177        150        172            4
Firefox 3.6.13     115           112           105        86         106            3
Firefox 4.0b7      786           696           508        409        378            7
IE 6.0             20            84            18         19         10             8
IE 8.0             240           234           43         47         47             9
iPhone 4.2.1       18            19            47         49         49             1
Opera 11.00        772           752           347        491        383            1
Safari 5.0.2       190           196           616        607        589            1
Safari 5.0.3       209           219           623        595        584            11
So in the end innerHTML wins and is the fastest overall, although Firefox and IE actually handle the W3C DOM methods better than innerHTML.
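For reference, a minimal harness along these lines (assuming each snippet above is wrapped in its own function; the function name in the example is hypothetical) could look like:

function timeIt(label, buildTable, runs) {
    var start = new Date().getTime();
    for (var i = 0; i < runs; i++) {
        buildTable();    // one of the five table-building snippets above
        removeTable();   // reset between runs
    }
    var elapsed = new Date().getTime() - start;
    alert(label + ': ' + elapsed + ' ms for ' + runs + ' runs');
}
// e.g. timeIt('innerHTML 1', buildWithInnerHTML, 100);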
<div id="writeroot" style="width:1px; height:1px; overflow:hidden;"></div>
<script>
function removeTable() {
document.getElementById('writeroot').innerHTML = '';
}
</script>
W3CDOM 1: Create all elements as needed:-
removeTable();
var x = document.createElement('table');
var y = x.appendChild(document.createElement('tbody'));
for (var i = 0; i < 20; i++) {
var z = y.appendChild(document.createElement('tr'));
for (var j = 0; j < 20; j++) {
var a = z.appendChild(document.createElement('td'));
a.appendChild(document.createTextNode('*'));
}
}
document.getElementById('writeroot').appendChild(x);
Result is 55 % slower as compare others.
W3CDOM 2: Create elements once, then clone:-
removeTable();
var x = document.createElement('table');
var y = x.appendChild(document.createElement('tbody'));
var tr = document.createElement('tr');
var td = document.createElement('td');
var ast = document.createTextNode('*');
for (var i = 0; i < 20; i++) {
var z = y.appendChild(tr.cloneNode(true));
for (var j = 0; j < 20; j++) {
var a = z.appendChild(td.cloneNode(true));
a.appendChild(ast.cloneNode(true));
}
}
document.getElementById('writeroot').appendChild(x);
Result is 36 % slower as compare others.
removeTable();
var x = document.createElement('table');
var y = x.appendChild(document.createElement('tbody'));
for (var i = 0; i < 20; i++) {
var z = y.insertRow(0);
for (var j = 0; j < 20; j++) {
var a = z.insertCell(0).appendChild(document.createTextNode('*'));
}
}
document.getElementById('writeroot').appendChild(x);
Result is 50 % slower as compare others.
INNERHTML 1: concatenate one string:-
removeTable();
var string = '<table><tbody>';
for (var i = 0; i < 20; i++) {
string += '<tr>';
for (var j = 0; j < 20; j++) {
string += '<td>*</td>';
}
string += '</tr>';
}
string += '</tbody></table>';
document.getElementById('writeroot').innerHTML = string;
Result is 5 % fastest as compare others.
INNERHTML 2: push and join:-
removeTable();
var string = new Array();
string.push('<table><tbody>');
for (var i = 0; i < 20; i++) {
string.push('<tr>');
for (var j = 0; j < 20; j++) {
string.push('<td>*</td>');
}
string.push('</tr>');
}
string.push('</tbody></table>');
var writestring = string.join('');
document.getElementById('writeroot').innerHTML = writestring;
Result is 2% slower than others tests.
Actual results are as follows.
Columns as as follows. innerHTML1,innerHTML2,W3CDOM 1,W3CDOM 2,tableMethods and No of Tests
Chrome 8.0.552 197 194 617 647 634 10
Chrome 9.0.597 175 180 349 362 398 5
Chrome 10.0.612 202 207 743 718 684 3
Firefox 3.6.11 93 90 81 71 79 1
Firefox 3.6.12 208 204 177 150 172 4
Firefox 3.6.13 115 112 105 86 106 3
Firefox Beta
4.0b7 786 696 508 409 378 7
IE 6.0 20 84 18 19 10 8
IE 8.0 240 234 43 47 47 9
iPhone 4.2.1 18 19 47 49 49 1
Opera 11.00 772 752 347 491 383 1
Safari 5.0.2 190 196 616 607 589 1
Safari 5.0.3 209 219 623 595 584 11
So Finally inner HTML won and it's fastest among all methods.
Thursday, December 16, 2010
Difference among all these Load/Utilization/Scalability/Throughput/Concurrency/Capacity?
X = throughput (tasks per unit of time), R = response time (time per task)
Load:- how much work is incoming? or, how big is the back log?
Utilization:- how much of a system's resources are used?
Scalability:- what is the relationship between utilization and R?
Throughput:- X - how many tasks can be done per unit of time?
Concurrency:- how many tasks can we do at once?
Capacity:- how big can X go without making other things unacceptable?
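These quantities are tied together: by Little's law, average concurrency equals throughput times response time (N = X × R). A quick worked example with assumed numbers: if a system completes X = 200 tasks per second with an average response time R = 50 ms (0.05 s), then on average N = 200 × 0.05 = 10 tasks are in flight at once; if contention pushes R up to 200 ms at the same throughput, concurrency rises to 40 and the backlog (load) starts to grow.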
Sunday, December 12, 2010
More about frontend optimization
Generally, people think that because they know Yahoo's 14 rules, optimizing page rendering is easy and the one and only solution is to Ajaxify the page. But that doesn't mean every website will render fast. As I've seen, and now strongly believe, every website has its own unique fixes for its speed issues, and people mostly fail to identify where the bottleneck is. Recently I looked at what Facebook did.
In Facebook's case, it's not easy to handle 500M users when the average time per user is more than 5 hours per month (Google and Yahoo see less than 2). FB has a complex frontend infrastructure and runs two JS daemons to handle real-time updates and cache consistency. The main tasks are incremental updates, in-page writes and cross-page writes. Every state-changing operation is recorded and sent to the backend; when the backend detects a write, it sends a signal to the client to invalidate the cache. So a user usually browses FB across three versions of a page: 1. the cached version, 2. the state-changing version, 3. the restored version.
This is one frontend caching solution FB uses; the other big ones are "BigPipe" and "Quickling", very advanced techniques that I still need to understand.
What I do know so far is:
1. Use the delta between network latency and page rendering time to do other work.
2. Try to reduce the gap between DOMContentLoaded and the window load event.
3. Use AJAX, but in a smart way: Time-to-Interact should be very low. Otherwise the page may fully render but be frozen, and the user can't interact while JavaScript is being fetched, parsed and executed.
Performance is hard, so think twice before moving ahead. Now to AJAX: everybody knows how to use AJAX, but very few people know the AJAX design pattern. :P
An AJAX call goes through the following steps:
1. Round-trip time: the time between when the browser sends the request and when it receives the response.
2. Parse time: next, the response returned from the server has to be parsed.
3. JavaScript/CSS download time: [I trust you are smart enough to download a widget's JS and CSS lazily ;)] each response can indicate that it needs more JavaScript/CSS before the content can be used.
4. Render time: the time it takes to actually change the display, typically via innerHTML.
So I would like to know: what solutions do you have for these four issues? How do you optimize these areas? As a starting point, see the measurement sketch below.
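Here is a minimal sketch (the URL and the render callback are placeholders) that times the phases around an XMLHttpRequest:

function timedAjax(url, render) {
    var tStart = new Date().getTime();
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url, true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState !== 4) { return; }
        var tResponse = new Date().getTime();          // 1. round trip ends here
        var data = eval('(' + xhr.responseText + ')'); // 2. parse (assumes a JSON payload; use a real JSON parser in production)
        var tParsed = new Date().getTime();
        render(data);                                  // 4. render (innerHTML updates etc.)
        var tRendered = new Date().getTime();
        // 3. extra JS/CSS downloads would be timed inside render() if it lazy-loads them
        alert('round trip: ' + (tResponse - tStart) + ' ms, ' +
              'parse: ' + (tParsed - tResponse) + ' ms, ' +
              'render: ' + (tRendered - tParsed) + ' ms');
    };
    xhr.send(null);
}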
Best of Luck !!! Happy Coding !!!
Friday, December 3, 2010
Serve Pre-Generated Static Files Instead Of Dynamic Pages
Well, I'm still not sure, but I read a few articles about how to scale site performance without much extra effort, and I think this is the cheapest approach I've seen. We know that static files have the advantage of being very fast to serve: read from disk and display. Simple and fast, especially when caching proxies are used. The issues are how you bulk-generate the initial files, how you serve them, and how you keep changed files up to date, especially regenerating static pages when changes occur. When a new entity is added to the system, hundreds of pages could be impacted, and the affected static pages would need to be regenerated. It's a very pragmatic solution and rock solid in operation. See more detail at http://eventseer.net/p/thomas_brox_roest/whiteboardentry/13/
Tuesday, November 23, 2010
abc of HTTP cookie .. detailed look
We know cookies well: how to create them and how they work. Still, I've seen some gaps in understanding them properly; maybe I'm just not that confident about cookies myself. :P
A web server specifies a cookie to be stored by sending an HTTP header called Set-Cookie. The format of the Set-Cookie header is a string as follows (parts in square brackets are optional):
Set-Cookie: value[; expires=date][; domain=domain][; path=path][; secure]
The first part of the header, the value, is typically a string in the format name=value. Indeed, the original specification indicates that this is the format to use but browsers do no such validation on cookie values. You can, in fact, specify a string without an equals sign and it will be stored just the same. Still, the most common usage is to specify a cookie value as name=value (and most interfaces support this exclusively).
When a cookie is present, and the optional rules allow, the cookie value is sent to the server with each subsequent request. The cookie value is stored in an HTTP header called Cookie and contains just the cookie value without any of the other options. Such as:
Cookie: value
If there are multiple cookies for the given request, then they are separated by a semicolon and space, such as:
Cookie: value1; value2; name1=value1
The next option is domain, which indicates the domain(s) for which the cookie should be sent. Another way to control when the Cookie header will be sent is to specify the path option. Similar to the domain option, path indicates a URL path that must exist in the requested resource before sending the Cookie header. When a cookie is created with an expiration date, that expiration date relates to the cookie identified by name-domain-path-secure. In order to change the expiration date of a cookie, you must specify the exact same tuple.
Keep in mind that the expiration date is checked against the system time on the computer that is running the browser. There is no way to verify that the system time is in sync with the server time and so errors may occur when there is a discrepancy between the system time and the server time.
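For illustration, here is how the same options look when a cookie is set from JavaScript via document.cookie (the cookie name and values are just placeholders):

// Set a cookie named "lang" that expires in 7 days, for the whole site.
var expires = new Date();
expires.setDate(expires.getDate() + 7);
document.cookie = 'lang=en; expires=' + expires.toUTCString() +
                  '; path=/'; // add "; domain=.example.com" or "; secure" as needed

// Reading cookies: document.cookie returns "name1=value1; name2=value2"
var pairs = document.cookie.split('; ');
for (var i = 0; i < pairs.length; i++) {
    var parts = pairs[i].split('=');
    if (parts[0] === 'lang') {
        alert('lang = ' + parts[1]);
    }
}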
There are a few reasons why a cookie might be automatically removed by the browser:
Session cookies are removed when the session is over (browser is closed).
Persistent cookies are removed when the expiration date and time have been reached.
If the browser’s cookie limit is reached, then cookies will be removed to make room for the most recently created cookie.
Cookie restrictions:-
There are a number of restrictions placed on cookies in order to prevent abuse and protect both the browser and the server from detrimental effects. There are two types of restrictions: number of cookies and total cookie size. The original specification placed a limit of 20 cookies per domain, which was followed by early browsers and continued up through Internet Explorer 7. During one of Microsoft’s updates, they increased the cookie limit in IE 7 to 50 cookies. IE 8 has a maximum of 50 cookies per domain as well. Firefox also has a limit of 50 cookies while Opera has a limit of 30 cookies. Safari and Chrome have no limit on the number of cookies per domain.
The maximum size for all cookies sent to the server has remained the same since the original cookie specification: 4 KB. Anything over that limit is truncated and won’t be sent to the server.
Monday, November 22, 2010
Web site Front-end optimization headaches
I have seen that typically 80-90% of page response time comes from things other than fetching the HTML of the page: fetching CSS, JavaScript, images and banners. That makes the front end the number one place to focus your optimization. Here are a few points, especially for front-end optimization; I hope they help.
1. Try to split the initial payload: first load only the JS required to render the page, then the other scripts and assets.
Lazy loading, AJAX loading, loading scripts in parallel, loading scripts after window onload: these are different things and you should know about each of them. When the browser starts downloading an external script, it won't start any additional downloads until the script has been completely downloaded, parsed, and executed, so load external scripts asynchronously. We also need to take care of inline JS; there we can use callback functions, timers, the window onload event, and so on. Several techniques exist for loading scripts without blocking (dynamic script DOM elements, the defer attribute, XHR eval, script-in-iframe, etc.).
2. Flushing the Document Early:-
As the server parses the PHP page, all output is written to STDOUT. Rather than being sent immediately, one character, word, or line at a time, the output is queued up and sent to the browser in larger chunks. This is more efficient because it results in fewer packets being sent from the server to the browser. Each packet sent incurs some network latency, so it's usually better to send a small number of large packets rather than a large number of small packets. Calling flush() causes anything queued up in STDOUT to be sent immediately. However, simply flushing STDOUT isn't enough to achieve a real speedup; the call to flush has to be made in the right place. The idea is to flush just before a long-running piece of work, and the page can be divided module-wise for this: header, footer, sidebar, widgets, etc.
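The post above talks about PHP's flush(); purely as an illustration of the same idea in JavaScript, a Node.js-style sketch (markup and timings are placeholders) of flushing the head early might look like this:

var http = require('http');
http.createServer(function (req, res) {
    res.writeHead(200, { 'Content-Type': 'text/html' });
    // Send the <head> immediately so the browser can start fetching CSS/JS
    res.write('<html><head><link rel="stylesheet" href="/style.css"></head><body>');
    // Simulate the slow part of page generation (database calls, API calls, ...)
    setTimeout(function () {
        res.end('<p>rest of the page</p></body></html>');
    }, 200);
}).listen(8080);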
3. Repaint and reflow are costly:
A repaint occurs when a visual change does not require recalculating layout: changes to visibility, colors (text/background), background images, etc.
A reflow occurs when a visual change requires a layout change: the initial page load, a browser resize, DOM structure changes, layout style changes, or when layout information is retrieved (e.g. reading offsetWidth/offsetHeight).
4. Eliminate unnecessary cookies:-
Keep cookie sizes as low as possible to minimize the impact on user response time, and be mindful of setting cookies at the appropriate domain level so other sub-domains are not affected.
5. DOM manipulations are the slowest. Never update the DOM if you can avoid it; if that's not possible, at least do it as infrequently as possible. Batch up your updates to the DOM and apply them later in one go.
6. Clone the DOM node you want to work with. You will then be working with a clone of the real node, and the cloned node doesn't exist in the DOM, so updating it doesn't affect the DOM. When you are done with your manipulations, replace the original node with the cloned node (see the sketch after this list).
7. Note, however, that the swap in the previous point still makes the browser reflow and re-render the affected content. You might get similar benefits by simply hiding the element first, making the changes, and then showing it again.
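A minimal sketch of points 6 and 7 (the element id is a placeholder):

// Point 6: work on a detached clone, then swap it in with a single DOM update.
var list = document.getElementById('results');       // assumed existing element
var clone = list.cloneNode(true);                     // deep clone, not in the DOM
for (var i = 0; i < 100; i++) {
    var li = document.createElement('li');
    li.appendChild(document.createTextNode('item ' + i));
    clone.appendChild(li);                            // no repaint/reflow here
}
list.parentNode.replaceChild(clone, list);            // one reflow for all updates

// Point 7 alternative: hide, mutate, then show again.
// list.style.display = 'none'; ...make changes... list.style.display = '';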
Saturday, November 20, 2010
MOD_PHP or FASTCGI ?
When we load PHP into Apache as a module (using mod_php), each Apache process we run will also contain a PHP interpreter which in turn will load all the compiled in libraries which themselves are not exactly small.
This means that even if the Apache process that just started will only serve images, it will contain a PHP interpreter with all its linked libraries. That in turn means that said Apache process uses a lot of memory and takes some time to start up (because PHP and all the shared libraries it's linked to need to be loaded). Wasted energy if the file that needs to be served is an image or a CSS file.
FastCGI in contrast loads the PHP interpreter into memory, keeps it there and Apache will only use these processes to serve the PHP requests.
That means that all the images and CSS, flashes and whatever other static content we may have can be served by a much smaller Apache process that does not contain a scripting language interpreter and that does not link in a bunch of extra libraries (think libxml, libmysqlclient, and so on).
Even if we only serve pages parsed by PHP - maybe because we process our stylesheets with PHP and do something with the served images - we are theoretically still better off with FastCGI, as Apache will recycle its processes every now and then (though that's configurable) while the FastCGI processes stay around.
And if we go on and need to load-balance the application, FastCGI still provides advantages. In the common load-balancing scenario, we have a reverse proxy or a load balancer and a bunch of backend servers actually doing the work. In that case, if we use FastCGI, the backend servers will be running our PHP application and nothing else: no web server loading an interpreter loading our script, just the interpreter and our script. So we save a whole lot of memory by not loading another web server in the backend (yes, FastCGI works over the network).
Monday, October 25, 2010
Why engineers fail and what makes a great engineer ?
Let's start with what makes a great engineer. Today I saw a presentation about Netflix's company culture; it described how the company goes about hiring (and firing) employees.
I don’t work at Netflix, of course (I work for shiksha.com), but I feel strongly that what makes a great employee and a great engineer is the same regardless of where you work. There are a few things that great engineers always do.
Always do it the right way
One of the challenges of software is to learn how to do it the right way. The right way is different depending upon what you're working on and who you're working for. Junior engineers tend to have the most trouble with this, but it does happen with senior-level people too. There's an "emergency" project, or something that seems so different that it must have its own set of rules. That's bogus.
Good engineers know that the right way applies to all situations and circumstances. If there’s not enough time to do something the right way, then there’s really not enough time to do it. Don’t make compromises, the quality of your work is what ultimately defines you as an engineer. Make sure that all of the code you write is done the right way 100% of the time. Expect excellence from yourself.
Be willing to suffer
This may sound silly, but good engineers are willing to suffer for their work. Show me a great engineer and I’ll show you someone that has, at various points in his or her career, spent days
trying to figure out a problem. Great engineers relish the challenge of a problem that keeps them up day and night, knowing that it must be solved.
Not-so-great engineers call for help at the first sign of trouble. They routinely ask for help whenever something goes wrong rather than trying to fix it themselves. Their favorite line is, “can you look at this?” Great engineers first and foremost want to solve the problem on their own. Problem solving is a skill, a skill that great engineers take seriously.
Good engineers become great engineers by suffering. Suffering means not asking for help unless you absolutely cannot handle the task. Asking for help is a sign of defeat, so ring that bell infrequently lest you draw unwanted attention to yourself. Be willing to suffer. Spend time toiling over the problem. That’s how you learn.
I am not saying that you should never ask for help. I am saying that you should try to accomplish the task on your own first, and if you get stuck, then ask for help. Don’t simply ask for help every time without first trying to solve the problem yourself. Chances are, you’ll find that you could have figured it out on your own once you know the answer.
Never stop learning
Any engineer who claims that they don't need to learn anything new is not someone with whom I'd like to work. In some careers you can get away without learning anything new for years, but not in this one: technology changes too quickly to not pay attention. Your employer is paying you for your expertise, and if that expertise goes stale, you become expendable. In order to be a great engineer you must first admit that you don't know everything, and then you must seek out more knowledge in every way you can.
Identify someone in your current company or organization from which you can learn and attach yourself to him or her. Ask for advice on complex problems to see how they think. Show them solutions you've come up with and ask for a critique. If you can’t identify anyone in your organization that can serve as a mentor, then either you’re not looking hard enough or you’re at the wrong company. If you can’t grow in your current job then it’s time to look for another.
Read blogs. Attend conferences. Go to developer meetups. Great engineers never stop learning.
Share your knowledge
There are some who believe that their sole value is their knowledge, and by sharing that knowledge they therefore make themselves less valuable. Nothing could be farther from the truth. What makes you valuable is not your knowledge, it’s how you make use of your knowledge to create value for your employer. How better to create value from your knowledge than to share it with others?
I've interviewed at companies where hoarding knowledge seemed deeply rooted at the organizational level. In that type of environment, a fierce competition develops between co-workers, and this opens the door to politics and backstabbing. I don't want to work in an organization like that. You can't learn if everyone is keeping information to themselves.
Great engineers want others to know what they know. They aren’t afraid of losing their position because someone else can do the same thing. Great engineers want to see their peers succeed and grow. Organizations rally around people who share information, and as they say in sports, people who make other people on the team better.
Lend a helping hand
Great engineers don’t consider any task to be “beneath” them. Always willing to lend a hand, you can see great engineers helping out junior engineers in between doing their own work. If something has to get done, and no one else is able to do it in time, great engineers volunteer to take on the work. They don’t scoff when asked to help on a project, whether it be small or menial or low-profile. Great engineers are team-focused and therefore are willing to do whatever it takes to help the team. Whether that be writing 1,000 lines of code or editing an image, great engineers jump at the chance to help out.
Take your time
Great engineers aren’t born, they are made. They’re made by following the advice in this post and by hard work. If you’re just starting out, there’s still plenty of time to become a great engineer. Patience is key. You don’t become a great engineer over night. For some it may take a couple years, for others it may take ten. There’s no one keeping score. Strong organizations recognize when someone has the potential to be a great engineer and will guide you along. You prove yourself through your work and how you make your team better. Focus and self-discipline go a long way towards becoming a great software engineer.
Now let's discuss why engineers fail. This typically begins the conversation about the Peter principle. The Peter principle says that you'll keep getting promoted until you finally end up in a job that you can't do. This happens because the higher up in the organizational structure you move, the less your technical skills matter and the more your people skills matter. So whereas you began in a position that played to your strengths, you end up in one that plays to your weaknesses. This is precisely what Berkun found in his study: designers were failing due to factors outside of their design skills. That is why designers fail. It's also why software engineers fail.
software engineers, once they rise high enough in the organizational hierarchy, they need to learn how to work within the organizational structure. Oftentimes, that means gaining the
trust of business partners: engineers need to gain the trust of product managers. Gaining the trust of these business partners means being able to successfully negotiate, compromise, and work towards meeting a common goal without alienating people through your actions and speech. This is typically where people falter in their careers.
This year I’ve had to learn how to play the organizational game. I can honestly say it’s been far more challenging than anything I’ve done before. Dealing with people is much more difficult than dealing with technology, that’s for sure. You need to understand what each person responds to in terms of approach. Some people will easily cave when pressure is applied, others need to be convinced through logical argument while another set may require emotional persuasion. And of course, all of this must be done while making sure that all of these people still respect you and don’t feel manipulated.
Fortunately, my interest and research in social interaction has really helped me thus far. Understanding what drives people and how to communicate effectively have been key for me. If you have aspirations of moving up in your company, then it would behoove you to also start researching these topics. The only way to really get ahead in business is a better understanding of people. Hard-skill jobs such as engineering and design are commodities that can be outsourced if necessary; soft-skill jobs requiring you to work with and inspire others will always be in high demand and, as a bonus, can never be outsourced. Mastering people skills ensures employability and, more importantly, ensures that you won't fail.
original post @ http://www.nczonline.net
I don’t work at Netflix, of course (I work for shiksha.com), but I feel strongly that what makes a great employee and a great engineer is the same regardless of where you work. There are a few things that great engineers always do.
Always do it the right way
One of the challenges of software is to learn how to do it the right way. The right way is different depending upon what you’re working on and who you’re working for.Junior engineers tend to have the most trouble with this, but it does happen with senior-level people too. There’s an “emergency” project, or something that seems so different that it must have its own set of rules. That’s bogus.
Good engineers know that the right way applies to all situations and circumstances. If there’s not enough time to do something the right way, then there’s really not enough time to do it. Don’t make compromises, the quality of your work is what ultimately defines you as an engineer. Make sure that all of the code you write is done the right way 100% of the time. Expect excellence from yourself.
Be willing to suffer
This may sound silly, but good engineers are willing to suffer for their work. Show me a great engineer and I’ll show you someone that has, at various points in his or her career, spent days
trying to figure out a problem.Great engineers relish the challenge of a problem that keeps them up day and night, and knowing that it must be solved.
Not-so-great engineers call for help at the first sign of trouble. They routinely ask for help whenever something goes wrong rather than trying to fix it themselves. Their favorite line is, “can you look at this?” Great engineers first and foremost want to solve the problem on their own. Problem solving is a skill, a skill that great engineers take seriously.
Good engineers become great engineers by suffering. Suffering means not asking for help unless you absolutely cannot handle the task. Asking for help is a sign of defeat, so ring that bell infrequently lest you draw unwanted attention to yourself. Be willing to suffer. Spend time toiling over the problem. That’s how you learn.
I am not saying that you should never ask for help. I am saying that you should try to accomplish the task on your own first, and if you get stuck, then ask for help. Don’t simply ask for help every time without first trying to solve the problem yourself. Chances are, you’ll find that you could have figured it out on your own once you know the answer.
Never stop learning
Any engineer who claims that they don’t need to learn anything new is not someone with whom I’d like to work. In some careers, you can get away without learning anything new for years; technology changes too quickly to not pay attention. Your employer is paying you for your expertise and if that expertise goes stale, you become expendable. In order to be a great engineer you must first admit that you don’t know everything, and then you must seek out more knowledge in every way you can.
Identify someone in your current company or organization from which you can learn and attach yourself to him or her. Ask for advice on complex problems to see how they think. Show them solutions you've come up with and ask for a critique. If you can’t identify anyone in your organization that can serve as a mentor, then either you’re not looking hard enough or you’re at the wrong company. If you can’t grow in your current job then it’s time to look for another.
Read blogs. Attend conferences. Go to developer meetups. Great engineers never stop learning.
Share your knowledge
There are some who believe that their sole value is their knowledge, and by sharing that knowledge they therefore make themselves less valuable. Nothing could be farther from the truth. What makes you valuable is not your knowledge, it’s how you make use of your knowledge to create value for your employer. How better to create value from your knowledge than to share it with others?
I’ve interviewed at companies where hording knowledge seemed deeply-rooted at the organizational level. In that type of environment, a fierce competition develops between co-workers, and this opens the door to politics and backstabbing. I don’t want to work in an organization like that. You can’t learn if everyone is keeping information to themselves.
Great engineers want others to know what they know. They aren’t afraid of losing their position because someone else can do the same thing. Great engineers want to see their peers succeed and grow. Organizations rally around people who share information, and as they say in sports, people who make other people on the team better.
Lend a helping hand
Great engineers don’t consider any task to be “beneath” them. Always willing to lend a hand, you can see great engineers helping out junior engineers in between doing their own work. If something has to get done, and no one else is able to do it in time, great engineers volunteer to take on the work. They don’t scoff when asked to help on a project, whether it be small or menial or low-profile. Great engineers are team-focused and therefore are willing to do whatever it takes to help the team. Whether that be writing 1,000 lines of code or editing an image, great engineers jump at the chance to help out.
Take your time
Great engineers aren’t born, they are made. They’re made by following the advice in this post and by hard work. If you’re just starting out, there’s still plenty of time to become a great engineer. Patience is key. You don’t become a great engineer over night. For some it may take a couple years, for others it may take ten. There’s no one keeping score. Strong organizations recognize when someone has the potential to be a great engineer and will guide you along. You prove yourself through your work and how you make your team better. Focus and self-discipline go a long way towards becoming a great software engineer.
Discuss now why engineers fail ? This typically begins the conversation about the Peter principle. The Peter principle says that you’ll keep getting promoted until you finally end up in a job that you can’t do. This happens because the higher up in the organizational structure you move, the less your technical skills matter and the more your people skills matter. So whereas you began in a position that played to your strengths, you end up in one that plays to your weakenesses. This is precisely what Berkun found in his study, that designers were failing due to factors outside of their design skills. That is why designers fail. It’s also why software engineers fail.
Once software engineers rise high enough in the organizational hierarchy, they need to learn how to work within the organizational structure. Oftentimes, that means gaining the trust of business partners: engineers need to gain the trust of product managers. Gaining the trust of these business partners means being able to successfully negotiate, compromise, and work towards meeting a common goal without alienating people through your actions and speech. This is typically where people falter in their careers.
This year I’ve had to learn how to play the organizational game. I can honestly say it’s been far more challenging than anything I’ve done before. Dealing with people is much more difficult than dealing with technology, that’s for sure. You need to understand what each person responds to in terms of approach. Some people will easily cave when pressure is applied, others need to be convinced through logical argument while another set may require emotional persuasion. And of course, all of this must be done while making sure that all of these people still respect you and don’t feel manipulated.
Fortunately, my interest and research in social interaction has really helped me thus far. Understanding what drives people and how to communicate effectively has been key for me. If you have aspirations of moving up in your company, then it would behoove you to also start researching these topics. The only way to really get ahead in business is a better understanding of people. Hard-skill jobs such as engineering and design are commodities that can easily be outsourced if necessary; soft-skill jobs requiring you to work with and inspire others will always be in high demand and, as a bonus, can never be outsourced. Mastering people skills ensures employability and, more importantly, ensures that you won’t fail.
original post @ http://www.nczonline.net
Sunday, October 24, 2010
Why use OAuth? What are the benefits?
OAuth is an open protocol that allows secure API authorization in a simple, standard way from desktop and web applications. In short, it means that a user of your service can give you limited access to a third-party account of theirs. OAuth is often described as a valet key that your users can give you to access their accounts on other services. For example, a user of Flickr (the service provider) could give Snapfish (the consumer) read-only access to their Flickr account. This lets Snapfish access photos in the user's Flickr account so they can order prints.
It's all in the tokens
How does this happen without asking the user to give up their Flickr password? The flow starts with Snapfish obtaining a consumer key and secret and using them to generate an authorization link to Flickr. Once the user follows the authorization link, they are asked to log in on Flickr's site. Once logged in, they can choose to grant Snapfish access to their Flickr account. Flickr then marks the request token as having been authorized by the user. Snapfish uses the request token to obtain an access token, which can then be used to make requests to Flickr on behalf of the user (Consumer = Snapfish, Service Provider = Flickr).
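As a rough illustration, here is a minimal sketch of the consumer side of that three-legged flow using PHP's PECL OAuth extension. The library choice is mine for the example, and the Flickr-style endpoint URLs and keys are placeholders, not verified values:

```php
<?php
// Sketch of the consumer (Snapfish-style) side of the OAuth 1.0 flow using
// the PECL OAuth extension. Keys, callback and endpoint URLs are placeholders.
$consumer = new OAuth('CONSUMER_KEY', 'CONSUMER_SECRET',
                      OAUTH_SIG_METHOD_HMACSHA1, OAUTH_AUTH_TYPE_AUTHORIZATION);

// Step 1: get an unauthorized request token from the service provider.
$req = $consumer->getRequestToken(
    'https://www.flickr.com/services/oauth/request_token',
    'https://consumer.example.com/callback'
);

// Step 2: send the user to the provider to log in and authorize the token.
header('Location: https://www.flickr.com/services/oauth/authorize?oauth_token='
       . $req['oauth_token']);

// Steps 3-4 happen later, in the callback handler, after the user authorizes
// (in real code the request token/secret would be kept in the session).
$consumer->setToken($req['oauth_token'], $req['oauth_token_secret']);
$access = $consumer->getAccessToken(
    'https://www.flickr.com/services/oauth/access_token',
    null,
    $_GET['oauth_verifier']          // sent back by the provider on the callback
);

// The access token can now be used to call the API on the user's behalf.
$consumer->setToken($access['oauth_token'], $access['oauth_token_secret']);
$consumer->fetch('https://api.flickr.com/services/rest?method=flickr.test.login');
$response = $consumer->getLastResponse();
```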
Generating a valid OAuth request
It turns out that generating an OAuth request is very simple but debugging it is a pain. Every OAuth request contains certain parameters. These include:
- oauth_consumer_key
- oauth_token
- oauth_nonce
- oauth_timestamp
- oauth_signature_method
- oauth_version
- oauth_signature
These can be passed in as GET or POST parameters or in the Authorization header. You'll most likely be passing in additional parameters based on the API you're accessing. I think that's enough to understand OAuth and why it's used so frequently. More details are available at http://oauth.net/documentation/getting-started
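To see where oauth_signature actually comes from, here is a hand-rolled sketch of HMAC-SHA1 signing in PHP. The keys, token and URL are made-up placeholders, edge cases (such as duplicate parameter names) are ignored, and in practice you would rely on a library like pecl/oauth rather than rolling this yourself:

```php
<?php
// Sketch: compute oauth_signature for an OAuth 1.0 request with HMAC-SHA1.
function oauth_sign($method, $url, array $params, $consumerSecret, $tokenSecret = '')
{
    // 1. Sort the parameters by name and percent-encode them (RFC 3986).
    ksort($params);
    $pairs = array();
    foreach ($params as $k => $v) {
        $pairs[] = rawurlencode($k) . '=' . rawurlencode($v);
    }

    // 2. Build the signature base string: METHOD & url & sorted-params.
    $base = strtoupper($method) . '&'
          . rawurlencode($url) . '&'
          . rawurlencode(implode('&', $pairs));

    // 3. The signing key is consumer secret + '&' + token secret.
    $key = rawurlencode($consumerSecret) . '&' . rawurlencode($tokenSecret);

    // 4. HMAC-SHA1, base64-encoded.
    return base64_encode(hash_hmac('sha1', $base, $key, true));
}

// The standard parameters listed above, with placeholder values.
$params = array(
    'oauth_consumer_key'     => 'your_consumer_key',
    'oauth_token'            => 'user_access_token',
    'oauth_nonce'            => md5(uniqid(mt_rand(), true)),
    'oauth_timestamp'        => time(),
    'oauth_signature_method' => 'HMAC-SHA1',
    'oauth_version'          => '1.0',
    // ...plus any API-specific parameters
);

$params['oauth_signature'] = oauth_sign(
    'GET', 'https://api.flickr.com/services/rest', $params,
    'your_consumer_secret', 'user_token_secret'
);
```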
Friday, October 22, 2010
PHP Application Framework Battle ... CodeIgniter vs. Symfony
I'm familiar with many of the PHP frameworks, like Symfony, Zend, CakePHP and CodeIgniter, and the last one I think is good enough, especially for building rich and large-scale web applications, is Yii: http://www.yiiframework.com/features/ Finally, I'd like to share a funny framework built in PHP whose source code you can even tweet :P check out this one http://twitto.org/ :-)
We have seen on the web that there are so many solutions, yet I notice that people always ask which one is good. Does a framework exist that does all jobs well? Usually my first answer is that your question is a bit like going to the hardware store and having a conversation like this:
You: I'd like to buy some tools.
Staff Member: Ok, great. What are you building?
You: Don't know yet. But I'm sure I'll need some tools.
Second, obviously there are many metrics of what "good" can possibly mean: disk size on the server, amount of code generated for the client, difficulty of installing and configuring it on the server, etc.
There is absolutely no point in solving a problem until you have a problem. Just code vanilla PHP until you decide some particular task is too hard/messy/etc. and a framework will actually help you. This may go against the "must-have-framework" crowd, but honestly I think for trivial tasks you're typically better off rolling your own.
So again the ball is in your court: what are your problems, and why do you need a framework? Would you like a fancy Web 2.0 site? Or is scalability your major concern as your site's traffic grows? The answer usually arrives in the form of a big, complex MVC framework with plenty of layers that abstract away your database, your HTML, your JavaScript and, in the end, your application itself. If it is a really good framework it will provide a dozen things you'll never need. I am obviously not a fan of such frameworks. I like stuff I can understand in an instant, both because it lets me be productive right away and because six months from now, when I come back to fix something, I will again only need an instant to figure out what is going on.
Why MVC? I don't want to confuse you, but I've found it's up to you how scalable and modular you want to build your application, along with security and all the other features you want to apply. See my previous posts:
http://ravirajsblog.blogspot.com/2008/12/world-of-object-oriented-programming.html
http://ravirajsblog.blogspot.com/2008/12/principles-of-mvc-design-pattern.html
http://ravirajsblog.blogspot.com/2008/12/3-tier-architecture.html
I like MVC, but just make sure you avoid the temptation of creating a single monolithic controller. A web application is by its very nature a series of small, discrete requests. If you send all of your requests through a single controller on a single machine, you have just defeated this very important architecture. Discreteness gives you scalability and modularity.
You can break large problems up into a series of very small and modular solutions and you can deploy these across as many servers as you like. You need to tie them together to some extent most likely through some backend datastore, but keep them as separate as possible. This means you want your views and controllers very close to each other and you want to keep your controllers as small as possible.
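As a concrete, hypothetical example of keeping controllers small and views right next to them, one discrete request could be handled like this (db.php, get_db() and the users table are made up for the sketch):

```php
<?php
// profile.php -- one small, discrete controller handling one request type.
require 'db.php';                                  // assumed to provide get_db(): PDO

$id   = filter_input(INPUT_GET, 'id', FILTER_VALIDATE_INT);
$stmt = get_db()->prepare('SELECT name, email FROM users WHERE id = ?');
$stmt->execute(array($id));
$user = $stmt->fetch(PDO::FETCH_ASSOC);

include 'views/profile.html.php';                  // the view lives right next door
```

```php
<?php /* views/profile.html.php -- HTML that looks like HTML */ ?>
<h1><?php echo htmlspecialchars($user['name']); ?></h1>
<p>Contact: <?php echo htmlspecialchars($user['email']); ?></p>
```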
So design your goals first.
1. Clean and simple design
* HTML should look like HTML
* Keep the PHP code in the views extremely simple: function calls, simple loops and variable substitutions should be all you need
2. Secure
* Input validation using pecl/filter as a data firewall
* When possible, avoid layers and other complexities to make code easier to audit
3. Fast
* Avoid include_once and require_once
* Use APC and apc_store/apc_fetch for caching data that rarely changes (a minimal sketch follows this list)
* Stay with procedural style unless something is truly an object
* Avoid locks at all costs
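For example, the apc_store/apc_fetch point above could look like the following sketch; get_db() and the countries query are made-up placeholders:

```php
<?php
// Minimal sketch of caching rarely-changing data with APC.
function get_countries()
{
    $countries = apc_fetch('countries_list', $hit);   // $hit is true on a cache hit
    if ($hit) {
        return $countries;
    }

    // Cache miss: load from the database and keep it in APC for an hour.
    $countries = get_db()->query('SELECT id, name FROM countries')
                         ->fetchAll(PDO::FETCH_ASSOC);
    apc_store('countries_list', $countries, 3600);

    return $countries;
}
```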
Now to come to the title of the post, a one-to-one "CodeIgniter vs Symfony" comparison: overall, Symfony is more sophisticated (i.e. harder to use; its codebase is roughly 7 times larger than CodeIgniter's), while CodeIgniter seems geared more toward low-level programming and environments. But yes, Symfony has a more complete feature set, which makes it much stronger and more scalable.
Finally, to summarize, here are a few of my concerns that you should keep in mind while choosing a framework.
1. Choose PHP 5.3 or later
In PHP, an object is destroyed when it goes out of scope. This is normally when the script stops executing or when the function it was created in ends (or when you call unset($my_variable) explicitly). In most situations we should be fine letting the destructor handle closing the DB connection: the garbage collector (GC) will do all the work once the variable goes out of scope or there are zero references to it. To be clear, the GC got a major rework in PHP 5.3.
2. Use PHP's PDO for the model layer: http://php.net/pdo (see the sketch after this list)
3. Use open-source libraries for rich UI work, like YUI: http://developer.yahoo.net/yui/
4. Use PHP's pecl/filter extension to automagically sanitize all user data
5. Use PHP's auto_prepend_file configuration setting to avoid include_once and require_once; see http://php.net/manual/en/ini.core.php#ini.auto-prepend-file
6. Use APC for opcode caching, and its apc_fetch/apc_store APIs for data caching: http://php.net/apc_fetch
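Tying points 1, 2 and 4 together, a small model might look like the following sketch; the DSN, credentials, table and field names are made-up placeholders:

```php
<?php
// A PDO-backed model that relies on the garbage collector to close the
// connection, plus pecl/filter acting as a data firewall in front of it.
class UserModel
{
    private $db;

    public function __construct()
    {
        $this->db = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret');
        $this->db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    }

    public function findByEmail($email)
    {
        $stmt = $this->db->prepare('SELECT id, name FROM users WHERE email = ?');
        $stmt->execute(array($email));
        return $stmt->fetch(PDO::FETCH_ASSOC);
    }

    // No explicit close(): once the object goes out of scope, the GC destroys
    // it and PDO closes the connection for us (point 1).
}

// pecl/filter as a data firewall (point 4): validate before it reaches the model.
$email = filter_input(INPUT_GET, 'email', FILTER_VALIDATE_EMAIL);
if ($email !== false && $email !== null) {
    $model = new UserModel();
    $user  = $model->findByEmail($email);
}
```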
Clean separation of your views, controller logic and backend model logic is easy to do with PHP. Using the above ideas, we should be able to build a clean framework aimed specifically at our requirements instead of trying to refactor a much larger and more complex external framework.
Many frameworks may look very appealing at first glance because they seem to reduce web application development to a couple of trivial steps leading to some code generation and often automatic schema detection, but these same shortcuts are likely to be your bottlenecks as well, since they achieve this simplicity by sacrificing flexibility and performance. Nothing is going to build your application for you, no matter what it promises. You are going to have to build it yourself. Instead of starting by fixing the mistakes in some foreign framework and refactoring all the things that don't apply to your environment, spend your time building a lean and reusable pattern that fits your requirements directly. In the end I think you will find that your homegrown small framework has saved you time and aggravation, and you end up with a better product.