A Tale of Optimization (part 2)

Ronnie Mukherjee 0 Comments

Click here for Part 1

Chapter 3: The Select n + 1 Problem

The Select n + 1 problem describes a situation where you are hitting the database many times unnecessarily, specifically once for each object in a collection of objects. As I mentioned in part 1, I had been struggling to diagnose a performance problem with an operation which involved retrieving several thousand objects from the database. My NHibernate log file showed me that this single NHibernate request was resulting in thousands of database queries being executed, one for each object in my collection. So why was this happening?
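In a query log, the shape of the problem is unmistakable: one query for the parent rows, then one small query per parent. The table and column names below are illustrative, not my actual schema.

```sql
-- 1 query to fetch the n parent rows...
SELECT id, name FROM parent;

-- ...then n further queries, one per parent row retrieved above:
SELECT x, y FROM child WHERE parent_id = 1;
SELECT x, y FROM child WHERE parent_id = 2;
SELECT x, y FROM child WHERE parent_id = 3;
-- and so on: n + 1 queries in total
```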

Below is a simplification of my class structure, starting with my root class, Equipment.

public class Equipment
{
   public virtual IList<Asset> Assets { get; set; }
}

public class Asset
{
   public virtual Location Location { get; set; }
}

public class Location
{
   public virtual IList<PositionPoint> PositionPoints { get; set; }
}

public class PositionPoint
{
   public virtual double X { get; set; }
   public virtual double Y { get; set; }
}

Each instance of Equipment has a collection of Assets, each of which has a Location, which in turn has a collection of PositionPoints. The problem this structure presents to NHibernate is that the root class has a collection of objects in a one-to-many relationship, each of which has another collection of objects in another one-to-many relationship. My mapping classes had been set up to explicitly turn off lazy loading for Assets, Locations and PositionPoints, therefore NHibernate was obliged to find a way to fetch all this data, and it chose to do this by first retrieving the data for Equipments, Assets and Locations, and then executing a single query for each Location to retrieve all of its PositionPoints.
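The mapping looked something along these lines (a sketch of an hbm.xml fragment, not my actual file), with lazy="false" forcing the immediate fetch:

```xml
<class name="Location" table="LOCATION">
  <id name="Id" column="LOCATION_ID" />
  <!-- lazy="false" obliges NHibernate to populate the collection up front -->
  <bag name="PositionPoints" lazy="false">
    <key column="LOCATION_ID" />
    <one-to-many class="PositionPoint" />
  </bag>
</class>
```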

I couldn’t remember why exactly I had turned lazy loading off for these relationships (perhaps I should have commented my mapping file with an explanation). Therefore I modified the mapping file to turn lazy loading back on. As expected this solved the Select n + 1 problem, as NHibernate was no longer obliged to fully populate Locations and PositionPoints. However, this change caused an exception to be thrown in the business layer, a LazyInitializationException. This was caused by logic in the business layer which was attempting to read the PositionPoints property of a Location after the session which originally obtained the root objects had been closed. Indeed this exception may well have been the reason I had previously decided to turn lazy loading off for these objects. So the idea of using lazy loading was not a viable solution, at least not without some other changes being made. A little research around the lazy initialization problem led me to the idea of injecting my NHibernate session into the data access layer from the business layer, allowing me to use the same session for the lazy loading, but I really didn’t want my business layer to know anything about database sessions.
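The failure case looked roughly like this (the sessionFactory and the shape of the data access code are illustrative):

```csharp
// The session that loaded the root objects is disposed before the
// business layer touches the lazy collection.
IList<Equipment> equipments;
using (var session = sessionFactory.OpenSession())
{
    equipments = session.QueryOver<Equipment>().List();
} // session disposed here

// Later, in the business layer, with no session in scope:
var points = equipments[0].Assets[0].Location.PositionPoints;
foreach (var p in points) // enumerating the uninitialized proxy throws
{                         // LazyInitializationException
    Console.WriteLine("{0}, {1}", p.X, p.Y);
}
```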

I reverted my code to switch lazy loading back off and continued to investigate my original problem. I tried instructing NHibernate to eagerly load my objects using a HQL query to eagerly fetch associated data, but this resulted in a cartesian product issue, where the returned collection contained duplicate objects.
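The query I tried was roughly the following (entity names from my model). Fetch-joining two collection levels multiplies the rows: each Equipment comes back once per Asset and PositionPoint combination, so the list contains duplicate root objects.

```csharp
// A sketch of the eager-fetch HQL, not my exact query.
var equipments = session
    .CreateQuery("from Equipment e " +
                 "left join fetch e.Assets a " +
                 "left join fetch a.Location l " +
                 "left join fetch l.PositionPoints")
    .List<Equipment>();
// The duplicates can be filtered out afterwards, but the result set
// transferred from the database is still the full cartesian product.
```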

Then I discovered a page on ayende.com on NHibernate Futures.

Chapter 4: Futures

NHibernate Futures are a variation on the MultiCriteria feature, which allow us to combine queries to eagerly load one-to-many associations in exactly the way I needed. I would have to define a Future query to retrieve all of my Equipments, then another to retrieve my Assets, and another to retrieve my PositionPoints. These queries would then be combined by NHibernate to retrieve and combine all the required data in a single roundtrip. Finally it seemed like I had found a solution to my problem. I modified my code to use Future queries and tested it.
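My code followed the pattern from the ayende.com article, something like this (the fetch paths are from my model; treat it as a sketch rather than my exact code). Each Future call is deferred, and all three queries should go to the database in a single roundtrip when the first result is enumerated.

```csharp
// Deferred query for the roots.
var equipments = session.CreateCriteria<Equipment>()
    .Future<Equipment>();

// Deferred query eagerly fetching the Assets collection.
session.CreateCriteria<Equipment>()
    .SetFetchMode("Assets", FetchMode.Eager)
    .Future<Equipment>();

// Deferred query eagerly fetching each Location's PositionPoints.
session.CreateCriteria<Location>()
    .SetFetchMode("PositionPoints", FetchMode.Eager)
    .Future<Location>();

var list = equipments.ToList(); // should trigger the single combined roundtrip
```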

But it didn’t work!

Stepping through the code and checking my log file revealed that each Future query was causing a trip to the database. Future queries are not supposed to result in an immediate trip to the database, execution should be delayed until the last possible moment.

Again I had hit a brick wall. So again I started googling for answers.

After some time I very fortunately stumbled upon an explanation – NHibernate Future queries do not work with an Oracle database. This was disappointing.

Chapter 5: Getting Futures to Work with Oracle

So now I had reached a point where I had discovered an NHibernate feature which would seemingly allow me to eagerly populate my collection of objects in an efficient way. But it wasn’t supported with Oracle. I did however discover a method of getting Futures to work with Oracle on stackoverflow.

I would need to extend NHibernate’s OracleDataClientDriver and BasicResultSetsCommand classes. I followed the instructions, and updated my NHibernate config file to use the new driver. I reran my code using Futures, and it worked! All of my data was returned in a single trip to the database! But it wasn’t quick. In fact it was very slow. The whole point of this was to try to optimize my code. The Select n + 1 problem seemed to be an obvious reason for its slowness. I had solved that problem. But my code was still slow. Why? The reason was that the solution I had found on stackoverflow to get NHibernate Futures to work with Oracle used cursors. And cursors are slow. The built-in Futures feature results in SQL which does not use cursors. I had found a workaround but it wasn’t a good solution for my problem. Yet again I felt like I was back to square one.
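Wiring in the extended driver was a one-line change in the NHibernate configuration; the class and assembly names here are placeholders for wherever the custom driver lives:

```xml
<property name="connection.driver_class">
  MyApp.Data.OracleDataClientDriverWithFutures, MyApp.Data
</property>
```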

Chapter 6: Rethinking My Approach

Having gone down various rabbit holes and tried a number of potential solutions, it was time now to take a step back from the problem.

What had I learnt?

I needed to obtain a large collection of Equipments, and their associated Assets. The operation was too slow because of a Select n + 1 problem. I needed to read PositionPoint data in the business layer. I couldn’t lazy load this data because of a LazyInitializationException. I couldn’t use NHibernate Futures because the result was still too slow (with the Oracle workaround at least).

But what exactly did I need to use my PositionPoints for? I reviewed my business layer code and then it hit me. Of the several thousand Equipments and Assets I was retrieving, I only actually needed to access the PositionPoints of a small number of them! Less than ten in fact. Therefore if I turned lazy loading back on, which would result in a fast select query to obtain my root objects, I could identify in my business layer which objects I actually needed to access PositionPoint data for, and hit the database again (using a new session), just for those particular objects.
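In outline, the final approach looked like this. SelectAssetsNeedingPositions and the Id property are hypothetical stand-ins for my real business logic and identifier mapping.

```csharp
// 1. Lazy loading re-enabled for PositionPoints: the roots, assets and
//    locations come back quickly, with no per-location queries.
IList<Equipment> equipments;
using (var session = sessionFactory.OpenSession())
{
    equipments = session.QueryOver<Equipment>().List();
}

// 2. The business layer identifies the handful of assets (fewer than ten)
//    whose coordinates are actually needed.
IEnumerable<Asset> needed = SelectAssetsNeedingPositions(equipments);

// 3. A second, short-lived session re-reads just those locations,
//    initializing their PositionPoints while the session is still open.
using (var session = sessionFactory.OpenSession())
{
    foreach (var asset in needed)
    {
        var location = session.Get<Location>(asset.Location.Id);
        var pointCount = location.PositionPoints.Count; // initialized here
        asset.Location = location; // swap in the fully loaded location
    }
}
```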

A few minutes of coding later and at last, I had my solution. The operation which had previously been taking around one and a half minutes was now taking around 30 seconds – an acceptable level of performance.

Conclusions

Looking back on this journey, I must admit I feel a little stupid. I had been looking at the problem in the wrong way. I had assumed that my approach was correct, and that I needed to optimize my use of memory or NHibernate, when in fact it was my algorithm which was inefficient. This is the main lesson I will try to take from this experience. When faced with a database performance issue, first review your algorithm and consider whether you are retrieving data you don’t actually need, particularly when using an ORM framework. There are also a few other things I will take away from this. PerfView is a great tool which I am sure I will use again. NHibernate logging is an equally valuable tool for understanding what is going on under the hood. And it remains a mystery how anyone ever coded before the existence of the Internet!

A Tale of Optimization (part 1)


I intended for this article to be contained within a single post, but it turned out to be too long for that. Click here for part 2.

Introduction

Over the past couple of days I have been on quite a journey in trying to optimize a method in my current project. I’ve known this operation was slow for the past few months, but it has never really been a priority to address the issue.

On my daily commute I often listen to podcasts, my favourites being .NET Rocks, Radiolab and The Guardian Football Weekly. Earlier this week I listened to a .NET Rocks episode titled Making .NET Perform with Ben Watson. Watson is the author of Writing High Performance .NET Code and during the show he and the guys discussed the topic of performance optimization from a number of different angles. The show inspired me to finally look into this annoying performance issue which had been on my radar for months. As I was listening I contemplated the problem, and in particular took an interest in a discussion on a favourite tool of Watson’s: PerfView. I had never heard of PerfView, but it sounded like the perfect application to help me understand my performance issue, and if nothing else offered the chance to try out a new tool. Apparently it was free and lightweight – two characteristics I always love in a development tool.

Chapter 1: Adventures in PerfView

Later that day, sitting at my desk, I downloaded PerfView and read through its tutorial. What a great tool! I had previously used Red Gate’s ANTS Performance Profiler, admittedly to a limited extent, but PerfView seemed easier to use, just as capable and a great deal more lightweight. Essentially PerfView helps you to explore two aspects of your application – CPU usage and memory allocation. My problem operation involved retrieving several thousand rows of data from an Oracle database, to automatically populate a collection of complex C# objects, using NHibernate. I had a hunch that it was the complexity of each object, with several layers of inheritance and multiple associations, that was the problem. I was perhaps slightly biased having just heard Watson emphasise the importance of memory allocation in .NET applications and how slowness was often a result of memory issues. Indeed, the PerfView tutorial states:

If your app does use 50 Meg or 100 Meg of memory, then it probably is having an important performance impact and you need to take more time to optimize its memory usage.

So I loaded my application in Visual Studio, paused execution with a breakpoint and used PerfView to take a snapshot of the heap, when I knew my large collection of objects would be in memory. I discovered that although IIS express was indeed using over 100MB of memory, only a fraction of this (around 10%) was being used by the heap. So maybe memory allocation wasn’t the problem at all?

Next I decided to use PerfView to analyse the CPU usage during my long operation. In total the operation was taking around one and a half minutes. I ran an analysis and was not surprised to find that the bulk of this time was taken up in the data layer, retrieving data from the database and implicitly populating my collection of several thousand objects. This was just as I had feared. Would I have to redesign my database and class structure to remove some layers of complexity? This would be a huge task. However, on closer inspection, I realised that although over 80% of the CPU usage during this operation was taken up inside my data retrieval method, the total CPU usage time was in fact only 15 seconds or so. Surely this could mean only one thing – it must have been the database query which was taking so long, which surprised me as several thousand rows is of course not much to ask of a database.

Chapter 2 – NHibernate Logging

This project is the first time I have used NHibernate. While I think I can see its benefits, or rather the benefits of using an ORM tool in general, I am not totally convinced. I come from a traditional background of writing and calling my own stored procedures, and miss that complete control of how and when the database is called. There have been a few times when I have wrestled with NHibernate to achieve the desired results, but perhaps this is just a part of getting to grips with it and learning how to use it to my advantage. In any case, having concluded that the problem involved my database query, I wanted to know exactly what query was being executed. After some googling I found that I could use NHibernate logging to obtain this query, by adding a couple of lines to my web.config file.
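For reference, with log4net the relevant addition is a logger for NHibernate's SQL category (the appender name here assumes one is already defined elsewhere in the config):

```xml
<logger name="NHibernate.SQL">
  <level value="DEBUG" />
  <appender-ref ref="RollingFile" />
</logger>
```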

Using breakpoints to isolate the data access method, I was able to examine my log file to obtain the database query in question. It was indeed a very complex query, with many joins corresponding to the multiple layers of inheritance defined in my class and database structure. However, I noticed that stepping from my data retrieval line of code to the next line was in fact pretty quick, less than 5 seconds in fact. Copying and pasting the cumbersome SQL query into Oracle SQL Developer and executing it from there confirmed that the query itself was indeed very fast, despite its complexity. So my assumption was proved wrong again. It was not memory allocation that was the problem, it was not my data retrieval query, yet it was not CPU usage that was taking up so much time. So what was it? I hit F5 to continue execution from my breakpoint, let the operation complete, and then reexamined my NHibernate log file. To my surprise I discovered that the database had been hit several thousand times, running a relatively simple query each time to populate a collection property on one of my classes. It seemed that, without my knowledge, I had fallen victim to the Select n + 1 problem.

Click here for part 2.

Advice to My Younger Self


In professional terms, at the age of 33 I am still relatively young. I have more of my career ahead of me than behind me. Nevertheless, when I look back, I can see that my perspective has changed considerably. With this in mind, I thought I would consider the question: if I could give some professional advice to myself as a fresh-faced graduate entering a career in programming, what would that be? This post is my answer to that question.

At the start of a project, things always look rosy

One of the most satisfying parts of life in software is beginning new projects. You have a blank sheet of paper upon which you can create the greatest system ever developed. The possibilities are endless. You will get everything done on time, under budget, and you will be a hero. It is difficult to avoid this kind of wishful thinking at the start of a project, in fact such optimism is a good thing in some respects. All great achievements start with a lofty vision. However, without being too pessimistic or miserable, I believe it is important to temper that early enthusiasm with a dose of realism. There will be difficulties, disagreements and unexpected obstacles. People will over-promise and under-deliver. Some things will take much longer to complete than expected. This is just how projects unfold. If we acknowledge this reality from the outset, we are more prepared for difficulties, even if only on a subconscious level. This is something I have learnt from experience. Things never run completely smoothly. One common mistake that people make is to believe that by following a particular methodology or project management method, difficulties can be largely eliminated. This is simply not the case. There will be difficulties, and a degree of stoicism is required to handle and overcome these difficulties.

Time and experience are the best teachers

Early on in my career, I was desperate to make progress. I could see that the people around me were better than me and I wanted to bridge that gap as quickly as possible. I studied and tried hard but just couldn’t move forward at the speed I wanted to. Now I see that the reason those around me were ahead of me was simply that they were more experienced. There is a reason that job descriptions tend to require a certain number of years of experience. Reading a book or completing a tutorial is incomparable to real experience. In addition, personal and professional development take time – not only the hours spent at work, but the years spent developing an overall picture of work, people and life. As you encounter and overcome obstacles, your brain forms new connections which make sense of those problems and prepare you for their recurrence in the future. There is no other way to form these connections than to gain experience and wait. Success in my opinion is essentially about learning to solve problems, whether the problem is a bug in your code or a difficult colleague. You can read about bug-fixing or relationships, and this can help to some extent, but to really develop you need to face and overcome these problems.

Satisfaction at work is down to the people around you

As a junior programmer, I greatly underestimated how important people are to your level of satisfaction at work. From your peers to your boss to your customers, the people around you are the biggest influence on your day-to-day levels of job satisfaction. You can be faced with a failing project or a seemingly insurmountable snag list, but if the people around you are intelligent, positive and understanding, you will be able to cope and learn. Equally, you can have access to all the latest tools and technologies, and use them to design and deliver a brilliant system, but if you are working with difficult people, you won’t enjoy the experience. It is easy to think of life in programming as consisting simply of a programmer and his machine, joining forces to conquer the world (or at least his project). Indeed, programmers are stereotypically perceived as geeks, because they are seen as lacking in social skills. This stereotype comes from the fact that introverts are often attracted to computers as an alternative to having to deal with actual people. I see myself as something of an introvert and this is possibly what drew me to computers initially, but there is simply no escaping the fact that you can’t get very far alone. The good news is that human relationships offer rewards far greater than anything offered by a machine, and the real value of a successful project comes from sharing satisfaction with your colleagues and your customers.

Don’t just code

As I have progressed in my career, I have learnt that actually writing code is a small part of being a good programmer. A much more important skill is the ability to think well. That is, the type of thinking required to take a vague task or problem and turn it into a plan of action. Sometimes we need to step back and perform a ‘brain dump’ of everything on our minds. We need to learn to capture, organize and prioritise ideas, and also to let go of some of our desires. For example, upon experiencing a desire for our code to be supported by unit tests, a seemingly reasonable next step would be to start writing unit tests. But we need to learn to view that desire relative to the bigger picture, to make sacrifices and realise that we can’t achieve everything. As much as we would like our code to be faster, is performance optimization the best way we could be spending our time right now? The only way to effectively consider such questions, I have found, is to stop coding for a while and start thinking. Make a list on paper, in notepad, or on a whiteboard. Draw a mind map, be messy, write down your thoughts. Think about how much time you have, make some estimates, make some sacrifices and decide upon the best course of action. Then go ahead and start coding in the right direction.

I hope you have found something interesting or useful in this post, particularly if you are new to programming. I have no doubt that in ten years time, my view will be quite different. As we progress through our careers and our lives, our experiences will inevitably reshape our views. It would be nice to know what my future self would advise me right now, but I guess I’ll just have to wait and see.

 

Pareto Programming


One of my favourite principles is the Pareto Principle, also known as the 80-20 rule. The principle states that frequently, around 80% of the effects come from 20% of the causes. I love this principle because it is so simple, and it can be observed and applied to so many different areas in our professional and personal lives. It is powerful enough to fundamentally change the way we approach our work, and encourages us to accept that the world we live in is unbalanced. Things aren’t evenly distributed.

The principle is named after an Italian economist named Vilfredo Pareto, who observed that 80% of the land in Italy in 1906 was owned by 20% of the population. More recently, a report produced by the UN in 1992 stated that the richest 20% of the population of the world controlled around 80% of the world’s income. The principle can also be seen in nature; indeed Pareto is said to have observed that 80% of the peas in his garden were produced by 20% of the pods.

But how can the Pareto Principle help us to produce better software?

Here are some ways we can apply the Pareto Principle at work to ensure we are spending time and money where it will have the greatest impact, and to accept that imbalance exists and there is no point pretending otherwise.

80% of value is delivered by 20% of requirements

This idea is particularly useful at the start of a project. Generally speaking the initial functional specification in any software project is produced from an optimistic point of view. What developers and customers often fail to realise is that around 80% of value, whether we measure that in terms of customer satisfaction or profits, will be delivered by 20% of the requirements contained in that specification. This doesn’t mean that the other 80% of requirements are pointless or a waste of time, it just means that they are less important. For this reason it is crucial to prioritize requirements. This may sound obvious, but it is surprising how little time is often dedicated to prioritization. The false assumption made at the start of a project is that we will have time to do everything, therefore it doesn’t really matter what we do first. The reality is of course that we will be faced with unexpected obstacles, therefore it pays to get the most valuable stuff done first. If we then don’t have time to do everything, at least we will not be omitting any of the vital 20%.

80% of errors and bottlenecks will be contained in 20% of the code

As a codebase ages it is common for developers to fantasise about ripping it up and starting over. It seems to be riddled with bad code and inconsistencies. It is slow and difficult to read. The reality is that we just like to start with a blank page as this is more exciting for most people than having to drag an old codebase forwards. In truth, by rewriting 20% of the code we can remove 80% of the solution’s ‘badness’. This does not mean that we should indeed go ahead and rewrite 20% of the code, it just means that we shouldn’t assume it would be best to start again. Again we need to prioritize. If we are determined to improve things, we should work on identifying where the bad 20% is, and decide whether improving that will be enough.

80% of your productivity will be delivered in 20% of your time

As desirable as it might be, you simply can not function at 100% productivity, 100% of the time. Creativity tends to come in bursts and learning to recognise when you are feeling creative and when you are not is a valuable skill. Expecting yourself to constantly produce brilliant work is a recipe for stress. It might appear to work for a period of time, but soon your body will force you to stop and take your foot off the gas. Luckily, in any job, there are lots of menial tasks which need doing and are perfect for those times when we are not at our most creative. This might be writing some code which is not challenging, adding comments or reviewing your work or someone else’s. One technique I have used in the past is to categorise items on my to-do list as ‘creative’ or ‘menial’, and to choose tasks based on how creative I am feeling. Humans function in cycles whether you like it or not, and by learning to work within these cycles you can maximise your overall levels of productivity in a sustainable way.

80% of the value of a team is in 20% of its members

This may be controversial, but in every project I have worked on, success has depended primarily on the abilities of a small portion of the team. There are generally a couple of ‘stars’, usually including the team leader, who are there for other team members to lean on and whose experience and skill carries the project over the line. If that 20% of the team were to leave, the project would probably fail, whereas much of the rest of the team are more easily replaceable. These are the team members who have the greatest overall understanding of the project, allowing them to intuitively make important decisions and identify potential pitfalls. This does not mean that the other 80% of the team are useless. A good team needs reliable workers who can take designs created by others and make them a reality. There have certainly been times in my career when I have been part of that 80% of the team.

I really believe that understanding the Pareto Principle and its ramifications is crucial in software development. Whether we are considering requirements, bugs, team members, tasks or customers, all things are not equal. By acknowledging and accepting this we can adjust our behaviour according to the relative importance of a particular thing, and embrace imbalance rather than deny it.

Keep Your Blood Flowing


One of the drawbacks of life as a programmer, along with most other professions, is that we are required to spend most of our day sat at a desk. There have been a lot of studies indicating that this is not good for our health, and common sense tells us the same. Sitting down for hours on end leads to tension, which is not only detrimental to our health, but also to the quality of our work and our vitality. Computers have a way of drawing you in, to the point where you become so engrossed in your task that you forget about everything else, including your body. It is only at the end of the day, when you finally leave the office that you realise how stiff your shoulders are and how stressed you feel.

I have struggled with this problem to varying degrees throughout my career, and have experimented with various remedies to address the issue, from ergonomic keyboards and footrests to exercise and meditation. Most of these measures have helped me to some extent, but one simple technique which I have been using for the past few months seems to have really had an effect, so I thought I would share it with you. It is this:

Every half an hour, I get up and out of my seat.

My intuition has always told me that it is important to take regular breaks, but I have only recently realised that getting up and doing something makes those breaks more effective in relaxing and refreshing my muscles and mind.

I have tried using the Pomodoro technique in the past, and much of it didn’t suit me. I never felt any benefit from logging interruptions, and I found its method of task management too simple, however the one rule that did stick was to work in 25 minute bursts, and I have been doing this for several years. I have noticed that by working in this way I am able to maintain a better level of focus during the time when I am actually working. Only recently however have I added the rule of getting up out of my seat after every ‘pomodoro’.

So what do I do when I get up? I might go to the toilet, go and get myself a glass of water, go and look out of the window, go wash a cup, or if no one is watching just stand up and do a few stretches. The Pomodoro technique recommends taking a break of 3 to 5 minutes between each pomodoro, which is what I used to do, but a better rule I find is to just make myself get up out of my seat, even if only for a minute or so. Once I am up I often realise that I need a slightly longer break. If I stay in my seat, I no longer count that as taking a break.

I use the online pomodoro timer at tomatoi.st, which is nice and simple and also shows me how many pomodoros I have completed each day, but there are many pomodoro websites and apps to choose from. A stopwatch would work just as well.

I’m not completely sure why this technique works for me, but it does. In my mind it is a case of keeping the blood flowing, which relieves tension and refreshes the mind. If I sit for hours I notice that I lose mental sharpness and begin to struggle to make progress on problems. If we think a little about how our bodies have evolved, we are clearly not built to remain in the same position for long periods of time. Taking a walk at lunch time is another technique which I frequently use. Again, it just gets the blood flowing.

Applying this technique is probably only going to work for you if you feel there is actually a problem there that needs solving. I recently started to experience neck and shoulder pain, not for the first time, and I am certain this was related to my work. I tried an ergonomic keyboard for a few days, but if anything this only seemed to make matters worse. At the same time, I bought a footrest, which does seem to have helped me adopt a more comfortable sitting position, but it is getting up out of my seat frequently which I am sure has made the biggest difference.

There are other measures I am taking to minimize the effects of sitting for long periods.

One conclusion I have come to is that a stronger core would help me to sit correctly, therefore I am looking at how I can modify my exercise routine accordingly. I’ve tried Pilates which initially struck me as a slightly feminine form of exercise, but it does appear to be very effective at isolating the core and strengthening the stabilizing muscles between your shoulder blades. I attended a Pilates class last week and it was certainly not easy. I also get a full body massage every month.

The key message here then is to be aware of the need to keep your blood flowing if you sit at a desk all day. You may not need to get up every 30 minutes, but consider getting up more often than you currently do, particularly if you feel like you are getting nowhere with your work or you are starting to feel tense. If this is a real problem then I would recommend trying an approach similar to mine. Use a timer and make yourself get up periodically. You may be surprised at just how effective this is.

Five Lessons From JavaScript: The Good Parts


I have just finished reading JavaScript: The Good Parts by Douglas Crockford. It is quite an illuminating book, from which I learnt a number of interesting and useful lessons about JavaScript. I have chosen five to share with you below. If you are interested in JavaScript, you might find something useful here and I would strongly encourage you to buy and read the book yourself.

undefined and null

If I had been asked what the simple types in JavaScript were before reading this book, I would have replied with number, string and boolean. In fact there are two more simple types in JavaScript: null, and undefined. Having worked with JavaScript for many years, I have of course encountered null and undefined many times, but had never really considered their types, or that they are indeed simple types themselves.

So what is the difference between undefined and null?

When you retrieve the value of a variable which has been declared but never assigned a value, or the value of an object property which does not exist, you will receive the undefined value. (Referencing a variable which has never been declared at all, by contrast, throws a ReferenceError.)

The null value must be explicitly assigned to a property or variable before it is encountered.
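The difference shows up even in the values’ reported types. A quick sketch of my own (not from the book; note the well-known quirk that typeof null reports ‘object’):

```javascript
var notAssigned;           // declared but never assigned a value
var explicitlyNull = null; // null has to be assigned explicitly

console.log(typeof notAssigned);    // 'undefined'
console.log(typeof explicitlyNull); // 'object' - a long-standing quirk: null reports as 'object'
```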

However, matters are complicated by the fact that the expression (null == undefined) returns true, which means that in the following code, our alert is raised.

var notAssigned; // will return undefined

if(notAssigned == null){
  alert('it is null!');
}

…which brings us to the two sets of equality operators.

== and ===

For programmers like myself who come from an object-oriented language background, it is easy to fall into the trap of making a lot of assumptions about the JavaScript syntax without really taking the time to check those assumptions.

An easy mistake to make is to assume that the “double equals” equality operator == has the same basic meaning in JavaScript as it does in C#. However, there is an important difference.

In C#, the double equals will only return true if the two operands either point to the same reference (for reference types) or contain identical values (for value types). In JavaScript, the double equals can return true even if the two operands are of different types. The reason for this is that when the double equals is presented with operands of different types, it will attempt to convert one of the operands to the type of the other, and compare the result. Below are examples taken directly from the book, which demonstrate some of the strange consequences of this behaviour.

'' == '0' // false
0 == '' // true
0 == '0' // true
false == 'false' // false
false == '0' // true
false == undefined // false
false == null // false
null == undefined // true
' \t\r\n ' == 0 // true

The other equality operator offered by JavaScript is the triple equals, ===. This displays behaviour which is more like what we would expect. It does not attempt any conversions when presented with operands of different types; it just returns false. So in each of the examples listed above, false would be returned.
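A few examples of my own to contrast the two operators (using console.log rather than alert, but the results are the same):

```javascript
// The triple equals never coerces; operands of different types compare as false.
console.log(0 === '');           // false (whereas 0 == '' is true)
console.log(null === undefined); // false (whereas null == undefined is true)
console.log(' \t\r\n ' === 0);   // false (whereas == gives true)

// With matching types, === compares values (or references) as expected.
console.log('abc' === 'abc');    // true
```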

The double equals operator is identified by Crockford as one of the bad parts of JavaScript, and he advises against using it under any circumstances.

Objects are Containers of Properties

Having confirmed that all values in JavaScript are either simple types or objects, an “aha” moment for me was reading that all objects in JavaScript are simply containers of properties. It’s a satisfying feeling being able to abstract a seemingly complex or unclear concept into a simple model (isn’t this in fact the whole purpose of science?). From my experience with JSON and object literals, I was quite familiar with the concept of properties and values in JavaScript. However it had never dawned on me that objects are simply containers of properties, with each property consisting of a name and a value. That value can of course be another object, which contains its own properties, and so on. Objects in C# are more complicated. A C# object can have a variety of different types of members, including methods, events, delegates and properties. Furthermore, each of these members is associated with a visibility level, such as ‘public’ or ‘private’. Behaviour according to differing levels of visibility can be emulated in JavaScript, as I discussed in a previous post, however this feature is not built into the language. I find the fact that objects are such simple constructs an almost beautiful feature of JavaScript.
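A small sketch of this model, using names of my own invention: an object is just a bag of name/value pairs, and properties can be added, read and removed at any time.

```javascript
var person = {};            // an empty container of properties

person.name = 'Ada';        // a property is just a name and a value
person.address = {          // a value can itself be another object...
    city: 'London'          // ...which contains its own properties
};

console.log(person.name);         // 'Ada'
console.log(person.address.city); // 'London'

delete person.name;               // properties can be removed at any time
console.log(person.name);         // undefined
```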

Another important feature of objects is the prototype linkage feature, but that is probably a topic for a separate blog post.

Functions are Objects

Functions in JavaScript are themselves objects, which means, as we have seen, that they are simply containers of properties. How is a function simply a container of properties? In a nutshell it has hidden properties for the function’s context and its enclosed statements. The important difference between a function and any other object is that a function can be invoked.

The fact that functions are objects means that they can be assigned to a variable, stored in an array, passed as an argument to a different function, or used in any other way that a ‘regular’ object might be used.
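A brief illustration of my own (the names square, operations and applyTwice are just examples): a function can be assigned, stored, passed around, and can even carry its own properties.

```javascript
var square = function(x) { return x * x; };                // assigned to a variable

var operations = [square, function(x) { return x + 1; }];  // stored in an array

// passed as an argument to another function
var applyTwice = function(fn, value) {
    return fn(fn(value));
};

console.log(applyTwice(square, 3)); // 81
console.log(operations[1](5));      // 6

// and, being an object, a function can carry properties of its own
square.description = 'squares a number';
console.log(square.description);    // 'squares a number'
```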

An interesting aspect of functions in JavaScript is the value of this inside of a function, which I discussed in a previous post.

Function Scope in JavaScript

This is something I feel I really should have known about JavaScript, but I must confess I didn’t.

Variables in JavaScript have function scope, unlike C#, which has block scope. What this means is that when you declare a variable in JavaScript, it is accessible by any code within the same function, even if that code exists outside of your current block, as defined by your curly braces. This is probably best explained by an example:

var myFunction = function()
{
  if(10 > 5)
  {
    var message = 'hello there!';
  }
  alert(message); // alerts 'hello there!'
}

If JavaScript had block scope, like C# and Java do, then invoking myFunction would cause undefined to be alerted. In fact our message ‘hello there!’ is alerted.

For reasons of clarity Crockford advises us to always declare our variables at the top of the containing function. So our code would then look like this:

var myFunction = function()
{
  var message;

  if(10 > 5)
  {
    message = 'hello there!';
  }

  alert(message);
}

Having function scope rather than block scope is identified by Crockford as one of the ‘awful’ parts of JavaScript.

Final Words

In summary, this is a book that is well worth reading for anyone interested in the finer points of JavaScript. My next goal in developing knowledge and understanding in my chosen niche is to start looking at Angular in more detail. I’m not yet sure how I will go about this, but the free tutorial over at codeschool.com looks like it could be a great place to start.

Is Contracting For You?

Ronnie Mukherjee 2 Comments


I have been contracting for around two and a half years now. This isn’t long at all in the grand scheme of things, nevertheless I thought I would share some thoughts on contracting compared with permanent employment, from my slightly limited point of view. If you are considering taking the plunge into the world of contracting, you may find this useful.

Being an Outsider

Becoming a contractor requires a slight psychological adjustment. As a permanent member of staff you become used to playing by the same rules as your colleagues. Everyone adheres to the same policies and principles, from working hours and holiday allowance to pension schemes, performance appraisals and career development plans. This leads to a subtle sense of belonging to a team, which needs to be recognised and compensated for by contractors. I am of course a part of my client’s team and I work hard to achieve success in my projects, but there is no denying that in some respects I will always be an outsider compared to permanent staff. It took me some time to get used to this.

Lack of Security

I have been fortunate enough to have never been short of work since becoming a contractor. However, by the very nature of contracting it is necessary to always keep one eye on your next job. When times are hard in industry, contractors are often the first to go, and it is important to never take one’s position for granted. It is true that anyone can be made redundant, however contractors obviously change their place of work more frequently, and this means more job hunting, more interviews, and potentially more stress. The idea of being out of work is frightening for anyone. We all have responsibilities to meet and bills to pay. For contractors this fear is likely to rear its head more often than for permanent employees.

More Paperwork

If you set yourself up as a limited company, as most contractors do, there will be some additional paperwork for you to do to manage your business finances and payment of taxes. If you hire an accountant this will help, however you will still have at least a little admin work to do each month, and accountants obviously charge you for their time.

Expectations

As a contractor you will be expected to have a certain degree of technical expertise. You will be expected to hit the ground running. Your employers will not hire you based on potential or willingness to learn, you will be expected to have already fulfilled much potential and learnt a great deal. This can be daunting, but equally it can motivate you to actually spend some time and effort on developing and maintaining your skills, and thus speed up your development. Since becoming a contractor I have spent more of my personal time learning than I ever did when I was a regular employee.

Getting a Mortgage

One aspect of contracting I had not considered was the added complications when applying for a mortgage. I have recently moved house and my first mortgage application was rejected because I did not have two full years of accounts to show the bank. Thankfully another bank was more flexible. If you are thinking of applying for a mortgage in the next two to three years then now may not be the best time for you to become a contractor. At the very least do some research. Any other application which requires proof of a steady income may be subject to similar complications.

More Money

This is perhaps the biggest draw to the world of contracting. It is no secret that contractors are paid more for their time than permanent members of staff. It is easy to overestimate the difference however. We still need to pay taxes and we do not receive any paid holidays, pension contributions, company car or other benefits. Furthermore, there will probably be times in between contracts when we are looking for work and obviously not being paid at all. That said, my earnings through contracting are higher than they would be if I were a permanent employee. If you can repeatedly find work, then you will likely receive more money as a contractor, but you must be confident in your ability to find that work.

Varied Experience

As valuable as book-reading and personal projects are, in my opinion there is no substitute for commercial experience when it comes to your professional development. As a contractor you will experience a great deal of variety with regards to projects, people, working environments and working practices. This will allow you to better understand what actually matters and what doesn’t when it comes to achieving commercial success. You will probably work with a variety of different tools, on projects of different lengths, and will have to communicate with different types of colleagues and customers. You are less likely to get bored and as scary as it can be not knowing where your next job will come from, the anticipation of a new challenge can be exciting.

Freedom From Office Politics

Successful organisations are of course made up of people who want to get ahead. Naturally everyone is aiming for that big promotion and pay rise. This is just human nature and there is certainly nothing fundamentally wrong with it. One consequence of this however can be a high level of unhealthy competitiveness in the workplace. Sometimes employees can be so eager to impress the boss that they will engage in questionable behaviour. This might be taking credit for someone else’s work, failing to accept responsibility for mistakes made or even worse, blaming someone else. As a contractor there is no guarantee of freedom from such political games, but as you are outside of the race for promotion, you are less likely to become embroiled in them.

Autonomy

Permanent employees are often guided towards learning about particular technologies or programming languages, according to the needs of the business. Contractors on the other hand have a greater degree of control over their professional development path. We can assess the market and find a compromise between which skills are in demand, and which areas we are interested in learning about. This freedom to be your own career development manager can be quite liberating.

Final Thoughts

If you are thinking of becoming a contractor then I hope my thoughts have helped you. It is not for everyone. There are certainly advantages to permanent employment, and the right decision will depend upon your individual circumstances, including your financial responsibilities, your skill set, and your geographical location. I took all these things into account when I made my decision, and I certainly have no regrets, but I would encourage anyone to look very carefully before you leap.

Life Without Resharper

Ronnie Mukherjee 0 Comments

I recently (finally) upgraded to Visual Studio 2013. One unfortunate consequence of this was that the version of Resharper I was using (v7) was no longer supported. I was disappointed to find that my license was not enough to obtain a free upgrade, therefore I was faced with a choice: either buy a license for Resharper 8, or try programming without Resharper in the hope that VS2013 would offer enough features to allow me to achieve similar levels of productivity and flow. There is a certain appeal to the idea of learning to code without Resharper. We all know dependencies should be minimized.

So I decided to try life without Resharper.

I lasted 3 days.

Admittedly VS2013 does offer adequate or even superior equivalents for many of Resharper’s features. For example, I actually prefer Visual Studio’s unit test runner to Resharper’s. It seems to run faster and is more reliable. Extracting a method is also fine using Visual Studio alone. However there are a number of features which make programming with Resharper not only faster, but more satisfying.

The Alt-Enter Panacea

The Alt-Enter cure-all keyboard shortcut offered by Resharper is genius. The official name for this command is ‘show action list’, but the brilliant thing about it is its sensitivity to your current context. Resharper knows what actions you may want to perform given the position of your cursor. You might want to rename a variable to adhere to coding standards, rename a file to match the name of a class, generate a method stub for some calling code you’ve just written, remove an unused variable, remove unused using statements… the list goes on. There is no equivalent context-specific intelligence in Visual Studio. The best we can do is to learn the keyboard shortcut for each useful action, but there are several actions offered by Resharper’s Alt-Enter command that appear to have no equivalent in VS2013. For example, the only way I can see to locate and report on unused variables without Resharper is through Visual Studio’s static code analysis feature. This can either be run manually on demand, or each time you build your project. You are then required to manually remove each occurrence of dead code that is reported. Compare this to Resharper which greys out dead code as you write it, and lets you quickly and automatically remove it using Alt-Enter. The same can be said about many of Resharper’s best features. Resharper’s analysis is dynamic, whereas Visual Studio’s is static. The significance of this difference should not be underestimated. With dynamic analysis it is far easier to gain and maintain momentum as you code.

Go To Implementation

In trying to code without Resharper, it wasn’t long before I needed a way to go directly to the implementation of a method, from calling code which referenced an interface. No problem, I thought, I’ll just look up the keyboard shortcut. After a little Googling, I was amazed to find that Visual Studio has no ‘Go To Implementation’ shortcut. I still can’t believe this. We live in a world where we are told (quite rightly) to favour dependency injection in the form of interfaces. Therefore a huge portion of our calling code will call methods defined on interfaces. The value of the ‘Go To Definition’ command is obvious: we want to look at the code contained within a called method. Thus, the ‘Go To Implementation’ command is just as valuable. Yet Visual Studio does not have it. The closest equivalent I could find was a 3-step process described in this stackoverflow discussion. A 3 step process! This is simply not good enough and is one of the key reasons that I decided to go back to Resharper.


Conclusions

When I decided to try coding without Resharper I was fairly confident I would be fine. I thought it would just be a case of learning some Visual Studio keyboard shortcuts. But in terms of helping programmers to produce good quality code more quickly, Resharper is streets ahead of VS2013. Its dynamic code analysis is almost like pair programming with an observant and capable partner, who will tell you how to keep your code clean and tidy as you write it, and step in to perform uninteresting and repeatable tasks such as removing dead code and generating methods. I wouldn’t say Resharper is cheap, but there is just no cheaper alternative. CodeRush by DevExpress is apparently a good product but it also isn’t cheap. I believe we do our best work and gain the most satisfaction when we can enter a state of flow when coding. Our brains work a lot quicker than our hands, and we need to be able to express an idea as quickly as possible, to allow us to move on to our next idea. To anyone who spends a lot of time coding in Visual Studio without Resharper, I would urge you to download the free trial and give it a try. If you can get used to it in 30 days, you will probably never go back.

Information Hiding in JavaScript

Ronnie Mukherjee 2 Comments


A key concept in object-oriented programming is information hiding. It refers to the practice of declaring some parts of a class public, and others private, depending on what we want clients of the class to be able to see and do. It protects the application from programmers who may decide to use a class in ways which are contrary to the original intentions of the class’s author.

Unlike object-oriented languages such as Java and C#, the JavaScript syntax does not include keywords such as ‘public’ and ‘private’ (access modifiers) which would allow programmers to practice information hiding quickly and easily. However many programmers, myself included, are used to designing applications based on object-oriented programming, so we need a way to simulate this concept using the features available in the JavaScript language.

There are essentially two ways to do this: using constructor functions, and using the module pattern.

Constructor Functions

I touched on constructor functions in my last post. These are basically functions which are intended to be called with the ‘new’ keyword. Calling a function in this way creates an object based on the function, with its own state, and any assignment to the ‘this’ prefix inside the function creates a public property or function on the created object.

This feature of JavaScript was added to appeal to programmers coming from an object-oriented background, where the ‘new’ keyword is a fundamental feature.

Notice in the example below how the name of the function is capitalized – this is conventional when declaring constructor functions to remind us to treat it as such and to call it with the ‘new’ keyword.

function Rectangle() // capitalized name convention
{
    // private stuff
    var height;
    var width;

    // public stuff
    this.getArea = function(){
        return height*width;
    };

    this.setWidth = function(w){
        width = w;
    };

    this.setHeight = function(h){
        height = h;
    };

    this.shapeType = 'rectangle';
}

var rect = new Rectangle(); // use new keyword
rect.setWidth(4);
rect.setHeight(3);
alert(rect.getArea()); // alerts 12
alert(rect.shapeType); // alerts 'rectangle'
alert(rect.height); // alerts 'undefined'

var rect2 = new Rectangle();
rect2.setWidth(10);
rect2.setHeight(2);
alert(rect2.getArea()); // alerts 20
alert(rect2.shapeType); // alerts 'rectangle'
alert(rect2.height); // alerts 'undefined'

Constructor functions also allow us to modify all instances, including ones which have already been created, by updating the function’s prototype property, as shown below.

var rect = new Rectangle();
Rectangle.prototype.numberOfCorners = 4;
alert(rect.numberOfCorners); // alerts '4'

The Module Pattern

The module pattern essentially describes the practice of writing a function which returns an object literal. The function represents our object-oriented class. Any client of this ‘class’ has access only to the object literal returned by the function. Therefore, members defined as part of this object literal are effectively public. Any members declared inside the function are private; however, they remain accessible from the object literal thanks to the concept of closures in JavaScript.

We can see how this works in code in this example.

function getRectangleInstance()
{
    // private stuff
    var height;
    var width;

    // public stuff on returned object
    return {
        getArea: function(){
            return height*width;
        },
        setWidth: function(w){
            width = w;
        },
        setHeight: function(h){
            height = h;
        },
        shapeType: 'rectangle'
    };
}

var rect = getRectangleInstance();
rect.setWidth(4);
rect.setHeight(3);
alert(rect.getArea()); // alerts 12
alert(rect.shapeType); // alerts 'rectangle'
alert(rect.height); // alerts 'undefined'

var rect2 = getRectangleInstance();
rect2.setWidth(10);
rect2.setHeight(2);
alert(rect2.getArea()); // alerts 20
alert(rect2.shapeType); // alerts 'rectangle'
alert(rect2.height); // alerts 'undefined'

This variation of the module pattern (there are many) allows us to easily create multiple instances of a class, each of which has its own state.

Which One to Use?

The question of whether to use constructor functions or the module pattern to enforce information hiding in JavaScript is largely a personal choice. In JavaScript – The Good Parts, Douglas Crockford seems to identify constructor functions as a ‘bad part’ of JavaScript, his reasoning being that forgetting to use the ‘new’ keyword when calling a constructor function can lead to unexpected behaviour. However, constructor functions do allow us to easily modify all instances of a class by modifying the function’s prototype property.

Personally I don’t really see a problem with using constructor functions. Or rather, I don’t see the possibility of forgetting to use ‘new’ as enough of a reason to avoid using them. Just try not to forget! Constructor functions are a powerful feature and they arguably make for more readable code, particularly to those coming from an object-oriented background, not to mention the added advantage of emulating inheritance by accessing the function’s prototype.

As the name suggests, I see the module pattern as being more useful to emulate modules, rather than classes which act as templates for multiple instances. By using the module pattern to define an immediately invoked function expression, we can create a module which is not intended to be instantiated multiple times, but which encapsulates a set of functionality and avoids littering the global namespace. In the example below we implement the module pattern in this way, declaring an immediately invoked function to assign an object to the shapeModule variable. This object, or module, can contain its own private members, and exposes the getRectangleInstance function, along with any other desired public members. However, by immediately invoking the shapeModule function, we lose the ability to create multiple instances of it. Therefore this variation of the module pattern is often described as emulating the concept of a namespace, rather than a class, in Java or C#.

var shapeModule = function(){

    // stuff private to the shape module
    var privateField = 'someValue';

    return {
        getRectangleInstance: function(){
            // stuff private to the rectangle instance
            var height;
            var width;

            return {
                getArea: function(){
                    return height*width;
                },
                setWidth: function(w){
                    width = w;
                },
                setHeight: function(h){
                    height = h;
                },
                shapeType: 'rectangle'
            };
        },
        getCircleInstance: function(){
            // code for circles
        }
    };
}(); // function is immediately invoked

var rect = shapeModule.getRectangleInstance();
alert(rect.shapeType); // alerts 'rectangle'

The important message here is that it is possible, and desirable, to implement information hiding in JavaScript, and once you know how it is not too difficult to do so.

Something Like “this”

Ronnie Mukherjee 3 Comments

Few people learn JavaScript as a first programming language. Typically people will start with an object-oriented language such as C#, Java or C++. These languages have a few things in common, such as curly braces, objects, functions, if-statements and loops. Another common feature of object-oriented programming languages is the ‘this’ keyword. It refers to the current instance of the class which contains the executing code.

It can be something of a surprise then when a programmer moves on to JavaScript and discovers that the value returned by ‘this’ is less predictable than expected.

There are five different cases where you might use ‘this’. These are using it in the global namespace, and the four different ways of using it inside a function, as described by Douglas Crockford in JavaScript – The Good Parts.

‘this’ in the Global Namespace

In the global namespace, that is, outside of any function, ‘this’ refers to the window object. I don’t see any reason why anyone would want to use ‘this’ in the global namespace, in fact it’s probably a bad idea, nevertheless it can be done and so is worth mentioning for completeness.

Scenario 1 – Method Invocation

In JavaScript (and many other languages) a method is a function which is declared inside of a class or object. Inside a method, ‘this’ refers to the containing object. Therefore in the example below we are alerted with ‘Roberto Martinez’ rather than ‘Howard Kendall’.

// Set global variable
var manager = 'Howard Kendall';

// 1. Method Invocation
var everton =
    {
        yearFounded : 1878,
        inEurope : true,
        manager : 'Roberto Martinez',
        getManager : function()
        {
            return this.manager;
        }
    };

alert(everton.getManager()); // alerts 'Roberto Martinez'

This is the kind of behaviour you would expect coming from an object-oriented background, so no unpleasant surprises here.

Scenario 2 – Function Invocation

Here we are referring to any function which is not declared inside an object, that is, any function declared in the global namespace. Using ‘this’ inside such a function actually refers to the global namespace, rather than the containing function. Strange behaviour indeed, and Douglas Crockford actually describes this behaviour as a mistake by the creators of JavaScript. Thus in the example below, we are alerted with ‘Howard Kendall’, rather than ‘Roberto Martinez’.

// Set global variable
var manager = 'Howard Kendall';

// 2. Function Invocation
var getManager = function()
{
    var manager = 'Roberto Martinez';
    return this.manager;
};

alert(getManager()); // alerts 'Howard Kendall'

So we have established that ‘this’ inside of a method refers to the method’s containing object, and ‘this’ inside of a global function refers to the global namespace. But what is referred to by ‘this’ inside of a function which is declared inside of a method? The answer is – the global namespace, as shown in the example below.

var manager = 'Howard Kendall';
 
var everton = 
    {
        yearFounded : 1878,
        inEurope : true,
        manager : 'Roberto Martinez',
        getManager : function()
        {            
            var innerFunction = function()
            {
                return this.manager;
            };            
            return innerFunction();
        }
    };
 
alert(everton.getManager()); // alerts 'Howard Kendall'

This behaviour is slightly surprising. If we want to use an inner function inside of a method, we can get around the problem by assigning ‘this’ to a variable inside the outer method, as shown below with the ‘that’ variable.

var manager = 'Howard Kendall';
 
var everton = 
    {
        yearFounded : 1878,
        inEurope : true,
        manager : 'Roberto Martinez',
        getManager : function()
        {
            var that = this;
            
            var innerFunction = function()
            {
                return that.manager;
            };
            
            return innerFunction();
        }
    };
 
alert(everton.getManager()); // alerts 'Roberto Martinez'
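As a side note, on ES5 and later engines the same effect can be achieved with Function.prototype.bind, which returns a copy of a function with ‘this’ permanently fixed to a given object. This is just a sketch of that alternative, not code from the original example:

```javascript
var manager = 'Howard Kendall';

var everton =
    {
        yearFounded : 1878,
        inEurope : true,
        manager : 'Roberto Martinez',
        getManager : function()
        {
            // bind(this) produces a new function whose 'this' is
            // fixed to the everton object, so no 'that' variable
            // is needed.
            var innerFunction = (function()
            {
                return this.manager;
            }).bind(this);

            return innerFunction();
        }
    };

// everton.getManager() returns 'Roberto Martinez'
```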

Scenario 3 – Constructor Invocation

A third function scenario is that of the constructor function, that is, a function which is invoked with the ‘new’ keyword. When a function is invoked in this way, the ‘this’ keyword refers to the newly created object, even if the constructor function was defined in the global namespace. Therefore, in the example below we are alerted first with ‘Howard Kendall’ and then with ‘Joe Royle’. Although ‘getEverton’ is declared in the global namespace and assigns to ‘this.manager’, calling it with the ‘new’ keyword binds ‘this’ to the new object, so the global ‘manager’ variable is left untouched.

var manager = 'Howard Kendall';
 
// 3. Constructor Invocation
var getEverton = function()
{
    this.yearFounded = 1878;
    this.inEurope = true;
    this.manager = 'Joe Royle'; 
    
    this.getManager = function()
    {
        return this.manager;
    };
};
 
var efc = new getEverton();
alert(manager); // alerts 'Howard Kendall'
alert(efc.getManager()); // alerts 'Joe Royle'

The same behaviour is displayed when we call a method declared on the constructor function’s prototype, and properties can be set on our new object in the normal way.

// 3. Constructor Invocation
var getEverton = function()
{
    this.yearFounded = 1878;
    this.inEurope = true;
    this.manager = 'Joe Royle';    
};
 
var efc = new getEverton();
getEverton.prototype.getManager = function()
{
    return this.manager;
};
    
alert(efc.getManager()); // alerts 'Joe Royle'
 
var efc2 = new getEverton();
efc2.manager = 'Harry Catterick';
alert(efc2.getManager()); // alerts 'Harry Catterick'

So to summarise constructor invocation, it reflects typical object-oriented behaviour more closely than function invocation. However, it can be dangerous to rely on this approach, because if we forget to use the ‘new’ keyword, our constructor function will be invoked as a regular global function, and we will see the behaviour described above in scenario 2.

Scenario 4 – Apply Invocation

Our final function invocation scenario is the apply invocation pattern. The apply method is defined on the function prototype, therefore it can be invoked on any function. This approach to invocation allows us to provide any object we want to represent ‘this’ inside of the function. Its first argument is bound to ‘this’, and its second argument is an optional array of parameters to be passed to the function. Thus in the example below, we are first alerted with ‘Joe Royle’ and then with ‘Harry Catterick’.

var everton = 
    {
        yearFounded : 1878,
        inEurope : true,
        manager : 'Roberto Martinez',
        getManager : function()
        {
            return this.manager;
        }
    };
 
var getEverton = function()
{
    this.manager = 'Joe Royle';    
};
 
var efc = new getEverton();
var efc2 = new getEverton();
efc2.manager = 'Harry Catterick';    
 
// 4. apply() Invocation
alert(everton.getManager.apply(efc)); // alerts 'Joe Royle'
alert(everton.getManager.apply(efc2)); // alerts 'Harry Catterick'
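The examples above only use apply’s first argument. The sketch below (the ‘describe’ method is an illustrative name, not from the article) also passes the optional parameter array, and shows the closely related call method, which takes the parameters individually rather than as an array:

```javascript
var everton =
    {
        manager : 'Roberto Martinez',
        describe : function(prefix, suffix)
        {
            return prefix + this.manager + suffix;
        }
    };

var efc = { manager : 'Joe Royle' };

// apply: first argument becomes 'this', second is an array of parameters.
var viaApply = everton.describe.apply(efc, ['Manager: ', '.']);
// viaApply is 'Manager: Joe Royle.'

// call: same 'this' binding, but the parameters are passed individually.
var viaCall = everton.describe.call(efc, 'Manager: ', '.');
// viaCall is 'Manager: Joe Royle.'
```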

I have attempted here to describe clearly the different behaviours you might see when using the ‘this’ keyword in JavaScript. However, it is a difficult area to explain in plain English. If you find my descriptions confusing I recommend reading Crockford’s book. All the code examples are available to play with at this jsfiddle.