The Road to Angular 2.0 part 6: Migration


I gave a presentation at the GOTO conference in Amsterdam titled: The Road to Angular 2.0. In this presentation, I walked the Road to Angular 2.0 to figure out why Angular 2.0 was so different from 1.x.

This series of blogposts is a follow-up to that presentation.


Last week we discussed the new bindings system in Angular 2.0, and we saw that by using trees Angular 2.0 applications use less memory and are faster than before.

In this final installment of this series, we are going to look at how we can migrate our Angular 1.x applications to Angular 2.0.

The road to 2.0

How do you migrate an Angular 1.x app to Angular 2.0?


The answer you might hope for is running some magical wizard in a fancy IDE that takes care of the process for you. Does such a magical solution exist? The answer, unfortunately, is no. There is no easy way to migrate your applications without doing some work yourself.

Here is a video from ng-conf showing how you can migrate code from Angular 1.3 to Angular 2.0. It is worth watching and shows the manual work required to upgrade an Angular application.

A tale of two roads

The Angular team states that there are two roads to migrate an Angular 1.x app to 2.0: Big Bang and Incremental.


Big Bang

Big Bang is a migration path in which you halt all development on an application, and migrate an entire application to Angular 2.0 in one Big Bang.

The biggest benefit of a Big Bang migration is that it is the fastest way to get to Angular 2.0. This means you can use all the cool new features, such as components, TypeScript and the new template syntax, as soon as possible.

Big Bang has a couple of drawbacks. First, it might be difficult to convince your manager or product owner to freeze the product you are working on. Performing a Big Bang migration whilst the application is changing underneath your feet is not something I would recommend, so it is imperative that the application is frozen and the target does not move. Selling this to your manager or product owner is going to be difficult.

The second drawback is that the size of your application determines how easy it is to pull off a Big Bang. The larger your application, the more time the Big Bang will take, and the more time it takes, the harder it is to get the application freeze approved.

The third drawback is that when you rely on third-party libraries such as ui-bootstrap or restangular, you will have to wait until they have upgraded to 2.0 as well. This means that you cannot perform a Big Bang until each and every one of your dependencies has upgraded to 2.0. Of course, you could work around this problem by dropping a dependency and writing it yourself, but this can be a lot of work, especially if your application has "big" dependencies that do most of the work in your application.


Incremental

Incremental is a migration path in which you update parts of your application to Angular 2.0 code, and keep other parts as Angular 1.x code. This is possible because you can run Angular 2.0 applications and 1.x applications side by side.

You can do this in two flavors: either you have an Angular 2.0 app which includes an Angular 1.x app, or vice versa, an Angular 1.x app which includes an Angular 2.0 app. This gives us the freedom to mix and match Angular 1.x and 2.0 as we please.

For example, we can migrate everything from controllers to services to Angular 2.0, and keep some select directives, such as ui-bootstrap, on Angular 1.x. Another example is that we stick to Angular 1.x for our controllers and services, but write all of our new directives as Angular 2.0 components.

The benefit of Incremental is that we have a lot of flexibility in how we migrate our applications to Angular 2.0. But just like the Big Bang migration path, Incremental also has some drawbacks.

The first drawback is that you will bundle Angular 1.x together with 2.0. This means that the browser has to download and parse two complete frameworks, which will negatively impact the performance of your application.

The second drawback is that having two frameworks, Angular 1.x and 2.0, with two very different philosophies, will make your code look like a Chimera: a strange hybrid that is stuck between two worlds, not a pretty picture. The only way to fix this is to eventually migrate to Angular 2.0 completely.

Big Bang vs Incremental

The question you might ask is: which is better, Big Bang or Incremental? The answer is that it depends on the nature of your application and your project's circumstances. Here is a decision matrix:


Basically the matrix states that the smaller your application is, the more a Big Bang makes sense. This is because the time it takes to perform a Big Bang is directly related to the size of the application.

Another facet in the decision is how many dependencies the application has. As stated before, you can only upgrade to 2.0 completely when all your dependencies have upgraded. However, some apps depend more on external libraries than others. If you, for example, depend heavily on some big external Google Maps directive, it might make sense to wait until that directive has updated, and do an Incremental upgrade instead.

The last facet of the matrix is the time you can "get" to migrate to Angular 2.0. This is a circumstance which is more political than technical: it depends on management. If you get oceans of time to migrate, Big Bang makes more sense; if there is a focus on new features, Incremental makes more sense.

Preparing for 2.0

There are steps you can take to prepare for a migration to Angular 2.0. The closer you can get your 1.x application to the 2.0 philosophy, the easier the migration to Angular 2.0 will be.

Stop using $scope

In Angular 2.0 components will no longer have a $scope; instead, the instance of the component's controller becomes the scope. To prepare for this change I recommend that you use the "controllerAs" syntax. This way you won't have $scopes that need to be removed when you migrate to 2.0.
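As a sketch of what that looks like (PersonController and its fields are hypothetical examples, not from the original post), a controllerAs-style controller is just a class whose instance serves as the scope:

```typescript
// "controllerAs" style: state lives on the controller instance instead of
// on an injected $scope object (PersonController is a made-up example).
class PersonController {
  name = 'Alice';

  greet(): string {
    return `Hello, ${this.name}!`;
  }
}

// Angular 1.x instantiates this for you; the template then reads:
//   <div ng-controller="PersonController as person">{{ person.greet() }}</div>
const person = new PersonController();
console.log(person.greet()); // -> "Hello, Alice!"
```

Because nothing hangs off $scope, the class body is already very close to what an Angular 2.0 component controller will look like.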

Upgrade to 1.4

Upgrading to 1.4.x, the latest stable version of Angular, is a good step to take. Upgrading to 1.4 will make it easier to migrate to Angular 1.5, which brings me to my next point.

Upgrade to 1.5 when it is released

The goal of Angular 1.5 is to make migrating from Angular 1.x to 2.0 easier. The exact nature of its features is still in flux. One feature that I'm personally rooting for is the "component helper" function, which will make it easier to write directives that mimic Angular 2.0 components. By mimicking Angular 2.0 components, Angular 1.5 moves closer to the philosophy of Angular 2.0, and being closer to that philosophy makes migrating easier.

Start using ES6 today!

Using a transpiler such as Babel you can start writing ES6 today. A transpiler transforms ES6 code to ES5 code, so your application will run in today's browsers.

The biggest benefit of ES6 is that it allows you to write "classes". Angular 2.0 relies heavily on classes; components, for example, are classes. By using classes to define services and directives' controllers you have already done some of the migration work.
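For example, a service written as a class today can be carried over to Angular 2.0 almost unchanged (WeatherService is a hypothetical example; the module registration in the comment is the only Angular-specific part):

```typescript
// A plain class service: no Angular APIs in the class body itself.
class WeatherService {
  private readings: number[] = [];

  addReading(celsius: number): void {
    this.readings.push(celsius);
  }

  average(): number {
    if (this.readings.length === 0) {
      return 0;
    }
    return this.readings.reduce((sum, r) => sum + r, 0) / this.readings.length;
  }
}

// In Angular 1.x you would register it with something like:
//   angular.module('app').service('weatherService', WeatherService);
const service = new WeatherService();
service.addReading(18);
service.addReading(22);
console.log(service.average()); // -> 20
```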

The new Component router

Angular 2.0 will have a new built-in router called the Component Router. As its name suggests, this router routes on components: it will instantiate a component based on the current URL.

The nice thing about the Component Router is that it will be backported to Angular 1.5. This means you can start using the new router in 1.5, and when you upgrade to Angular 2.0, migrating your routes is already done.

If you use ui-router today instead of ngRoute, you might want to read up on the differences between ui-router and the Component Router.

Note that the Component Router was called the ngNewRouter until a couple of months ago.

A new Hope

At the AngularU conference key Angular core team members gave a keynote in which they announced that Google has some internal tools to make migrating easier. They are in the process of evaluating which tools are useful to release to the Angular community. There isn't much information on these tools yet, so there isn't much more to tell you, but there is some hope that migrating will be easier and, more importantly, partly automated.

Starting new projects

I’ve often received the following question: “I’m starting a new project, should I wait for Angular 2.0, or should I start in Angular 1.4?” The answer is to just start using Angular 1.4, and migrate to 2.0 later.

The reason for this is that Angular 1.x is not abandoned; in fact, it is quite the opposite. The Angular team has been split into two teams: one for 2.0 and one for 1.x. The 1.x team even has a new project lead, Pete Bacon Darwin, so 1.x is far from abandoned. With Angular 1.5's focus on migration from 1.x to 2.0, starting on 1.x and migrating to 2.0 will mean some work, but it will not be the end of the world, and if you follow my advice on preparing for 2.0 you will make migrating easier.

Another reason not to wait for Angular 2.0 is that it still doesn't have a release date; in fact, there isn't even a beta available yet. Hopefully we will learn more at the AngularConnect conference in London in October, and hear something more concrete than "it is done when it is done".


Now you know the somewhat painful truth that migrating from Angular 1.x to 2.0 is not going to happen at the click of a button. We have seen that the Angular team has put together two migration paths: Big Bang and Incremental. A Big Bang migration gets your project to Angular 2.0 as quickly as possible. The Incremental migration allows us to combine 1.x and 2.0 in the same application, so we can migrate step by step.

We also know that we can prepare for Angular 2.0 by using a transpiler such as Babel to start using ES6 classes in our Angular applications today. We should also upgrade our applications to the latest Angular 1.x version available, because that version is closest to Angular 2.0.

The final takeaway of this blogpost is that Angular 1.x is not going anywhere anytime soon, it is still actively being maintained, and the community is still alive and kicking. So starting a project in Angular 1.x and migrating to 2.0 later is a valid strategy.

I hope you enjoyed this series of blog posts and found them informative. Hopefully Angular 2.0 gets released soon; I think it will be a great leap forward for us, the Angular community.


The Road to Angular 2.0 part 5: Bindings


I gave a presentation at the GOTO conference in Amsterdam titled: The Road to Angular 2.0. In this presentation, I walked the Road to Angular 2.0 to figure out why Angular 2.0 was so different from 1.x.

This series of blogposts is a follow-up to that presentation.


Last week we discussed Components, which showed us a fundamental new way to think about our Angular applications.

This week we are going to look at bindings, aka the way Angular automagically updates values in our views. The Angular 2.0 team put a lot of effort into making this system faster, and reports a speed increase of 3x to 10x over Angular 1.x.

But something had to change fundamentally in order for this speed increase to be possible.

Bindings in Angular 1.x

Before we can understand why Angular 2.0 is faster than 1.x, and how the Angular 2.0 team did it, we must first look at how Angular 1.x handles bindings.

An Angular 1.x App

Let's begin with a fictional Angular 1.x app which has four bindings: A, B, C and D. These bindings have relationships (or dependencies) with each other, as depicted in the image below:


The image above shows that binding A has a relationship to binding B and vice versa. This means that whenever A updates, something might 'change' for binding B, but it doesn't necessarily have to. The same is also true in reverse: when B changes, A might also need an update.

There can also be relationships that are 'one' sided; for example, A influences D but not the other way around.

Bindings can also have subtle relationships to other bindings indirectly: B has a relationship to binding D through binding C.

The point is that relationships between bindings in Angular 1.x can become pretty complex, even when you have only four bindings. So how does Angular 1.x know when to update a 'binding' and show a different value inside the UI? The answer is dirty checking.

Dirty Checking

Dirty checking can be explained as follows: every time there might have been a change to the view, so whenever the user clicks on something or an $http request finishes, Angular will check the value of each binding and compare it to the old value. When the value differs between the two versions, the binding is considered 'dirty', hence the term 'dirty checking', and Angular will update the UI. The phase in which Angular checks for changes is called the "digest phase".

To make this example more concrete, let's say we have a variable called "age" whose value is currently 16. Then some event triggers the digest phase, and the new value of "age" is 17. Angular will compare 16 with 17, see that a change has occurred, and update the UI.

But what if you have a binding that depends on another binding: how does that get updated? We know, as humans, what the 'relationship' between two bindings is and which one depends on the other, so we intuitively know in which order to evaluate the bindings. But how does Angular know which binding to evaluate first?

The answer is that Angular 1.x doesn't know anything about the relationships between 'bindings', so it cannot know in which order to perform the 'dirty' checks. What Angular does instead is evaluate each binding until all bindings have stabilized; by stabilized I mean that they have stopped changing. Angular does this by running a "loop" that will evaluate each binding until the bindings "report" that they are stabilized.

This "loop" is called the digest cycle. The digest cycle will resolve all bindings until none of the bindings have reported a change between runs of the cycle. The digest cycle is a subpart of the digest phase:


If you have two bindings with a relationship, it might occur that the digest cycle needs to run multiple times before both bindings no longer change. If the bindings do not stabilize after 10 cycles, Angular gives up and you get an error. This magical limit of 10 is called the Time To Live, and you can even increase or decrease it if you want to.
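The digest cycle can be sketched in a few lines (a simplified model for illustration, not Angular's real implementation; the Watcher shape is made up):

```typescript
// A watcher wraps one binding: a getter plus the last value we saw.
type Watcher = { get: () => unknown; last?: unknown };

// Keep dirty checking every watcher until a full pass reports no changes,
// giving up after a Time To Live of 10 cycles.
function digest(watchers: Watcher[], ttl = 10): number {
  let cycles = 0;
  let dirty = true;
  while (dirty) {
    if (cycles >= ttl) {
      throw new Error(ttl + ' digest iterations reached. Aborting!');
    }
    cycles++;
    dirty = false;
    for (const w of watchers) {
      const value = w.get();
      if (value !== w.last) { // the dirty check
        w.last = value;
        dirty = true;         // something changed, so run another cycle
      }
    }
  }
  return cycles;
}

// Two bindings with a relationship: b is derived from a, so the system
// needs one extra cycle to confirm everything has stabilized.
let a = 1;
const state = { b: 0 };
const watchers: Watcher[] = [
  { get: () => a },
  { get: () => (state.b = a * 2) },
];
console.log(digest(watchers)); // -> 2
```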

When the digest cycle reports to the digest phase that it is complete and that the system is stable, only then will Angular re-render the views.

So to conclude: the digest cycle is a clever way for Angular 1.x to update your UI correctly without knowing what the 'relationships' between your bindings actually mean semantically.

What’s wrong with 1.x bindings?

The Angular 1.x way of resolving bindings, and their complex relationships, is a really great way to solve a very complex problem. However, there are three downsides to the 1.x approach: it can be expensive, it is unpredictable, and it can do unnecessary work.


Resolving bindings with complex relationships by checking them in a digest cycle can lead to suboptimal performance. If you have a very complex application with multiple complex relationships between bindings, it may take Angular 1.x’s digest cycle multiple loops before it can report, to the digest phase, that the system has stabilized.

In this sense resolving bindings can potentially become very expensive.


The system is also very unpredictable. If you gave me the following template:
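A template along these lines illustrates the point (hypothetical markup, reconstructed from the names used below):

```html
<div ng-controller="PersonController as ctrl">
  <person-view person="ctrl.person"></person-view>
</div>
```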

And you would ask me: "How do the bindings get resolved in this template?" I would not be able to give you an answer straight away. I would have to dive deep into the "PersonController" and into the "personView" directive before I could provide an answer; for instance, what type of binding is "person" on personView? My answer would depend on what type of binding it is, but even then I could not tell you how Angular would actually resolve the binding in the digest phase.

Basically, the Angular 1.x bindings system is not deterministic: if you gave it the same inputs, it might take a different route each time to get to the outcome. This property of the system makes it difficult to reason about an Angular 1.x application.



Bindings can sometimes even be unnecessary. Consider the following: what if you know that a data structure never changes? For example, what if you rendered a menu based on an array of strings, and you knew that there was no possible way that array would ever change? In other words, you know that you will never change the menu during the run of the application.

Doing dirty checking on such an immutable (never changing) data structure is a pure waste of time. There is no way to tell Angular 1.x that this structure should be exempted from the digest phase, so it gets evaluated each and every time.


Another situation can be that you have a component which only changes when a specific event occurs. In other words, the object will never change unless that particular event is fired. Again, in Angular 1.x there is no way to tell the system that such a component exists. The system will dirty check that component even though we humans know it is futile.

Bindings in Angular 2.0

Now that we know how Angular 1.x handles bindings, and we know some of its flaws, we can look at how Angular 2.0 mitigates these flaws and improves on the system. Before we can do that we must first look at the anatomy of an Angular 2.0 application, because last week we learned that Angular 2.0 will be a component-based framework. How does being component-based affect the binding system?

Anatomy of an Angular 2.0 application

Let's say we have an application that provides us with weather information for various cities in the world. The application looks something like this:


The application consists of a Grid of WeatherStations. Each station consists of a name, temperature, humidity and an icon telling the current state of the weather. You can favorite a weather station by clicking on the "star" icon. Above the Grid are two bars: a SearchBar in which the user can filter the stations based on their name, and a SegmentedButton with which the user can toggle between all stations and the user's favorite stations.

This application is written in Angular 2.0 and is therefore component-based. Components are composable, which means components can be nested inside each other. The weather app's structure looks something like this:


Each component is a direct or indirect descendant of one “root” component, in this case the WeatherApp component. This leads to an important realization: Angular 2.0 applications are trees!

What is so great about trees?

Trees are very easy to understand, because the relationships between components are instantly obvious. Compare the image of the relationships between bindings in Angular 1.x with the image of the weather app's tree: the relationships in Angular 1.x could quickly get out of control.

In Angular 2.0 there is only one type of relationship between two components: a component is either the parent of the other component, or a child of that component. In that relationship the parent component can send information down to the child component, and the child component can send events back to its parent.

The relationships between components in Angular 2.0 can be codified as follows:
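In template syntax this boils down to: data flows down through [property] bindings, and events flow up through (event) bindings. A hypothetical parent template could codify it like this:

```html
<!-- Data goes down to the child, events come back up to the parent -->
<child-component
  [data]="parentData"
  (changed)="onChildChanged($event)">
</child-component>
```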


Having this property makes it very easy to reason about components when you encounter them in a template:
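For example, the Grid's template might contain something like this (an illustrative sketch in the alpha-era syntax):

```html
<station
  *ng-for="#station of stations"
  [station]="station"
  (station-changed)="stationDidChange($event)">
</station>
```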

In the template it is immediately clear that the [station] binding comes from the parent, in this case from the Grid's "stations" property. It is also simple to deduce that the "(station-changed)" event calls "stationDidChange()" on the Grid component, because Grid is the parent of the Station component. In Angular 2.0 you can read a template and instantly understand the relationships between components.

The fact that Angular 2.0 applications are trees also influences the way the digest phase works. In 2.0 there is no more need for a digest cycle, because in order for Angular to resolve all the bindings it only needs to go from the top of the tree to the bottom of the tree once.

The reason for this is simple: a component can only receive data ([] bindings) from its parent component, ergo a child component can only evaluate its bindings when its parent's bindings are resolved. So a child must wait for its parent. A parent cannot receive data from its children, because bindings only go down; this means that child components' bindings cannot influence the parent. In other words, Angular only needs to reach the "bottom" of the tree and it is done.
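This single top-down pass can be sketched as a simple tree walk (a toy model mirroring the weather app's component tree, not Angular's actual code):

```typescript
// A component node knows only its children; bindings resolve parent-first.
type ComponentNode = { name: string; children: ComponentNode[] };

function resolve(node: ComponentNode, order: string[] = []): string[] {
  order.push(node.name);     // resolve the parent's bindings first...
  for (const child of node.children) {
    resolve(child, order);   // ...then each child, exactly once
  }
  return order;
}

const app: ComponentNode = {
  name: 'WeatherApp',
  children: [
    { name: 'SearchBar', children: [] },
    { name: 'SegmentedButton', children: [] },
    { name: 'Grid', children: [{ name: 'WeatherStation', children: [] }] },
  ],
};

console.log(resolve(app));
// -> ['WeatherApp', 'SearchBar', 'SegmentedButton', 'Grid', 'WeatherStation']
```

Every component is visited exactly once, parents before children: no loops, no stabilization, no TTL.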

Here you can see the contrast between Angular 1.x and Angular 2.0’s change detection visually:


I listed the downsides of Angular 1.x's bindings system as being expensive, unpredictable and unnecessary. Given what we now know about Angular 2.0, you can say its system is not expensive, because there is no digest cycle anymore that can behave suboptimally. The system is now predictable because the flow of data and events is clearly defined, so we humans can reason about it better. But what about "unnecessary": can we stop Angular 2.0 from doing things we know are not needed? Yes we can!

Change Detection

In Angular 2.0 we can take over the way Angular does change detection on a per component basis. This enables us to squeeze even more performance out of Angular 2.0 when it is absolutely needed.

By default Angular 2.0 will generate a change detector "class" for each component at runtime. So if you have a component called WeatherStation, Angular 2.0 will generate a WeatherStation_ChangeDetector class. This class reads the "meta" data about your component, so all inputs and outputs, and does the dirty checking. This is why you have to state all the inputs and outputs of your components.

For example if the WeatherStation only has a “temperature” property this class might look something like this:
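Here is a hypothetical sketch of the idea (the code Angular actually generates is different): the detector checks one concrete, known field against its previous value, nothing more:

```typescript
class WeatherStation {
  constructor(public temperature: number) {}
}

// Monomorphic change detector: it knows exactly which property to check,
// so the VM can optimize this hot path aggressively.
class WeatherStation_ChangeDetector {
  // NaN !== anything (including NaN), so the first check is always dirty.
  private lastTemperature: number = NaN;

  detectChanges(component: WeatherStation): boolean {
    const current = component.temperature;
    if (current !== this.lastTemperature) {
      this.lastTemperature = current;
      return true;  // dirty: the view must be updated
    }
    return false;   // clean: nothing to re-render
  }
}

const station = new WeatherStation(21);
const detector = new WeatherStation_ChangeDetector();
console.log(detector.detectChanges(station)); // -> true (first check)
console.log(detector.detectChanges(station)); // -> false (nothing changed)
```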

The reason Angular 2.0 generates such a specific “class” for every component is because JavaScript virtual machines (VM) can optimize the hell out of specific “code” way better than they can optimize “generic” code. In very technical terms VMs can optimize monomorphic code better than polymorphic code. Here is a great blogpost by Vyacheslav Egorov explaining why this is true. Angular 1.x could not be optimized well by VMs because it used a polymorphic checking algorithm.

Here’s the kicker: you can tell Angular that you want to implement the _ChangeDetector class yourself. This enables us to write immutable components and observable components. In fact the “immutable” behavior comes built in with Angular 2.0.

To make a component immutable you can do this:
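A sketch based on the alpha-era API (the exact names and import paths were still in flux at the time of writing):

```ts
import {Component, View} from 'angular2/angular2';
import {ON_PUSH} from 'angular2/change_detection';

@Component({
  selector: 'weather-station',
  changeDetection: ON_PUSH // only check when new bindings are pushed in
})
@View({
  template: '<span>{{ temperature }}</span>'
})
class WeatherStation {
  temperature: number;
}
```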

This means that the component will only run change detection when new bindings are pushed into it. So when the component never receives new bindings, it is never part of the digest phase, which means that Angular 2.0 will not waste time checking something that we know will not change.

The cool thing about being able to set the way change detection works per component is that you can mix and match various strategies. Parts of your application can be immutable, parts can be "default", and some parts can use whatever exotic strategy you come up with.

This gives us some powerful tools to prevent Angular from doing “unnecessary” things, which slow our apps down.

If you come out of this thinking you must declare your own "change detection" algorithm to get the performance boost Angular 2.0 promises, you are wrong. Just the fact that Angular 2.0 applications are trees gives them a performance boost on its own. You can steer clear of defining your own strategies and it will still be fast; the option is simply there when you need it.

Graph Time


By now you want to see a graph showing that Angular 2.0 is faster, so here you go:


The graph shows the performance of the same application written in various ways. On the left you see a red bar, which is a baseline application written in vanilla JavaScript. This baseline application is written in the most optimized (but ugly) JavaScript imaginable. It has zero levels of abstraction, and every level of abstraction comes with a price in the form of less speed, so it is the fastest way to write an HTML application. The Angular 2.0 team uses this baseline application to see how fast they can get.

The blue bar on the right represents Angular 1.3: it's 8.58 times slower than the baseline application. Next to that is Angular 2.0 in orange, which represents a "fresh" Angular 2.0 application. Next to that is a green bar which represents Angular 2.0 in a "hot" state, which means that it has cached some views.

As you can see, a "fresh" Angular 2.0 is 3 times faster than 1.x. What is even nicer is that the more you click through an Angular 2.0 application, the faster it becomes: at least two times faster. Angular 2.0 will provide view caching for you automatically.

Memory Pressure

The memory Angular 2.0 uses is also down dramatically, as seen in this graph:


Memory efficiency is increasingly important in the mobile world that we live in; mobile devices do not have as much memory as their desktop cousins. Angular 2.0 prides itself on being a mobile-first framework, so the team has to take memory consumption seriously.

The Angular 2.0 team announced at the Angular U conference’s keynote that they are not done optimizing yet! So expect even more speed when 2.0 is finally released.

Want to know more?

Victor Savkin, a core contributor to the Angular project, has some great blogposts about bindings. Or you can watch Victor explain it in a twenty-minute video.


To make Angular 2.0 faster than ever, the nature of an Angular application had to change from a cyclic graph to a tree. A cyclic graph is by nature very complex: anything can point to everything. A tree is nice and simple and only points down. Having a tree makes Angular 2.0 applications easier to reason about. The speed and memory pressure graphs speak for themselves; the Angular team has outdone itself, and they are not finished yet.

We have walked the Road to Angular 2.0 now and we have seen most areas in which Angular 2.0 is different from Angular 1.x. What we haven't talked about is how to cross the Rubicon ourselves: how do we migrate our own Angular 1.x applications to Angular 2.0? That is the topic of the final installment of this series.

The Road to Angular 2.0 part 4: Components


I gave a presentation at the GOTO conference in Amsterdam titled: The Road to Angular 2.0. In this presentation, I walked the Road to Angular 2.0 to figure out why Angular 2.0 was so different from 1.x.

This series of blogposts is a follow-up to that presentation.



Last week we took a look at TypeScript and how it is going to improve our productivity. That, combined with the posts about the new template syntax and about ES6, gives us a good perspective on how our Angular 2.0 applications are going to be written.

This week's post is about Components, and unlike the previous weeks' topics, components change the way we think about our Angular 2.0 applications, not just how we write Angular.

Let's start by looking at what a component actually is.

What is a component?

A component in Angular is defined as a class which has both a Controller and a View. In Angular 2.0 your entire application will consist of Components that work together and build on top of each other.

Officially an Angular 2.0 component is called a “Component Directive” but we will use the term “Component” because it is used more often.

This is what a component looks like:
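For reference, here is roughly what that looked like (a sketch in the alpha-era syntax; the greeting text and field value are made up):

```ts
import {Component, View} from 'angular2/angular2';

@Component({
  selector: 'person'
})
@View({
  template: '<h1>Hello {{ name }}!</h1>'
})
class PersonComponent {
  name: string;

  constructor() {
    this.name = 'World';
  }
}
```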

Surprise! You have already seen a complete component in last week's post about Types. A component is a Controller and View wrapped in a class, so what are the Controller and View in the example above? The answer is that @View is, perhaps unsurprisingly, the View. The Controller is the instance of the PersonComponent, with all the properties and methods that go with it.

A component also has a @Component annotation. This basically means that a class which has the @View and @Component annotations is a Component.

Input / Output

A component is very clearly defined in terms of input and output: every piece of input and output must be explicitly declared. If a component requires a directive, you must explicitly state so and provide that directive. If the component has a specific event you want the rest of the world to know of, you must explicitly declare that event. Here is how I visualize a component:


Let’s define what Input and Output mean in the context of Components.


Input

Let's say we have a component which uses the NgIf directive inside of its template:
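A sketch of such a component (alpha-era syntax; the GreeterComponent and its template are hypothetical, and import paths differed between alpha builds):

```ts
import {Component, View, NgIf} from 'angular2/angular2';

@Component({
  selector: 'greeter'
})
@View({
  template: '<p *ng-if="visible">Hello there!</p>',
  directives: [NgIf] // explicitly provide the directive the template uses
})
class GreeterComponent {
  visible: boolean = true;
}
```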

In order for the NgIf to work we must first import NgIf from angular2 itself. Then we must explicitly declare that the @View uses the NgIf directive by adding it to the “directives” property on the @View annotation.

Another example is if we have a component which has a property that can be bound to inside of a template, for example a "name" property on the person component. In order for this to work:
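Usage in a parent template might look like this (illustrative; "fullName" is a hypothetical parent property):

```html
<!-- The parent binds a value into the component's "name" property -->
<person [name]="fullName"></person>
```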

We must declare our component like this:
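A sketch of the declaration (alpha-era API; the annotation property names were still subject to change):

```ts
import {Component, View} from 'angular2/angular2';

@Component({
  selector: 'person',
  properties: ['name'] // expose "name" as a bindable HTML property
})
@View({
  template: '<h1>Hello {{ name }}!</h1>'
})
class PersonComponent {
  name: string;
}
```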

So in order for the component to have an HTML property, we must define it explicitly beforehand inside of the "properties" list.


Output

The output of a component, like its input, must be explicitly defined as well. For example, if we give our PersonComponent an "upvote" event, it can be used like this:
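Usage from a parent template might look like this (illustrative; "onUpvote" is a hypothetical handler on the parent):

```html
<!-- The parent subscribes to the component's "upvote" event -->
<person (upvote)="onUpvote()"></person>
```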

We must declare our component like this:
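A sketch of the declaration (alpha-era API; EventEmitter's exact shape was still in flux at the time):

```ts
import {Component, View, EventEmitter} from 'angular2/angular2';

@Component({
  selector: 'person',
  events: ['upvote'] // declare the event the outside world can listen to
})
@View({
  template: '<button (click)="upvote.next(null)">Upvote</button>'
})
class PersonComponent {
  upvote = new EventEmitter();
}
```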

We must add the "upvote" event to the "events" array within the @Component annotation.

Benefits of Components

What are the benefits of components as Angular 2.0 describes them?

The first benefit is that components are easy to reason about, because they are so strictly defined in terms of inputs and outputs. Just by reading the definition of a component it becomes clear what its dependencies are and which events you can subscribe to.

Having clearly defined components is also great for your text editor and IDE. They can read the definition of your component and provide you with better autocompletion. I can also imagine tooling that analyses your project and tells you which built-in directives you use. In fact, the tooling could use that information to strip Angular 2.0 down to the bare core that your application needs.

Another property that makes components great is that they are composable. You can view components as Lego blocks from which you can build more complex things, such as houses, and from houses you can then make an entire city, and so on. Imagine a trashcan button with an "are you sure" message; you can use it throughout many components in your application.

Components can also be reused quite easily, because they are isolated. Take the trashcan button: it can easily be copied from project A to project B. Figuring out what the dependencies of the trashcan button are is as simple as looking at its definition.

Origin Story

Every hero needs a good origin story. The origin of the Component Directive lies in two APIs from Angular 1.x: the directive API and the Controller API. These two APIs had some overlapping use cases:


Many of the use cases for the controller could be implemented with directives instead, and the inverse is also true: many of the use cases for a directive could be implemented with a controller. If you have ever taught Angular 1.x to someone you will often get the question: how do I decide when to use a controller or a directive? This question is very difficult to answer.

But as it turns out, the developers of Angular 1.x really wanted us to use directives a lot more than they wanted us to use controllers. I think the reason people gravitate to controllers is that most of us come from a traditional MVC background such as Spring MVC or Rails. This makes you naturally inclined to use “controllers”, since that is what you know best.

So to solve the problem of having two competing APIs and to guide people to using directives, they merged the two APIs into one API to rule them all: the Component Directive.

When you hear that the controller and directive APIs are dead, you now know that they live on in their love child: the Component Directive.


Components give us a fundamental new way to build Angular applications, in a composable and reusable way. Components will be the bread and butter of Angular 2.0 applications.

Next week we will look at “bindings” in Angular 2.0: how multiple components team up to form Angular 2.0 applications, and how information and events flow between components. Then you will understand why Angular 2.0 is 3x to 10x faster than Angular 1.x!

The Road to Angular 2.0 part 3: Types


I gave a presentation at the GOTO conference in Amsterdam titled: The Road to Angular 2.0. In this presentation, I walked the Road to Angular 2.0 to figure out why Angular 2.0 was so different from 1.x. This series of blogposts is a follow up to that presentation.


Last week we looked at ES6, the next version of JavaScript, and how it is going to change the way we write our Angular 2.0 code. However, ES6 was not enough for the Angular 2.0 team: they wanted to add types and annotations to JavaScript. So the Angular team created their own language, called AtScript, which included types and annotations and compiled down to JavaScript. Meanwhile Microsoft was also working on a language with types which transpiles back to ES5. That language is called TypeScript, and it has been in development since 2012. The only thing TypeScript missed, according to the Angular 2.0 team, was annotations. So the two teams got together and the Angular team convinced the TypeScript folks to add annotations. Now there was no more need for AtScript, and it was abandoned in favor of TypeScript. Why create your own language when there is already a better alternative?


TypeScript is a superset of JavaScript: all valid ES5 is valid TypeScript code. This means you can take the JS you write today, paste it into a TypeScript file, and it will just work. Of course TypeScript also adds functionality, such as types and annotations, that has no equivalent in JavaScript, so not all valid TypeScript is valid JavaScript. Visually TypeScript looks something like this:


TypeScript is a superset of ES5 and it wraps ES6, so you can use all ES6 features; on top of that it adds types and annotations.


Static Types

In TypeScript you can add static types to JavaScript code. Consider the following example:
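A minimal sketch of such a greeter:

```typescript
// Both the "name" parameter and the return value are typed as string.
function greeter(name: string): string {
  return 'Hello, ' + name;
}

greeter('world');  // OK
// greeter(42);    // compile error: number is not assignable to parameter of type 'string'
```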

As you can see, the type of the ‘name’ parameter of the greeter function is string, and the return value of the greeter function is a string as well. Trying to give a number to the greeter function results in a type error at compile time.

Types are not limited to primitives; you can also use ‘classes’ as types:
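For instance (the class and names below are illustrative):

```typescript
class Person {
  constructor(public name: string) {}
}

// The array only accepts Person instances.
const persons: Person[] = [];
persons.push(new Person('Alice'));
// persons.push('just a name');  // compile error: string is not a Person
```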

In the above example you can see how the array named ‘persons’ only accepts objects of type ‘Person’. This is basically how generics work in C#, Java and other statically typed languages.


Annotations

One of the reasons for the Angular team to stop working on AtScript is that TypeScript 1.5 promised to include annotations. Let's dive into what annotations can do by looking at some Angular 2.0 code:
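A sketch of an alpha-era component definition (the selector and template are assumptions):

```typescript
import {Component, View} from 'angular2/angular2';

@Component({
  selector: 'person'
})
@View({
  template: '<p>{{ name }}</p>'
})
class PersonComponent {
  name: string = 'Alice';
}
```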

Annotations are always prefixed with the @ symbol. This means that in the code above there are two annotations: @Component and @View. Note that you can also define your own annotations if you want to; @Component and @View are not built into TypeScript, they were created by the Angular team. What an annotation does is decorate a class with extra functionality in a very succinct way. With very little code you can add a great deal of functionality. Let's look at the @Component and @View annotations from the example above to demonstrate this.


@Component

@Component tells Angular how it should recognize a component; in this case, that it should recognize a PersonComponent whenever it sees an HTML element called person. So if you have the following code inside of an HTML template:

Angular will instantiate a PersonComponent.


@View

The @View annotation tells Angular what the template for a particular component is. In the case of the code snippet that defined a PersonComponent, the template is an HTML paragraph (<p>) with a binding to ‘name’. Note that you could also put your template in a separate file and use templateUrl to retrieve it, just like you could in Angular 1.x.

Benefits of Types

We have now seen some TypeScript in action, including static types and annotations. But what makes ‘types’ so great? After all, we have been using JavaScript for years without them. So why have types at all?

IDE and text editors love types

The more static information you provide an IDE the better it can help you write your code. Static types enable better autocompletion, better refactoring support and better code navigation. For example: when your editor sees you writing a function and it knows that it takes two numbers and returns a string, it can show a popup window with that information:


For the TypeScript team this extra productivity gain is very important. They even provide ways to autocomplete code that was not written in TypeScript, by creating files that annotate other open source libraries or frameworks with TypeScript interfaces. These files have the .d.ts extension, where ts stands for TypeScript and d for definition, and can be used to make your life easier. There is even a GitHub repository, DefinitelyTyped, with high quality .d.ts files.

Types help you show your intent

Types are not only useful for IDEs; you, the human programmer, benefit from them as well. Having type information makes it easier to reason about other people's code, and even your own code three months down the line.

Is TypeScript required?

TypeScript is not required by Angular 2.0: you can still write ES5 or ES6, or even Dart, and never use TypeScript at all. This is also what the official docs say; in fact they show ES5 examples next to every TypeScript example. That being said, I think writing ES5 or ES6 is not going to be feasible, because every tutorial on Angular 2.0 is simply going to assume you use TypeScript. So if you insist on using ES5 or ES6, you will constantly have to rewrite TypeScript examples from the web back to ES5 or ES6 yourself.

I think it is best that you bite the bullet and use TypeScript. But since TypeScript is a superset of JavaScript you can choose when to use TypeScript and when to use pure JavaScript. You can mix and match as you please, this is especially handy when migrating from Angular 1.x to 2.0, but more on that in a later blog post.

In what language is Angular 2.0 written?

Ever wonder why Angular 1.x is called AngularJS and Angular 2.0 is just Angular 2.0, sans the JS? That is because Angular is no longer just a JavaScript framework; instead it supports multiple languages. Angular 2.0 will support ES6, ES5, TypeScript and Dart. Dart is a language by Google that was supposed to supplant JavaScript as the scripting language of the browser. Recently Google announced that they will not add Dart to Chrome but will transpile Dart to JavaScript instead. So what do they write Angular 2.0 itself in? The answer is TypeScript, but they have Dart and JS facades that help compile Angular 2.0 to JS and Dart versions. Here is an infographic from the Angular 2.0 team that shows how that works:

Angular 2.0 pipeline

You can read the graph as follows: Angular 2.0 is programmed in TypeScript (leftmost) and there are two facades, one for JavaScript and one for Dart. The purpose of these facades is to make it possible to write idiomatic APIs for both JS and Dart. This means that both languages get Angular's API in a form that is best suited for that language. From there the Traceur compiler outputs the JavaScript and Dart versions of the framework.

When you write Angular 2.0 in Dart you write your application with Dart, using the Dart Angular API facade. This is what the two yellow ‘Dart’ blocks represent in the lower part of the graph.

When you choose to write your Angular 2.0 code in JavaScript you can choose between ES5, ES6 and TypeScript. But you will use the JavaScript API for all three of them. This is what the blue ‘JS’ part in the top part of the graph represents.

One CLI (hopefully) to rule them all

A colleague of mine asked me while he was reviewing this blogpost: “What do I have to do to use TypeScript in Angular 2.0?” Which is a valid question: after all, we have seen these complex graphs with all these facades, but we have no idea how to use any of it in our projects.

The answer is that there is no answer yet on how to best build an Angular 2.0 project. But there is hope: the Angular team got together with the React team to discuss common ground, and in the notes Igor from the Angular team discusses the need for a Command Line Interface (CLI). He states that Angular is building a CLI that will, and I quote:

  1. Scaffold.
  2. Skeleton files
  3. Set up build
  4. Set up testing environment

By “Scaffold” I think Igor means generating entire base Angular projects, and by “Skeleton files” generating very specific files such as unit tests, e2e tests and services. “Set up build” probably means setting up TypeScript, Dart, ES5 or ES6, depending on the language that you choose. “Set up testing environment” means that it will set up Karma and Protractor for unit and e2e tests.

The Angular 2.0 team took a page from the Ember playbook, because Ember has had a CLI for quite some time. The effect of having a CLI with first class support, created by the Ember team itself, is that every Ember application out there uses the same infrastructure. Plus the Ember build system supports plugins, built on top of the ‘default’ CLI. This makes for a very powerful, standardised way to build Ember applications. Having a big community that uses the same tools makes those tools better.

From the meeting notes it is clear that the Angular team is working with the Ember guys to kickstart their own CLI:

We’re working with the Ember CLI team who are extracting reusable bits.  Working with Joe from broccoli and reusing those bits. Current changing the Angular build from gulp to broccoli. Working with the NPM team on package management and resolution. The package managers that exist today aren’t good, but NPM is the closest of all of them.

How this CLI will work exactly is currently still unknown; when there is more information available, expect an update from me. That being said, I think the Angular CLI is a very positive development for us, the Angular community.

Want to know more?

TypeScript adds a lot more functionality on top of JavaScript which I have not covered in this blogpost. Here are some TypeScript resources:


TypeScript was included in Angular 2.0 to allow us to statically define types, which helps us write more readable code. TypeScript also includes annotations, which allow us to write very little code but achieve much. It gives our IDEs type information to help us be more productive by providing better autocompletion.

So even though you are not forced to use TypeScript I definitely recommend that you do.

We’ve also seen that Angular 2.0 is no longer a pure “JavaScript” framework but that it supports multiple languages: JavaScript (ES6, and ES5), TypeScript and Dart. The new “CLI” will hopefully make it easy to setup Angular 2.0 projects in a way that the whole Angular community can benefit from it.

In previous weeks and this week we have been looking at some mechanical changes in Angular: things that simply change the way we write Angular. Next week we are going to look at components, which will change the way we think about Angular.

The Road to Angular 2.0 part 2: ES6


I gave a presentation at the GOTO conference in Amsterdam titled: The Road to Angular 2.0. In this presentation, I walked the Road to Angular 2.0 to figure out why Angular 2.0 was so different from 1.x.

This series of blogposts is a follow up to that presentation.



Last week we discussed the new template syntax in Angular 2.0. This week it is time to discuss ES6 and how it affects Angular 2.0.

ECMAScript 6 is the next version of JavaScript. The specs have been frozen and now it is up to the browser vendors to implement them. ES6 brings us some exciting new features. Let's take a whirlwind tour and look at some of them.

Whirlwind tour

Fat Arrows

JavaScript is becoming more ‘functional’ with each iteration. ES5 added map, reduce, filter and more. These functions take other functions as arguments, and the functions that are passed in as arguments become less readable when they are inlined. For example, this is quite verbose:
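For instance, doubling an array of numbers in ES5 style:

```javascript
var numbers = [1, 2, 3];
// An inline anonymous function: a lot of ceremony for one expression.
var doubled = numbers.map(function (n) {
  return n * 2;
});
console.log(doubled); // [2, 4, 6]
```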

With ES6's ‘fat arrow’ notation, writing lambda expressions (anonymous functions) becomes really easy:
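The same operation with a fat arrow:

```javascript
const numbers = [1, 2, 3];
const doubled = numbers.map(n => n * 2);
console.log(doubled); // [2, 4, 6]
```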

The fat arrow was created to allow us to write really short function definitions. Let's break down another example:
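For example:

```javascript
// One parameter, one expression; the expression is implicitly returned.
const square = x => x * x;
console.log(square(4)); // 16
```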

Here you can see the function ‘square’ being defined as: x => x * x. What this says is: define a function with one parameter called x, which evaluates to x * x. The value of the expression is implicitly the return value, so there is no need for a return statement.

You can also define functions which take multiple parameters like so:
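For example:

```javascript
// Multiple parameters must be wrapped in parentheses.
const add = (a, b) => a + b;
console.log(add(2, 3)); // 5
```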

When creating a function with multiple parameters you must define them within parentheses.

You can also have multiple statements within a fat arrow by using brackets:
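For example:

```javascript
// With brackets you get a full function body; you must return explicitly.
const greet = name => {
  const message = 'Hello ' + name + '!';
  return message.toUpperCase();
};
console.log(greet('world')); // HELLO WORLD!
```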

The fat arrow also has one other nice property: it doesn’t change the ‘this’ context. Compare and contrast the following ES5 and ES6 code:
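A sketch of the comparison (the object ‘bert’ and the helper array are illustrative; the results are collected in an array instead of printed, to keep the example checkable):

```javascript
var logged = [];

var bert = {
  name: 'bert',
  sayNameEs5: function () {
    var self = this; // ES5: capture 'this' for later use
    setTimeout(function () {
      logged.push(self.name); // plain function: 'this' is no longer bert here
    }, 0);
  },
  sayNameEs6: function () {
    setTimeout(() => {
      logged.push(this.name); // fat arrow: 'this' is still bert
    }, 0);
  }
};

bert.sayNameEs5();
bert.sayNameEs6();
```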

setTimeout normally changes the ‘this’ context, which is why in ES5 you often bind ‘this’ to some variable for later use; in the example this variable was called ‘self’. The fat arrow keeps the outer context of where it was defined, so in the example above ‘bert’ would be ‘this’. This makes ‘this’ behave a little more as you would expect. For more info, see the MDN documentation on arrow functions.


Const

In ES6 you can define constants which cannot be reassigned via the ‘const’ keyword:
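For example:

```javascript
const MAX_USERS = 100;
// MAX_USERS = 200; // TypeError: Assignment to constant variable.
console.log(MAX_USERS); // 100
```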

Constants cannot be reassigned, but the values they hold can still be mutated:
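For example:

```javascript
const person = { name: 'bert' };
person.name = 'ernie';    // allowed: the object itself can be mutated
// person = {};           // TypeError: the binding cannot be reassigned
console.log(person.name); // ernie
```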

Constants are lexically scoped:
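For example:

```javascript
function greet() {
  const NAME = 'bert'; // NAME only exists inside greet
  return 'Hello ' + NAME + '!';
}

console.log(greet());     // Hello bert!
console.log(typeof NAME); // undefined
```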

NAME is created inside the scope of the greet function; outside of the greet function NAME is not defined.


Let

‘let’ is a lot like ‘var’ except it is scoped to the nearest enclosing block. For example:
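For example:

```javascript
if (true) {
  let x = 10;
  console.log(x); // 10
}
// console.log(x); // ReferenceError: x is not defined outside the block
```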

Here you can see that ‘x’ is only available inside the ‘if’ block, because that is where ‘x’ was defined. If x had been defined with a var, however, the number 10 would have been printed twice. So ‘let’ allows you to scope variables more tightly.

However let definitions are accessible in child scopes:
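For example:

```javascript
let x = 10;
if (true) {
  console.log(x); // 10 — the outer 'x' is visible in the child scope
}
```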

Redefining a let in a child scope does not affect the outer scope’s let definition, because a let is defined per scope, for example:
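For example:

```javascript
let x = 'hello';
{
  let x = 'goodbye'; // a new 'x', scoped to this block
  console.log(x);    // goodbye
}
console.log(x);      // hello — the outer 'x' is unaffected
```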

When you try to redefine a ‘let’ in a child scope by using the ‘let’ from the parent scope you get a ReferenceError:
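For example (the error is caught here so it can be inspected):

```javascript
let x = 'hello';
let error;
try {
  {
    // The right-hand 'x' refers to the inner, still-uninitialized 'x'.
    let x = x + ' world!';
  }
} catch (e) {
  error = e;
}
console.log(error.name); // ReferenceError
```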

This can be explained because in the statement let x = x + ‘ world!’ the second ‘x’ refers to the ‘let x’ of that very statement, and not the let x = ‘hello’ in the scope above. Within x + ‘ world!’ the let is still uninitialized, which causes the error.


Destructuring

Destructuring makes it easy to get values from complex objects and assign them to variables.

For example to get certain properties from an object and assign them:
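For instance:

```javascript
const person = { name: 'bert', age: 42, city: 'Amsterdam' };
// Pull 'name' and 'age' out of the object in one statement.
const { name, age } = person;
console.log(name, age); // bert 42
```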

You can do the same thing for ‘positions’ in an array:
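For instance:

```javascript
const position = [10, 20];
// The first element goes into x, the second into y.
const [x, y] = position;
console.log(x, y); // 10 20
```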

You can use destructuring on a function's parameters too:
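A sketch of such a moveBy function (the exact parameter shapes are assumptions):

```javascript
// The first parameter is destructured as an object, the second as an array.
function moveBy({ x, y }, [dx, dy]) {
  return { x: x + dx, y: y + dy };
}

console.log(moveBy({ x: 1, y: 1 }, [2, 3])); // { x: 3, y: 4 }
```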

In the example above you can see both array and object destructuring happen in the moveBy function. What makes destructuring powerful is that it allows you to program to the ‘shape’ of the data structure. One thing about destructuring objects is that you can name the binding whatever you want. For instance, you could rewrite moveBy to this:
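A sketch of the renamed version:

```javascript
// {x: oldX} means: take the key 'x' and bind it to the name oldX.
function moveBy({ x: oldX, y: oldY }, [dx, dy]) {
  return { x: oldX + dx, y: oldY + dy };
}

console.log(moveBy({ x: 1, y: 1 }, [2, 3])); // { x: 3, y: 4 }
```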

Whatever the ‘value’ of the key is becomes the binding for the variable in the function. So what {x: oldX} says is: there is a key called x in the first parameter and I want to name it oldX.


Classes

JavaScript has prototypal inheritance, which makes it stand out from languages that use the more traditional classical inheritance, such as C++, Java, Ruby, Python, C# and Objective-C. People coming from those languages would often create libraries that use JavaScript's prototypal inheritance to mimic the more traditional classical inheritance.

ES6 gives us some syntactic sugar to make the more traditional classical inheritance possible, without having to use a library. It is important to note that behind the scenes ‘classes’ are still implemented using prototypal inheritance. Here is an example:
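A sketch of a Living/Human class pair (the class and member names are assumptions based on the notes below):

```javascript
class Living {
  constructor() {
    this._alive = true;
  }
  get isAlive() {    // computed property: human.isAlive, no parentheses
    return this._alive;
  }
  set alive(value) { // setter: human.alive = false works via assignment
    this._alive = value;
  }
}

class Human extends Living {
  constructor(name) {
    super();         // call the parent (Living) constructor
    this.name = name;
  }
}

const human = new Human('bert');
console.log(human.isAlive); // true
human.alive = false;
console.log(human.isAlive); // false
```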

  • The function ‘constructor’ is the constructor for that class; you cannot add multiple constructors via method overloading.
  • ‘super’ is used to call the parent constructor, in Human's case that is Living.
  • The ‘get’ before isAlive means isAlive is a computed property. This makes this.isAlive possible without parentheses.
  • The ‘set’ before alive means you can set the value via assignment. This makes human.alive = false possible.
  • You can only extend one class at a time; multiple inheritance is not possible.


Generators

Generators are complex creatures that allow for some pretty awesome functionality. I doubt that you will ever need to write a generator yourself, but framework creators can use them to make your life easier.

So what is a generator? A generator is a function that can be paused mid-execution to give or receive values. It does so via the ‘yield’ keyword. Let's look at a simple generator:
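For example:

```javascript
function* threeCounter() {
  yield 1;
  yield 2;
  yield 3;
}

const counter = threeCounter();
console.log(counter.next()); // { value: 1, done: false }
console.log(counter.next()); // { value: 2, done: false }
console.log(counter.next()); // { value: 3, done: false }
console.log(counter.next()); // { value: undefined, done: true }
```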

In the example above we define a generator called threeCounter; it will give a number each time it is asked, and after it has been asked three times it is done. When you call next() you are given an object with two properties: value and done. value is what the generator yielded, and done is a boolean which says whether the generator is finished.

You can instantiate a generator as many times as you want:
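For example:

```javascript
function* threeCounter() {
  yield 1;
  yield 2;
  yield 3;
}

const first = threeCounter();
const second = threeCounter();
console.log(first.next().value);  // 1
console.log(first.next().value);  // 2
console.log(second.next().value); // 1 — 'second' is independent of 'first'
```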

Each generator you create acts independently from other generators of the same type. I would like to say that calling a generator function creates “instances” of that generator, like calling new on a class would. Perhaps it would have been better if generators were created with the ‘new’ keyword as well.

A generator is also an iterable; this means we can use it inside for...of loops:
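For example:

```javascript
function* threeCounter() {
  yield 1;
  yield 2;
  yield 3;
}

for (const number of threeCounter()) {
  console.log(number); // 1, then 2, then 3
}
```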

You can make generators that never stop providing values. For instance, here is a generator which creates class names for zebra-striped tables:
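A sketch (the color names are assumptions):

```javascript
function* zebraGenerator() {
  while (true) {
    yield 'white';
    yield 'black';
  }
}

const zebra = zebraGenerator();
console.log(zebra.next().value); // white
console.log(zebra.next().value); // black
console.log(zebra.next().value); // white
```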

So even though the zebraGenerator has a while(true), it doesn't run in an infinite loop; it pauses at each yield and provides the caller with a color.

We've seen how we can get values from a generator, but we can also provide generators with values:
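A sketch of a printer generator that receives values through next() (the names are assumptions matching the description below):

```javascript
function* printer(title) {
  console.log('== ' + title + ' =='); // the title parameter makes a nice header
  while (true) {
    const value = yield;              // pause until a value is sent in
    console.log(title + ': ' + value);
  }
}

const massivePrinter = printer('massiveCalculation');
massivePrinter.next();     // run up to the first yield (prints the header)
massivePrinter.next(42);   // massiveCalculation: 42
massivePrinter.next(1337); // massiveCalculation: 1337
```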

I know this example above is kind of contrived, but it demonstrates how to send values from the outside to the generator by passing them to next(). You can also see that you can pass parameters to the generator function itself. In the above example I gave the string ‘massiveCalculation’ as a parameter, so the printer could make a nice header.

Passing values to generators is typically something library creators use to make our lives easier. For example:

This is from a library called js-csp, with which you can create Go-like channels. In the example above a channel for ‘mousemove’ events is created, and it is consumed using yield to print the location of the mouse. With channels you can implement consumer and producer patterns to manage asynchronous events.

Another cool example uses generators to make asynchronous code look like synchronous code:
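To show the idea without pulling in the real library, here is a tiny co-style runner (a simplified stand-in for Co, not its actual implementation) driving a generator that yields promises:

```javascript
// Minimal co-style runner: resume the generator whenever a yielded
// promise resolves, feeding the resolved value back in.
function co(generatorFn) {
  const gen = generatorFn();
  function step(input) {
    const { value, done } = gen.next(input);
    if (done) return Promise.resolve(value);
    return Promise.resolve(value).then(step);
  }
  return step();
}

co(function* () {
  const a = yield Promise.resolve(1);
  const b = yield Promise.resolve(2);
  console.log(a + b); // reads like synchronous code, no .then chains
});
```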

This “co” function comes from the Co library; it lets you yield promises to “co” so it can handle the asynchronous parts of the code. It will resume running the code once all promises are resolved, so you don't have to write the then or error functions. This makes the code look synchronous, which makes it easier to understand.

Here is a really exhaustive look at generators from ES6 guru Dr. Axel Rauschmayer.

Of course Co is just a bridge until ES7's ‘await’ syntax arrives!


Modules

So there is a lot of cool new stuff in ES6, but there is still one problem: how are you going to share all the classes, generators, and variables you have made? Until a couple of years ago the most common way was to give people a JS file and namespace your code, something like this:
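A sketch of the pattern (the $ name and the API are illustrative):

```javascript
// The old way: an immediately invoked function with one global export.
var $ = (function () {
  var privateCounter = 0; // private: not visible outside

  function increment() {
    privateCounter += 1;
    return privateCounter;
  }

  // The public API, exposed on a single global variable.
  return { increment: increment };
})();

console.log($.increment()); // 1
console.log($.increment()); // 2
```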

This way you had private variables and created an API you exposed to some global variable. There are many downsides to this approach:

  • Name clashes if some other library also uses the $ sign.
  • Cannot import specific functions, you must take everything.
  • Cannot load modules programmatically / lazily.

Luckily ES6 has added support for creating modules. Let’s define an ES6 module:
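A sketch of a module file (the file name and exports are assumptions):

```javascript
// math.js — an ES6 module
export const PI = 3.141592;

export function square(x) {
  return x * x;
}

// A module can also have one default export.
export default function add(a, b) {
  return a + b;
}
```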

We can then import the module above in a couple of ways:
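For instance, assuming the math.js module sketched above:

```javascript
// main.js — various ways to import from math.js
import add from './math';              // the default export
import { square, PI } from './math';   // named exports
import { square as sq } from './math'; // rename to avoid clashes
import * as math from './math';        // everything under one namespace

console.log(add(1, 2)); // 3
console.log(sq(3));     // 9
console.log(math.PI);
```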

The examples above show how versatile the new import syntax is. It is easy to prevent name clashes because there are so many ways to rename imports.

Want to know more?

Here's a list of resources with even more examples; I recommend going through the first two: (an exhaustive look at generators) (technically ES7 but it is too awesome to ignore)

ES6 and Angular 2.0

By now you have a pretty good idea of some of the features that ES6 adds to JavaScript. So what does it have to do with Angular 2.0?

The first thing is that Angular 2.0 will use classes a lot more instead of functions. Everything from directives to services will be classes in 2.0.

But the most important thing is that 2.0 uses ES6's module system instead of the custom module system that 1.x had. This greatly affects the way we write the JavaScript part of our Angular 2.0 code.

Sneak peek

Here is a small example of how you would use modules in Angular 2.0:
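A sketch using the alpha-era import paths (the selector and template are assumptions):

```typescript
// Angular's pieces are imported like any other ES6 module.
import {Component, View, bootstrap} from 'angular2/angular2';

@Component({ selector: 'app' })
@View({ template: '<h1>Hello Angular 2.0</h1>' })
class App {}

bootstrap(App);
```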

Angular 1.x’s module system

So what was wrong with the 1.x module system? Let's look at an example:
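A sketch of the 1.x module definition described below (the factory body is an illustrative assumption):

```javascript
// The string-based Angular 1.x module system.
angular.module('users', [])
  .factory('userFactory', ['$http', function ($http) {
    return {
      getUsers: function () {
        return $http.get('/users');
      }
    };
  }]);
```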

In the module definition above we see a factory called “userFactory” being assigned to the “users” module. The “userFactory” has a dependency on the $http service that Angular 1.x provides.

The first downside to the Angular 1.x module system is that it is string based. This makes the module system brittle: one spelling mistake and the whole thing falls down like a house of cards.

The second downside is that in order to survive minification you must declare all dependencies as strings inside of an array. This is why ‘$http’ is declared inside the array as a string, and as $http, the variable, in the function. You can use ngAnnotate so you don't have to write this code manually, but it is still a hassle.

The third, and most important, downside is that Angular 1.x modules only work inside the Angular world. If you have found a great library that was written in pure JavaScript without Angular in mind, you must jump through hoops to get it working inside Angular. The same is also true in reverse: if you have a great Angular module and you want to use it outside of Angular, you are going to have to rewrite the code.


By embracing ES6 and its module system it will become much easier to use existing non Angular JavaScript code in an Angular project, and vice versa.

This is true not only for Angular but for other frameworks as well, such as Ember, React and Knockout. Sharing code between frameworks is going to be easier than ever before. ES6 modules will act as a bridge between frameworks and the greater JavaScript world.

I hope that the ES6 modules system will unite the JavaScript community.

So in conclusion: when you hear about the death of the Angular 1.x module system, that's a good thing. We are getting a great alternative, ES6 modules, in return.

Next week we will look at Types, and why the Angular team thought ES6 alone was not enough!

The Road to Angular 2.0 part 1: Template Syntax


A couple of weeks ago I gave a presentation at the GOTO conference in Amsterdam titled: The Road to Angular 2.0. In this presentation, I walked the Road to Angular 2.0 to figure out why Angular 2.0 was so different from 1.x.

This series of blogposts is a follow up to that presentation.

The Road

When the first details about Angular 2.0 emerged, my initial response was: “Wait, what?!” So many things will change from version 1.x to 2.0, is it even Angular?

So I started digging through design documents, meeting notes, blogposts, and watched ng-conf videos. I quickly discovered a theme: The web will fundamentally change and Angular must evolve with it.

Web Components are coming, ES6 is around the corner, TypeScript was invented. This series of blog posts takes you through these new innovations and shows you how they have influenced Angular 2.0’s design.

I like to visualise all of the changes from Angular 1.x to 2.0 as a road. On this road we will come past various places that represent changes to Angular 1.x. Throughout this series of blog posts we will visit each of these places, and dive into how and why they have influenced Angular 2.0’s design. Here is the Road to Angular 2.0:


Template Syntax

Angular 2.0 has a new template syntax, which is radically different from 1.x. The changes made to the template syntax caused a strong negative reaction in the Angular community. It seemed like the Angular team changed the heart of Angular for no good reason.

Now it seems we must learn Angular 2.0 all over again. But fear not, once you understand how the new template syntax works, and you know the reasoning behind it, it will make sense.

The new binding syntax

Let’s look at the differences in the bindings syntax between the two versions of Angular:

Angular 1.x
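A sketch of the 1.x version (the bound names are illustrative):

```html
<span>{{ user.name }}</span>
<img ng-src="{{ user.imageUrl }}">
<button ng-click="upvote()">Upvote</button>
```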

Angular 2.0
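A sketch of the 2.0 version (the bound names are illustrative):

```html
<span>{{ user.name }}</span>
<img [src]="user.imageUrl">
<button (click)="upvote()">Upvote</button>
```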

The example above shows a username, an image and an up-vote button.

The first thing to note is that the first line is exactly the same in 1.x and 2.0. String interpolation is here to stay, so at least that part is still familiar.

The second line of code shows us the first difference in the new template syntax: instead of the ng-src directive, in Angular 2.0 we see [src]. The brackets represent a binding to a property; this means that when the value changes in the “controller”, the value is updated in the view as well.

The third line shows the new event syntax: whereas we used the special ng-click directive in 1.x, we simply surround the event with parentheses in 2.0.

Why change the binding syntax?

The main reason the syntax changed is unification. Let's look at the following line of 1.x template code:
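A sketch (the expression is illustrative):

```html
<img ng-src="{{ user.imageUrl }}">
```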

In this code we use ng-src to fetch an image. If you take a step back from Angular and look at the code as a novice who doesn't know Angular, you could ask: why not simply write:
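That is, the plain HTML version:

```html
<img src="{{ user.imageUrl }}">
```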

The reason of course is that the browser will try to fetch: {{ user.imageUrl }} from the server. This is because the browser doesn’t understand Angular’s string interpolation syntax.

So in Angular 1.x the team worked around this by introducing ng-src. The browser doesn't recognize ng-src as the property that holds the URL of the image, so it leaves it alone. Angular can then, under the hood, write the actual “src” property once the binding can be resolved.

In Angular 1.x, ng-src is not the only directive that does this; in fact there are many more: ng-blur, ng-click, ng-hide, ng-show, ng-disabled, ng-selected. All of these directives were made so that Angular doesn't get in the browser's way and vice versa. So for each property the browser has, a corresponding Angular directive exists.

Why is this so bad? Let's say, for example, that tomorrow all browsers include the following way to include HD images:
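A sketch of such a hypothetical property:

```html
<img src-hd="image-hd.jpg">
```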

What does Angular 1.x have to do to make that work? Write a specialized directive of course! In an ideal world Angular would work with new HTML properties out of the box, without having to change Angular’s code.

In Angular 2.0 the core team decided to tackle this problem at the root, by making one unified syntax for all properties. That's where the brackets and parentheses come from. So looking at the following line of code again:
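The 2.0 version (expression illustrative):

```html
<img [src]="user.imageUrl">
```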

I would like to read this as: create a property called “src” with the value of the expression, and update it whenever the value changes. The part between the brackets, “src”, is just the name of the property Angular 2.0 must render on the HTML element.

So if “src-hd” was introduced tomorrow, I could write this in Angular 2.0:
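A sketch of the hypothetical binding:

```html
<img [src-hd]="user.imageUrl">
```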

The best part is that, unlike Angular 1.x, Angular 2.0 would not have to be updated itself. So Angular 2.0’s template syntax unifies all of the built in directives from Angular 1.x into one syntax.

Benefits of the new binding syntax

The first benefit, as you have already read, is that Angular 2.0's template syntax is more future proof than Angular 1.x's.

The new syntax is also easier to learn for beginners. If you already know HTML, work as a web designer, and suddenly are dropped into an Angular 2.0 project, you simply need to learn to write square brackets around HTML properties you already know. There is no more need to learn all of these special cases such as ng-src. The new syntax is simply closer to HTML than before.

Another benefit of the new syntax is that it is easier to reason about. What I mean by reasoning is that it is easier to understand a template just by reading it. For example what does “selected” do in this Angular 1.x directive?
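A sketch of the ambiguous 1.x usage:

```html
<google-map selected="markers()"></google-map>
```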

It could mean one of the following things:

  1. It selects a certain marker based on the outcome of function “markers”.
  2. It is an event that executes a callback to “markers” whenever a marker is selected.
  3. The “selected” property is a two way binding that changes through the “markers” function.

In order to know which one of the above answers is correct you would have to read the definition of the google-map directive.

If this was an Angular 2.0 template:
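The 2.0 sketch:

```html
<google-map (selected)="markers()"></google-map>
```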

Now it is immediately clear that “selected” is an event, because of the parentheses. You do not have to read all of the surrounding code to understand what something does.

Local Variables

Angular 2.0 templates bring us a new feature that was not present in Angular 1.x. This feature is called local variables; it allows us to create variables that are only available in a specific template.

The reason for wanting to create variables that are only visible in your template is so you can create multiple templates for the same “controller”. Imagine if you had to make a page with a YouTube player component that needs to work on mobile and desktop. You discover two great Web Components: one that works great on desktop and another that works great on mobile devices. So you create two templates: one for desktop and one for mobile. But now you might need two controllers, because you have two different YouTube components, right? The answer is no, because Angular 2.0 allows you to create ‘variables’ directly in your template.

The syntax for creating a local variable is simply a hash sign (#) followed by a name.

Let’s look at a fictitious example of the mobile template:
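A sketch of what such a mobile template could look like (the <mobile-youtube> component and its playVideo() method are hypothetical):

```html
<!-- hypothetical mobile Web Component; #player makes the element
     available as a local variable inside this template -->
<mobile-youtube #player></mobile-youtube>
<button (click)="player.playVideo()">Play</button>
```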

Now take a look at the desktop version of the template:
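A matching sketch of the desktop template (the <desktop-youtube> component is again hypothetical; note its API differs from the mobile one):

```html
<!-- hypothetical desktop Web Component with a different API: play()
     instead of playVideo() -->
<desktop-youtube #player></desktop-youtube>
<button (click)="player.play()">Play</button>
```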

In both cases you can see that we define a local variable called #player, which we use to reference the ‘player’ HTML elements. In the play buttons we can then reference “player” in the (click) event to start a video. Note that the API to start a video differs between the desktop and mobile versions. So even though the APIs are different, we did not have to touch the controller at all; that’s the power of local variables!


Directive Templates

Angular 2.0 also introduces a new concept called ‘directive templates’. A directive template manipulates HTML. Let’s look at an example:
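A sketch of such a directive template, using the alpha-era *ng-for syntax (the names list is a made-up example):

```html
<!-- repeat the <li> element once for every entry in "names" -->
<ul>
  <li *ng-for="#name of names">{{name}}</li>
</ul>
```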

If you know Angular 1.x you will find *ng-for familiar: it does what ng-repeat used to do. *ng-for manipulates the HTML by repeating the HTML element N times.

Note that we create a local variable called #name, which we reference inside of the <li> element. The #name variable is only available inside the <li> element, because it is scoped to the template.

Another example of a directive template is *ng-if:
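A minimal sketch (the isLoggedIn expression is a made-up name):

```html
<!-- the <div> is only rendered when "isLoggedIn" evaluates to true -->
<div *ng-if="isLoggedIn">Welcome back!</div>
```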

The reason this is called a ‘template’ is that behind the scenes Angular 2.0 converts the code to a <template> tag. So the *ng-for example would expand to:
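The expansion might look like this (the exact attribute names are an assumption based on early Angular 2.0 previews):

```html
<!-- the *ng-for shorthand expanded into an explicit <template> element -->
<ul>
  <template ng-for #name [ng-for-of]="names">
    <li>{{name}}</li>
  </template>
</ul>
```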

A <template> element represents an inert piece of DOM that the browser will completely ignore. This gives frameworks such as Angular an easy way to define templates, without the browser trying to parse them and mess with them. The <template> element was basically created for use by JavaScript frameworks.

The benefit of the new * template syntax is that your IDE and text editor can analyse these final <template> forms of the directive. This means they can autocomplete your code and provide you with better help, which will ultimately make developers who use Angular 2.0 more productive.

Codifying the new syntax

We can codify the new syntax as follows:

Property bindings []
Square brackets represent a property that is bound to a value. This binding is always an expression, which Angular evaluates on every turn of the run loop while dirty checking for changes. Whenever a change is detected the binding is updated.

The expressions should be pure: each time they are evaluated with the same parameters they should return the same value. They should not cause side effects.
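An illustrative sketch of a property binding (the element and expression names here are made up):

```html
<!-- [hidden] binds the DOM "hidden" property to the pure
     expression "user.isLoggedIn" -->
<div [hidden]="user.isLoggedIn">Please log in.</div>
```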

Events ()
Parentheses represent events. Event handlers are statements, which cause side effects. Events always originate from actions taken by the user, such as hovering the mouse or typing on the keyboard.

Variables #
Hashtags represent local variables, which are only available inside the template where they are defined. They allow different templates, for instance for mobile and desktop, to contain completely different pieces of code while still keeping the same controller.

Templates *
An asterisk represents a directive template in your HTML that is expanded to a <template> element behind the scenes. This form was created so IDEs and text editors can better autocomplete the code.


The new syntax is easier for newcomers to learn, because it more closely resembles HTML and because you do not have to learn all the built-in special-case directives. It also makes templates easier to reason about, so we can more easily discover what a template does.

Next week we will take a look at ES6, the new version of JavaScript, and how it affects Angular 2.0.

CORS with Spring MVC

In this blog post I will explain how to implement Cross-Origin Resource Sharing (CORS) on a Spring MVC backend.

Ebase Xi – Unsafe by Default – XXE

In my previous blog post I questioned the safety of the default configuration of Ebase Xi. I knew then that something was wrong, as I had already found and reported two vulnerabilities to Ebase. But nothing happened. On the 6th of March, much to my surprise, I got an official Ebase security alert informing me that ‘All Ebase Servers are vulnerable to XXE attacks’. This was one of the two issues I originally reported. Now that it’s public knowledge, you can read this post for the full details.

In-memory MongoDB for unit and integration tests

A few weeks ago I found myself having to fix a bug in a production system which uses MongoDB as its primary means of storage. As I was unfamiliar with the codebase (we had just taken over the project), the first thing to do was to find the test covering this functionality.

Jaw drop; no test in sight. As it turned out, none of the interactions with the backing storage were under any form of testing. So it could happen that a simple aggregation query wasn’t returning the expected results.

This was my first project in which I used MongoDB. Coming from projects that used HSQLDB to test the validity and outcome of queries, the first thing that flashed through my mind was an in-memory MongoDB. The first hit on Google wasn’t promising, but luckily some of the following results hit the jackpot.

CSRF / XSRF protection using Spring Security

The last few years there has been an almost constant stream of news articles about some company leaking customer information one way or another.
While not all of these leaks are caused by badly protected websites, a lot are caused by misconfigured web and data servers, and programmers still have a hard time integrating basic protection against attacks.