Agile: A Misunderstood Methodology

Sounds great, doesn’t it? You go to an interview and the interviewer says “yes, of course we’re using Agile”, and then it becomes clear on your first day that they are not. Not by the conventional interpretation, at least.
The trouble is that the term ‘Agile’ has a bit of a project-manager feel about it. ‘Blue sky thinking’ and ‘giving 110%’ in a ‘thought shower’. You know the sort of thing. In the mind of a non-technical manager, it’s the sort of term that evokes thoughts of speed, dynamism and efficiency. Ultimately, getting more delivery for less cost, and the ability to change direction at the drop of a hat.

Some of that is true, but only to a certain extent and, critically, WITHIN REASON. To qualify that: manipulating the backlog because priorities have changed is doable with minimal impact. A full-on mid-sprint U-turn will annoy almost everyone, knacker the burndown, delay delivery and potentially compromise quality. Worse still, not having a proper backlog and dreaming up new requirements on the fly carries a massive impact. This is not Agile. This is the headless-chicken approach to software development that I (and others) dub ‘Fragile’.

Unfortunately, I speak from experience of the effects of these non-Agile practices. Agile is not an excuse for not having a plan, it is not a licence to change requirements with reckless abandon, and nor is it a way of getting something for nothing.

I therefore feel it’s worth reflecting on Agile first principles to understand why this is the case and where the perceived flexibility and speed should actually come from, and maybe help my fellow developers avoid similar pitfalls.

Flexibility

Agile is like a series of many short projects (sprints), typically two weeks each. The project owner will have a long list of requirements (stories), collectively called the backlog. At the start of each sprint, and in partnership with the owner, a subset of those requirements is selected for the coming sprint. We estimate the stories with a view to delivering a working piece of software that meets the criteria, ready to deploy by the end of the sprint. It’s the accumulation of these working pieces over the course of many sprints that forms the finished product.

Once the sprint has started, you leave it alone. You can do whatever you like with the backlog, though, and that is where the flexibility comes from. As a project owner, there is medium-to-long-term flexibility if priorities or requirements change. As a developer, there is sufficient short-term stability to work effectively and deliver a working product. Everybody wins.

There is no massive functional specification to rewrite when something changes on the business side – just a few extra stories on the backlog. And because the project owner sees progress more frequently, problems on the development side are quickly identified and remedied.

Speed

Perceived speed is a product of flexibility. I say “perceived” speed because, Agile or otherwise, the amount of work is the same. The perception of speed comes not from doing the same work quicker but from not doing the same work twice.

There isn’t that heart-stopping moment you might get in a traditional (waterfall) project, when the final product of six months’ work is unveiled to the project owner and what’s been built isn’t what was expected. Some ambiguity or misinterpretation of the specification has led a well-intentioned development team off on a tangent. Remedial work is then generally required, both to better articulate the spirit of the requirements in the specification and to rework the product itself.

The Agile approach relieves this problem by delivering continuously and promoting regular face-to-face communication. This encourages developers to ask questions where tasks are not clear, and allows project owners to quickly identify and resolve any emerging tangents before they have a significant impact. It makes it far more likely that the product will be right first time, with no expensive rework required.

Finally…

In my opinion, Agile is a great methodology for software development projects, but like software itself, the key to success is in how it’s implemented. Understand what Agile is and how to get the most out of it. Be wary of projects claiming to be Agile that have poorly defined requirements and/or volatile priorities. As far as requirements are concerned, Agile affords manipulation of the order of well-defined requirements, refining them as you go – NOT winging it.

I’ll finish by repeating my initial points: a full-on mid-sprint U-turn is NOT Agile. Worse still, not having a backlog and creating requirements on the fly is also not Agile. That’s just Fragile, and it doesn’t end well.


Change for the change resistant

This has been a week of considerable progress for my current client and of significant personal achievement. 

Encouraging, and actually getting, progress in an organisation that is quite married to bureaucracy and manual processes is extremely hard. This week, however, I’ve enjoyed some success in that regard.

I can’t imagine I’m the only bright young(ish) mind who’s found themselves frustrated by an employer or client stuck in their ways. It’s easy to become disillusioned watching an organisation talk itself out of improvements to processes and technology, instead favouring an old-fashioned, labour-intensive, expensive approach for fear of upsetting the apple cart.

The truth is, if it were up to me I’d bin the lot: rewrite the entire ecosystem in something modern and sexy, then ride out the transitional bumps in the road. The road may be bumpy initially, but it heads in a better direction long term.

The reality, of course, is that while that might be ideal from a tech point of view, it’s not commercially viable. Whatever the gains, customers and management can’t tolerate that sort of upheaval.

That said, do not underestimate the power of incremental change. An old Blue Peter saying comes to mind: ‘think globally, act locally’. In my particular case, a little pull request discipline here and a robust unit-testing strategy there have made a massive difference, and it has only taken days to start realising the benefits.

Something else happened too. Others have become impassioned and are joining the push to improve. The developers on my team have really got into it. Useful alliances have emerged with the team responsible for the source control, continuous integration and deployment systems. The favourable effect on our weekly management report has been noticed, our new approach is being replicated by other project teams, and the GitHub Enterprise upgrade has been prioritised.

That’s taken about a fortnight. In a traditional organisation, that’s fairly swift movement, and perhaps more importantly the alliances I’ve made open the door to further change. 

It’s still not exactly bleeding edge, but it is a big step on the journey, and there’s a sense of momentum and direction that wasn’t obvious before.

I suppose the message to anyone in a comparable position is that building alliances is key. Start with small changes that will yield demonstrable improvements in quality or cost – things that everyone, not just developers, can see the value in. Don’t try to change the whole universe upfront. Start with your project and maybe the universe will follow.


Twitflow: How not to source control

Source control in its many flavours is an extremely important part of any programmer’s toolkit: saving you from yourself, saving you from others and, of course, sharing and collaborating with your fellow man (or woman).

In spite of its importance, however, I have encountered some pretty special howlers, which I share here for your amusement. When the giggling abates, consider this a DON’T list for source control.

Calling a Git branch HEAD

Yes this happened. Not sure how, not sure why. If you’ve ever used Git you’ll know why this is a really bad idea, and I refuse to credit it with any further explanation. Just don’t. 

Using VSS

SourceSafe first hit our desktops in 1994. There was a facelift in 2005, but the fundamental concept remained the same: one branch, exclusive checkouts. Forwards, backwards. Those are your options.

To be fair, the organisation in question is not alone in its continuing use of it, but VSS should have been retired many years ago. When it was released it was better than nothing at all… but then progress happened. Move on.

Anyway, SourceSafe’s plethora of limitations became particularly evident this week when breaking changes were made to some shared core libraries. With no branches there’s no choice: everything got a taste of that change. And everything not immediately important to the developer concerned didn’t like that taste. The time, effort and ultimately money involved in dealing with this demonstrates precisely why you don’t need SourceSafe in your life anymore.

Microsoft agree, by the way: it’s finally out of support in July. And not a moment too soon, in my opinion.

The detached HEAD

This is almost as unhealthy as it sounds. There are legitimate reasons why you might have such a condition locally (programmatically, as opposed to cranially), but for goodness’ sake don’t go and force push it!
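
For the curious, a minimal sketch of how you end up in that state and how to get out of it safely – the commit hash here is made up:

# checking out a commit rather than a branch leaves you with a detached HEAD
git checkout 4f1c2ab

# to keep any commits made from here, put a branch on them first...
git checkout -b rescue-work

# ...then push that branch normally; no force required
git push origin rescue-work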


Building a Basic NodeJS Web App with Visual Studio

Introduction

This is my first foray into the intriguing world of NodeJS and, as a programmer on the Microsoft stack, I wanted to do it in Visual Studio… It’s a comfort zone thing. 🙂

I struggled to find an example of a simple working web app, ready to go with basics like Bootstrap and jQuery, on which to experiment and build. So I made one.

As with so many things, what started off as a quick tentative look at NodeJS turned into a much longer look into NodeJS, Jade, ExpressJS and Grunt, but ultimately I’ve managed to get a straightforward example working, which can be cloned here: https://github.com/joelblake1/NodeWebApp.

Solution Details

Built with Visual Studio 15 Preview 4, this is a NodeJS web app using ExpressJS and the Pug templating engine (formerly known as Jade). jQuery and Bootstrap are pre-configured from npm.
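
For context, the server wiring amounts to little more than this. This is a hedged sketch rather than the repo’s exact server.js, assuming views live in ‘views’ and static assets in ‘content’:

// minimal ExpressJS + Pug wiring
var express = require('express');
var app = express();

app.set('view engine', 'pug');                    // Express renders .pug templates from 'views' by default
app.use('/content', express.static('content'));  // serve the copied Bootstrap/jQuery files

app.get('/', function (req, res) {
  res.render('index', { title: 'NodeWebApp' });   // assumes views/index.pug exists
});

app.listen(3000);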

There is a gruntfile that is responsible for copying the jQuery and Bootstrap script and CSS files from their respective locations in node_modules to the ‘content’ directory in the root.
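
Something along these lines – a sketch of the idea using grunt-contrib-copy, with illustrative paths rather than the repo’s exact gruntfile:

module.exports = function (grunt) {
  grunt.initConfig({
    copy: {
      libs: {
        files: [
          // copy the distributable Bootstrap and jQuery files out of node_modules
          { expand: true, cwd: 'node_modules/bootstrap/dist/', src: ['**'], dest: 'content/bootstrap/' },
          { expand: true, cwd: 'node_modules/jquery/dist/', src: ['*.js'], dest: 'content/jquery/' }
        ]
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-copy');
  grunt.registerTask('default', ['copy']);
};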

I encountered a slight complication in that Jade has recently been renamed to Pug. Even in this preview version of Visual Studio, there is no template for the .pug file extension. If you’re happy enough not using Express (ie: just using Pug and Node’s built-in http module), then you can render .jade files using the pug package. Not so if you want to use ExpressJS (or rather, I couldn’t figure out how).

I wanted to use ExpressJS because the main server.js file is much simpler and more elegant, and I didn’t want to sacrifice the IntelliSense by using a blank Visual Studio template (and, suffice to say, admitting defeat and reverting to Jade wasn’t going to happen!). I therefore created my views in a folder called ‘views.jade’ and created a new Grunt task that fires post-build to rename the files to .pug and copy them to the ‘views’ folder (ExpressJS looks for Pug templates in a folder called ‘views’ by default).
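
The rename itself is just a variation on the same copy technique. A sketch of the extra target (illustrative, not the repo’s exact code) that would slot into the copy config above:

copy: {
  views: {
    files: [{
      expand: true,
      cwd: 'views.jade/',
      src: ['**/*.jade'],
      dest: 'views/',
      rename: function (dest, src) {
        // eg: index.jade -> views/index.pug
        return dest + src.replace(/\.jade$/, '.pug');
      }
    }]
  }
}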

There isn’t a whole lot to look at, but it does give you a starting point with ExpressJS, Pug, Bootstrap and jQuery all wired up ready to go. Feel free to clone it, have a play and let me know any suggestions. 🙂


Document vs Relational Databases

Document DBs store data as “documents”, which can be any data you like.

However, relationships can be created between common elements (eg: an author may exist on a document of type Book as well as one of type Article).

They are ideally suited to scenarios where the structure of the data is highly volatile, or is simply unstructured or partly structured.

Example: SurveyXML contains a Binary Large OBject (BLOB) of survey answers. There is some common ground between different survey types, but the structure has evolved. There are also totally different structures, like the Green Deal Assessment (type 81).

Mining “Big Data”

In SQL Server this data is totally impenetrable. For example (assuming the data wasn’t zipped), a query to find the answer to a question across a sample of 500,000 surveys of varying type would be very difficult and very slow in SQL. This sort of query could be up to 20,000x faster in MongoDB with the data in a comparable BSON format [i].

SQL Server XML queries get dramatically slower as the volume of data increases. As a direct result, we can draw few useful conclusions from the mountains of data surveyors have collected, beyond the occasional recall of individual records. For the most part that data lies in wait, and the majority will never see the light of day.

Document databases like Mongo are designed to perform searches over large volumes of unstructured data very quickly. That capability could open the data up and allow us to get more out of it: for example, calculating averages and trends in sale and rental prices, regional uptake of energy-saving measures, etc. That sort of information could be very useful MI for eTech, and could be a saleable service in itself.
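
To give a flavour, here is a hypothetical query of that kind in the mongo shell – the collection and field names are invented for illustration:

// average sale price by region, across Green Deal Assessments only
db.surveys.aggregate([
  { $match: { surveyType: 81 } },
  { $group: { _id: "$region", avgSalePrice: { $avg: "$salePrice" } } },
  { $sort: { avgSalePrice: -1 } }
]);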

Alignment with Agile

Sticking with the same example, the structure of survey data has changed many times during the lifecycle of SmartSurvey. That is not the exception; it is fairly typical of most strategic systems.

In SQL Server, if we wanted to maintain survey data in a relational way that would allow SQL Server to query it efficiently, the data would have to be split across many different tables. Each change in schema would then require a schematic change to the relevant tables and a migration path for all existing documents, which is difficult, risky (in a production setting) and time-consuming.

Document databases don’t constrain you to the structure you started with. You can, at your whimsy, start inserting documents with a different schema, but (crucially) the integrity of data in the pre-existing format is not compromised, so there is no need to migrate existing data to the new format. Furthermore, we can query across the entire collection wherever data is common to both schemas, eg: QuestionId.
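
A quick illustration (field names invented): two generations of document co-exist, and one query spans both via the common field.

// old shape and new shape, side by side in the same collection
db.surveys.insert({ questionId: 12, answer: "Yes" });
db.surveys.insert({ questionId: 12, answer: "Yes", evidence: ["photo1.jpg"] });

// one query covers both, keyed on the common questionId field
db.surveys.find({ questionId: 12 });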

Finally, how many people would trust Entity Framework migrations with their life??

Alignment with web technologies

There is a wide range of document DBs supporting a variety of different document formats, such as binary, XML and, perhaps most interestingly, JSON. In common with most of the IT sector, eTech is investing heavily in technologies like WebAPI, MVC and WCF, as well as various mobile platforms. JSON is generally the transport of choice that underpins much of this technology.

Importantly, JSON is NOT the data format that underpins SQL Server.

Consider a common scenario where we get data from the database to push it straight out over HTTP via WebAPI/MVC. With SQL Server, this process entails a series of conversions from SQL’s binary stream, mapping into a POCO, before being serialized to JSON. Some of these conversions are implicit, and we have a lot of help from ORMs to facilitate the mappings, but nonetheless there is an overhead to all this.

Consider the same with a CouchDB database, where data is stored as JSON. In its simplest form, CouchDB has a web API built in…

GET /recipes HTTP/1.1
Host: couchdb:5984
Accept: application/json

Job done. It’s just simpler! No mapping required, no overhead, less code to maintain and less prone to errors. Better still, because it’s a web API, I can call into the database directly from my client-side JavaScript.

Sticking with CouchDB, if you’re using the data in an application and you want to deserialize it into POCOs, there is a range of APIs that will do this for you in familiar fluent syntax. You can use the deserializer in the API, or you can get the raw JSON and choose from a range of JSON deserializers from various vendors. You are no longer locked into ADO.NET.
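
As a sketch of that last option – raw JSON over HTTP, deserialized with a vendor library of your choosing (here Json.NET; the Recipe class and URL are illustrative, not a specific CouchDB client):

using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json;

public class Recipe
{
    public string Name { get; set; }
    public int Servings { get; set; }
}

public static class CouchClient
{
    public static async Task<Recipe> GetRecipeAsync(string id)
    {
        using (var http = new HttpClient())
        {
            // CouchDB serves documents as raw JSON over HTTP
            var json = await http.GetStringAsync("http://couchdb:5984/recipes/" + id);
            return JsonConvert.DeserializeObject<Recipe>(json);
        }
    }
}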

If you’re lucky, you’ll get a JSON data type in SQL 2016…

Alignment with OOP

Have you seen the mess Entity Framework makes if you attempt to store data from two derived classes? In a simple scenario of two classes derived from a base class, you first have to decide whether you want table-per-hierarchy, table-per-type or table-per-concrete-class mapping. Some perform better than others; some are simpler than others. There are performance and complexity considerations with each approach.

Why is this even a thing?? Why do I have to tell SQL Server how to cope with a straightforward, mainstream data structure?

None of that bother in document DBs. If you can do it in JSON (and you can), you can do it in a document DB.
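
For example (shapes invented), two derived types can sit in the same collection, distinguished by nothing more than a discriminator field:

// a ResidentialSurvey and a CommercialSurvey, stored side by side as plain JSON
{ "type": "ResidentialSurvey", "questionId": 12, "bedrooms": 3 }
{ "type": "CommercialSurvey",  "questionId": 12, "floorAreaSqM": 450 }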

Document Databases: The Catches

Relationships

Not really a catch but, unsurprisingly, document databases manage relationships differently to relational databases, so you need to use them differently. Documents can be embedded in other documents, and documents can reference other documents. Queries that span many documents do not perform as well as queries against a single document, and as such embedding is generally the better choice for performance. Quite often a “copy” of the dependency as it was at the time is perfectly acceptable, even preferable, to a consistent shared instance.

The exception to this is where there are shared dependencies that must remain consistent between documents, eg: if you change a shared dependency in one document, the expectation is that the change is reflected in every document that references it. There, a reference is the better fit.

Duplication

A typical document may contain an object graph with some embedded documents that are repeated in other documents. In a relational database we would optimise this by normalising the data. This means that document databases are typically larger than a SQL Server equivalent. To what extent that is an issue in an era of relatively cheap storage is subjective.

Data Integrity

Relational databases are good at constraining and validating what you can store, preserving the integrity of the data. You can’t rely on that in document databases.

That said, the elements of your POCO are strongly typed, and any dependencies (eg: static data) will generally have been selected from a list that came from the database anyway. Do you need to validate all that again? I would suggest that if you are relying on the database to validate the types of your data and the integrity of your relationships, there are other problems…

Furthermore, document databases are horizontally scalable, which helps to manage this problem (SQL Server scales vertically).

Transactional Integrity

Most document databases support some form of atomic, per-document transaction. Multi-document transactions, however, are not universally supported yet. Some platforms are emerging that do support them (FoundationDB, RavenDB, etc), and as the technology matures this will undoubtedly propagate to the majority of platforms.

Conclusions

Relational databases are still relevant, BUT they are a solution to a traditional problem. They are very good at maintaining consistent data integrity, and their performance is optimised around querying well-defined, strongly-typed datasets with a stable structure.

That feature set is an excellent choice for a waterfall, up-front-designed project where deviation from the original design is strictly controlled. It is excellent for a flat POCO structure where one class can be accurately represented as one table. It is space-optimised and highly performant, even on complicated queries spanning many joins.

Where managing change (Agile), web technologies, big data and OOP are involved, there are some headaches with SQL Server. We’ve papered over the cracks to a certain extent with technology like Entity Framework, but those headaches often show through.

This is where document databases come in. They are a solution to these modern problems, and they outperform SQL when it comes to mining large non/semi-structured datasets, evolving your data structures throughout the lifecycle of your application to keep up with new requirements, and integrating with prevailing web technologies.


Quick Reference: C# Keywords

A few C# keywords including access modifiers, etc.

I compiled this list recently, and it seems like the sort of thing that should be enshrined in a blog. I’ll add to this as others come up. Feel free to leave a comment if there are any glaring omissions.

Access Modifiers

Public – The property/method is visible within the containing class and to all classes in the same project, as well as any other projects that reference the containing project.

Internal – The property/method is visible within the containing class and to all classes in the same project. It is not visible to any other projects.

Protected – The property/method is only visible within the containing class AND to classes that inherit from the containing class (regardless of whether they are in the same project or a different referencing project).

Private – Visible within the containing class only.

It is also possible to mark a property/method as protected internal. No prizes for guessing what that does…
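
Pulling those together in one illustrative class (the names are invented):

public class Account
{
    public string Name { get; set; }          // visible everywhere, including referencing projects
    internal decimal Balance { get; set; }    // visible only within this project
    protected void Audit() { }                // visible here and in derived classes
    private void Recalculate() { }            // visible within Account only
    protected internal void Sync() { }        // protected OR internal: derived classes and this project
}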

Other important keywords

Override – Use this in a derived class to override a virtual or abstract method/property on the base class (the one you’re inheriting from).

Abstract – An abstract class cannot be instantiated by itself; you have to inherit/derive from it and use the derived class. Within your abstract class, you can also define abstract properties/methods, which have no implementation. This forces the derived class to implement them.

Virtual – A virtual method/property has a default implementation, but a derived class can optionally override its behaviour with an alternative implementation.

Sealed – Marking a class sealed prevents it from being derived from. You may NOT inherit from a sealed class.
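
A small sketch tying abstract, virtual, override and sealed together (the types are invented):

using System;

public abstract class Shape
{
    public abstract double Area();                           // no implementation: derived classes MUST override
    public virtual string Describe() { return "a shape"; }   // default implementation: derived classes MAY override
}

public sealed class Circle : Shape    // sealed: nothing can derive from Circle
{
    public double Radius { get; set; }

    public override double Area() { return Math.PI * Radius * Radius; }
    public override string Describe() { return "a circle"; }
}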

Static – With a conventional class, you would instantiate a new instance, use it and then dispose of it. Where a class is marked static you cannot instantiate it; instead its members belong to the type itself and are shared by the entire application. It is therefore useful for utility functions (such as null checking) or for global state (for example, the inbuilt ConfigurationManager class is static, providing access to the configuration file for the entire application). A static class is initialised when it is first used, and you can give it a static constructor, which will run at that point. It is fine to have static members on a non-static class, but you may not have non-static members in a static class… and if you think about it, it wouldn’t make sense that way round.
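
For illustration, a static utility class with a static constructor (names invented):

using System;

public static class Guard
{
    static Guard()
    {
        // static constructor: runs once, the first time Guard is used
        Console.WriteLine("Guard initialised");
    }

    public static void NotNull(object value, string name)
    {
        if (value == null) throw new ArgumentNullException(name);
    }
}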

Caution: There is a temptation sometimes to use a static class as a big bucket of variables that you’re not really sure what to do with. This is almost always a bad idea. It is impossible to tell what set those variables (and when), which can cause unpredictable behaviour, particularly in multi-threaded scenarios. They are also very hard to unit test. For more info on managing shared state, check out the singleton design pattern.

For a class with none of those keywords, the default behaviour allows you to use the class as it is (ie: you don’t need to derive from it to use it), but you can derive from it if you wish.

Extern – This is used when calling members in unmanaged DLLs, typically combined with the DllImport attribute.
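
The canonical example is declaring MessageBox from the unmanaged user32.dll:

using System;
using System.Runtime.InteropServices;

public static class NativeMethods
{
    // extern tells the compiler the implementation lives elsewhere; DllImport says where
    [DllImport("user32.dll", CharSet = CharSet.Unicode)]
    public static extern int MessageBox(IntPtr hWnd, string text, string caption, uint type);
}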

Attributes

Attributes are markers that can be applied to classes and/or class members, depending on the attribute. One such example is the Serializable attribute, which marks a class as serializable (.NET has built-in serializers for binary and XML; individual fields can be opted out with NonSerialized):

[Serializable]
public class Person
{
    public string Name { get; set; }
}

There are a number of attributes built into the framework for various purposes, and you can also define your own. You can identify classes and properties carrying a given attribute using reflection.
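
For example, with an invented custom attribute:

using System;
using System.Linq;

[AttributeUsage(AttributeTargets.Property)]
public class AuditAttribute : Attribute { }

public class Person
{
    [Audit]
    public string Name { get; set; }

    public int Age { get; set; }
}

// elsewhere: find the audited properties via reflection (yields Name, but not Age)
var audited = typeof(Person).GetProperties()
    .Where(p => p.GetCustomAttributes(typeof(AuditAttribute), false).Any());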

Built-in attributes:

ThreadStatic – Limits the scope of a static member to one instance per thread (as opposed to one global instance).

DllImport – Indicates that the function that follows exists in a different (unmanaged) DLL. Takes the name of a DLL registered with the OS as an argument. Used together with the extern keyword.


AngularJS & WebAPI/MVC: Enemy or Synergy?

Before getting into the meat of this post, I can’t not mention the fantastic week I spent at NDC. A great workshop, followed by three awesome days of insight into a multitude of interesting topics. Incidentally, there will probably be more posts coming from my direction on web components, OWIN/vNext and maybe even document databases (apologies in advance for that little beaut, Russ – our long-suffering DBA!), but for now I’m interested in Angular and, more importantly, how it could fit into the existing Microsoft stack.

If you read my previous post, firstly, well done! :) Secondly, it will come as no surprise that I’ve harboured a cautious interest in all the hype over AngularJS, but until now it’s been relatively passive and non-committal. I come from a fairly pure Microsoft background. I’ve duly learned MVC and its many foibles, and I have found the prospect of committing headlong to something new and non-Microsoft (such as Angular) quite daunting. Hence my choice of workshop: I want a head start if that’s the way the world is going.

I enjoyed the workshop and came away with a great start in Angular, but as it progressed, new questions developed. Performance: critics say it’s not performant; fanatics say it’s fine in all but the most demanding edge cases. I’m quite sure I could have got three different answers from as many people as to the maximum number of bindings per page that would yield reasonable performance. Integration between the familiar world of Microsoft .NET and Angular: if we want to use Angular, do we have to throw out all our time-honoured C# MVC code? How is non-Microsoft Angular going to be regarded by the big corporations of this increasingly risk-averse world? It turns out I might not be the only one to ponder these questions, and there is help at hand.

The Context

One buzzword you’ll hear a lot in this arena is SPA (Single Page Application). The premise is that minimising postbacks improves user experience. The logical conclusion of that train of thought is to put the entire application on a single page. That would be extremely hard in a loosely-typed, poorly structured, bloated language such as JavaScript, but not impossible. JavaScript frameworks help to abstract away some of the boilerplate/polyfilling work, which in turn condenses the codebase. They dictate a structure to the script, which aids readability, understanding, supportability and extensibility. Essentially, they are the tools to more effectively manage complicated logic on the client.

So which one do you choose? Every one of them does binding. Every one of them asserts some degree of structure over the client-side script. Within that remit, some frameworks favour compatibility, some favour syntactic beauty, some favour additional features and some favour performance. Of course there are trade-offs in every case. The judgement on what to work with comes down to personal preference, a bit of knowledge about your choices and your particular requirement. Most will work for most applications, and each one will do a slightly superior job for the particular story it is designed for.

So why pick Angular over everything else? AngularJS is stealing the limelight because it is feature-rich, has a graceful syntax (most of the time), is maintained by Google, and its performance should be acceptable in all but the heaviest of cases (in terms of bindings). It is the one with the fewest trade-offs, IMO. That’s not to say it doesn’t have any: a word of caution mooted numerous times by various industry commentators is that version 2 of Angular will be a whole lot different to version 1. That said, those same commentators also say that going from v1 to v2 will be easier than going from no Angular to v2. That’s progress for ya…

MVC vs SPA?

I can go and watch a workshop about MVC.NET and totally buy in to doing everything on the server (or at least I could have done 7 or 8 years ago). Similarly, I can (and did) go to a workshop about AngularJS and learn how to do everything in AngularJS. In either case, little is said about integrating with each other.

This feels quite polar to me. MVC to the total exclusion of the emerging client side feels old-fashioned and short-sighted, especially as Silverlight isn’t an option anymore. But I’m equally uncomfortable with designing an entire line-of-business application as one Angular SPA. Call me cynical, but I’ve not seen it working in a full-on commercial situation such as ours. While I’m pretty sure it works, I’ve not seen it with my own eyes, so committing to Angular fully feels risky. Why can’t I arrange my eggs in a few different baskets? Must I really throw my MVC baby out with the proverbial bathwater?

Enough of that… The point is there was a reason the server side prevailed all those years ago, and I think it still has a place. I also think that place constitutes more than disparate abstract web APIs for Angular to call into. The server side gave us a consistent environment to work in; the client environment was unimportant. We no longer had to concern ourselves with how good the client hardware was, or whether the browser would support our JavaScript, or what version of MDAC was installed (anyone else remember that??).

Furthermore, Google say AngularJS will take 10,000 bindings in a single page. Where do I stand if it turns out that comes with small print? The thing with client-side JavaScript is that it runs on the client hardware (funnily enough). Hardware has a significant bearing on application performance and, unless you are developing exclusively for intranet scenarios, client hardware is generally not that predictable.

That said, there is no doubt that the client side has become a lot more reliable since those dark days. Even on mobile devices you can now expect fairly tasty hardware, and you can normally expect it to run JavaScript. We can expect further improvements, and the software industry will embrace that one way or another. There will always be a hardcore of users who insist on using an ancient PC with IE7, and they will complain that the internet isn’t what it used to be back in the glory days (nor will it ever be on IE7). Don’t worry about those guys. Anyway, predictable client environments do not make the server redundant IMO; it just means the load gets shared, and logically that seems prudent and feels comfortable.

So I’ve decided what I want, and that’s the best of both worlds. I want to be able to build a solution and split my code logically between the server and the client. Perhaps the complicated stuff that is easier in a strongly-typed, compiled language can live on the server, and I can indulge my penchant for tasty, postback-free UI logic on the client (eg: fade in this div from the right, with some bound values in it, but only if the value of txtFoo is ‘bar’ and option rab is selected in list ddlOof – I don’t condone Hungarian notation, by the way; it just helps to visualise pseudo-logic).

MVC<AngularJS>

I might be stretching the metaphor of generics somewhat with that heading, but my point still stands. What about a third way? Why can’t MVC play nicely with Angular and make something beautiful?

They can. I imagine Google and Microsoft would hesitate before conceding this, but getting Angular and MVC to work as a team is a really good idea. Miguel Castro describes a solution that comprises a number of SPA “silos”, each dealing with functionality pertaining to a particular view model. Damian Edwards describes a similar concept with “islands” of SPA functionality.

This feels like a pretty happy medium to me. It offers sufficient logical separation to support large, complicated applications with a plethora of different features. It minimises potential performance concerns because a) not everything is on the same page; and b) not everything is being run on the client. That leads us nicely to fallback: what happens if the performance goes t!ts up like the Angular naysayers warn? Answer: you’ve got a couple of options, actually. One is splitting some of the work into another silo; another is moving slow-running behaviour back to the server.

In a nutshell, we’ve got it organised by MVC at a controller/view-model level (eg: Surveyors, Surveys, Appointments), but the actions are handled in the relevant SPA silo by Angular (eg: Surveyors/Create, Surveys/Search, Appointments/Book).

In terms of implementation, I’ve seen it working, and I’ve got Miguel’s example for anyone who’s interested. The key to unlocking the potential seems to be getting the routing right. It needs to be configured such that MVC doesn’t monopolise the routing; it just needs to produce a view model and serve up our host page. Angular will take it from there. This can be achieved quite easily using catch-all routes in MVC (see the RouteConfig.cs file) and then adding specific routes to the routing table for each SPA (see the App.js files in App/Customer, App/Order and App/Product).
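
To sketch the routing idea (this is illustrative, not Miguel’s exact code): one catch-all route per silo, registered ahead of the default route.

using System.Web.Mvc;
using System.Web.Routing;

public class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        // everything under /Customer is served by CustomerController.Index;
        // {*catchall} swallows the rest of the URL and leaves it for Angular's router
        routes.MapRoute(
            name: "CustomerSpa",
            url: "Customer/{*catchall}",
            defaults: new { controller = "Customer", action = "Index" }
        );

        routes.MapRoute(
            name: "Default",
            url: "{controller}/{action}/{id}",
            defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
        );
    }
}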

Once that bit is done, you can concentrate on getting MVC to serve up a host page with as much or as little default functionality as you like. MVC gives you some neat sections in which to put your <script> tags, and there will also be some web APIs for Angular to call into. That leaves the view, which you can treat thereafter as pure Angular.

While chewing this over on the train home on Friday night, it also occurred to me that there’s another advantage that’s particularly relevant to eTech. If you already know MVVM (the prevailing methodology for Silverlight), Angular prescribes a very similar structure. Sure, the syntax is different and the terminology can be a bit misleading (the Angular “controller” is closer to an MVVM ViewModel than to an MVC Controller), but structurally there is an awful lot of common ground to draw on with MVVM. It’s not as daunting as it first seems, and getting into it may be easier than you think.

In conclusion, there is some cool stuff in Angular. Like anything, there are a few risks, but there are ways of mitigating them. My opinion, for what it’s worth, is that hybrid/SPA-silo applications are the future for commercial line-of-business applications. Regardless of what is technically superior, there’s a lot to be said for public perception. Businesses are risk-averse by nature. We all love progress, but personally I find some comfort knowing that old familiar Microsoft will be waiting in the wings (read: server side) if I manage to blow my foot off with Angular.

Sources:

Miguel A Castro: AngularJS for MVC Developers

Damian Edwards: AngularJS & ASP.Net

Scott Allen: Intro to AngularJS

Cory House: Web Components – The Dawn of the Reusable Web


Hey babe, take a walk on the client side

Today is the first of two days for me learning about AngularJS at NDC.

This, for me, is a particular highlight. Over the last few years (ever since the writing was on the wall for Silverlight, to be honest) I’ve oft pondered what the future holds for rich web UI applications in this brave, post-Silverlight world. AngularJS is one of many JavaScript frameworks that could hold (or at least contribute to) the elusive answer to that question.

What’s it all about?

AngularJS is a JavaScript framework. It prescribes a structure for your client side. It reduces the volume of code needed versus conventional JavaScript/jQuery. It brings concepts such as dependency injection (and therefore substitution, extensibility, unit testing and all that lovely stuff that’s long been the exclusive remit of the server side) to the client.

That’s all good. In my experience, the client side has always been a rather unstructured, loosely-typed, lawless world governed almost wholly by the discretion of the author. The issue therein is inconsistency: it is entirely probable that the manner in which I structure my script will differ from that of the next developer. There are no proper rules, and this inevitably leads to problems extending and supporting the code down the line.

As regards the volume of code, Angular’s directives, scopes, models and elegant APIs dramatically reduce the amount of code versus the same functionality in conventional JavaScript/jQuery. In particular, interacting with and binding to DOM elements is infinitely easier with Angular.

In a nutshell, everything Angular does could be done in conventional JavaScript/jQuery. This is not about making the impossible possible; it is about making the possible manageable.
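
To illustrate with a minimal (invented) example: two-way binding that would take an event handler and some manual DOM-wrangling in jQuery is a single attribute in Angular.

<div ng-app="demoApp" ng-controller="GreetingController">
  <input type="text" ng-model="name">
  <p>Hello {{name}}!</p>
</div>

<script>
  // the controller just seeds the model; ng-model keeps the input and the paragraph in sync
  angular.module('demoApp', [])
    .controller('GreetingController', function ($scope) {
      $scope.name = 'world';
    });
</script>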

I’m sold – Angularise my life now!

Hold on there, kid! There are some downsides…

…and at number one, it’s performance. Angular is feature-rich and beautifully elegant; the trade-off is performance. To what extent this is a problem depends on the application. I need to do more research into this, but the performance hit seems to be proportional to the number of bindings on the page. It comes from the apply/digest cycle checking for changes in the model each time something Angular happens. My gut feeling is that for a small application handling relatively few records at a time, whose instances consist of relatively few primitive properties, the performance hit will be insignificant, but for higher volumes of complex instances I can imagine that performance detracting from the user experience.

The next thing is that there are a few aspects of the syntax that feel a bit “Friday afternoon” to me. The DI container is one such example. Its critics state that minification breaks it; its supporters state there are perfectly satisfactory workarounds (explicit annotations, or build tools like ng-annotate – see the sketch below). Personally, my opinion is torn. On the one hand, how else do you do DI in a loosely-typed language? On the other hand, having to use a “workaround” from day one doesn’t bode well. The jury’s out for now, and I guess we’ll see what version 2 brings.
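
To show the issue and the simplest workaround (controller names invented):

var app = angular.module('demoApp', []);

// injection by parameter name: a minifier renames $scope to something like 'a', and injection fails
app.controller('UnsafeController', function ($scope) {
  $scope.message = 'hello';
});

// explicit string annotations survive minification
app.controller('SafeController', ['$scope', function ($scope) {
  $scope.message = 'hello';
}]);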

Conclusion

I love the idea. I think it is an excellent example of how to effectively harness the power of the client side. You can offer a greater guarantee of quality than ever before. You can add to it and extend it more easily than ever before. It is a huge step forward from the vast ‘soup’ of disparate functions that conventional JavaScript tends to yield. I would go as far as to say that it is one of the strongest contenders – if not the strongest – among the many JavaScript frameworks available. I am very curious to see what v2 offers. Remember, though, that it is currently only at version 1.

In its current form, I’m not totally convinced it’s the silver bullet. I’d happily knock out a brochure site in it. I’d think twice before building a trading platform in it, though…

Overall, my view is one of cautious optimism. Conceptually, I like it. There are a few refinements and optimisations I would like to see in the implementation… Hurry up with version 2!
