ASP.NET Authentication with Adxstudio

I’m looking into options for integrating our shiny new CRM system with our website, so we can provide all sorts of neat self-service capabilities and features. One of the applications I’m investigating is a thing called Adxstudio – now owned by Microsoft – which claims to “transform Dynamics CRM into powerful application platform with dozens of apps and starter portals.”


This is one of those situations where we really are dealing with ‘solved problems.’ Email campaigns. Customers updating their own contact details, potentially things like forums, helpdesk/ticketing systems – lots of things which are nice-to-have but really aren’t strategic differentiators, and so there’s a compelling argument to find an off-the-shelf solution and just plug it in. We already have a federated authentication system here at Spotlight – something we built a few years ago that provides basic identity and authentication capabilities on top of OAuth2. At the time we built it, OpenID Connect didn’t exist yet, so we’ve got a system that does basically the same thing but isn’t actually compatible with OpenID Connect – and consequently doesn’t work out-of-the-box with Adxstudio. So I’ve been poking around, trying to work out the best way to plug Adxstudio into our infrastructure so we can evaluate it as a solution.

One of the options on the table was to replace our existing authentication system with IdentityServer; another was to implement OpenID Connect support on top of our existing authentication system – both quite elegant solutions, but both involving quite a lot more work than is actually required for what we’re trying to do.

The core requirements here are:

  • We already have a CRM Contact record for every user of our system
  • We can look up a user’s CRM Contact GUID during authentication
  • We want to set up the Adxstudio MasterPortal demo so that our customers are seamlessly authenticated and can use Adxstudio features as though they had registered via the Adxstudio registration facility.

Now, one of the nice things about Adxstudio is that it’s built as OWIN middleware, and uses the ASP.NET Identity framework to handle authentication – so what we need to do is work out how to translate the CRM Contact GUID into an IPrincipal/IIdentity instance that we can assign to the HttpContext.Current.User property, and hope that Adxstudio then does the right thing once the HttpContext User is set correctly.

Adxstudio provides an implementation of ApplicationUserManager that’s already registered in the OWIN context, which accepts a CRM Contact GUID (as a string) and returns an instance of ApplicationUser that we can use to spin up a new ClaimsIdentity. So the simplest possible approach here is this snippet of code:

protected void Application_AuthenticateRequest(object sender, EventArgs e) {
  Guid userGuid;
  var cookie = Request.Cookies["crm_contact_guid"];
  if (cookie != null && Guid.TryParse(cookie.Value, out userGuid)) {
    var http = HttpContext.Current;
    var owin = http.GetOwinContext();
    var userManager = owin.Get<ApplicationUserManager>();
    var user = userManager.FindById(userGuid.ToString());
    var identity = user.GenerateUserIdentityAsync(userManager).Result;
    HttpContext.Current.User = new RolePrincipal(identity);
  }
}

Doing this with a Contact that’s been created via the Adxstudio registration thing works just fine – but trying to do it with a ‘vanilla’ contact blows up:

[screenshot: exception stack trace]

As you can see from that stack trace, deep down buried under several layers of Adxstudio and ASP.NET Identity code, something is trying to construct a System.Security.Claims.Claim() instance and it’s blowing up because we’re passing in a null value for something that’s not allowed to be null. Unfortunately for us, because we don’t have the source for the thing that’s actually blowing up, we can’t see what the actual parameter values are that are causing the exception… so it’s time for a bit of hunch-driven development. 🙂

I’d already noticed that when you install Adxstudio into your CRM system, it adds a bunch of custom attributes to the Contact entity in Dynamics CRM; here’s a dump of those attributes for a working contact:

Attribute Key                          Value
adx_changepasswordatnextlogon          False
adx_identity_emailaddress1confirmed    False
adx_identity_lockoutenabled            True
adx_identity_logonenabled              True
adx_identity_mobilephoneconfirmed      False
adx_identity_passwordhash              (omitted for security reasons)
adx_identity_securitystamp             ca49a664-0385-4eb4-90c0-6283c9e704ea
adx_identity_twofactorenabled          False
adx_identity_username                  ali_baba
adx_lockedout                          False
adx_logonenabled                       False
adx_profilealert                       False
adx_profileisanonymous                 False
adx_profilemodifiedon                  2016-07-20 09:18:43

The ASP.NET Identity model is generally pretty flexible, but I have a hunch that the username and the security stamp are both required fields because they’re fundamental to the way authentication works. So, let’s try inserting some code into the AuthenticateRequest handler that will check these fields exist, and update them directly in CRM if they don’t:

protected void Application_AuthenticateRequest(object sender, EventArgs e) {
  Guid userGuid;
  var cookie = Request.Cookies["crm_contact_guid"];
  if (cookie != null && Guid.TryParse(cookie.Value, out userGuid)) {
    var http = HttpContext.Current;
    var owin = http.GetOwinContext();
    var userManager = owin.Get<ApplicationUserManager>();
    var user = userManager.FindById(userGuid.ToString());
    if (String.IsNullOrEmpty(user.UserName) || String.IsNullOrEmpty(user.SecurityStamp)) {
      // "Xrm" is the connection string name from web.config
      using (var crm = new OrganizationService("Xrm")) {
        var entity = crm.Retrieve("contact", userGuid, new ColumnSet(true));
        entity.Attributes["adx_identity_securitystamp"] = Guid.NewGuid().ToString();
        entity.Attributes["adx_identity_username"] = Guid.NewGuid().ToString().Substring(0, 8);
        crm.Update(entity);
      }
    }

    var identity = user.GenerateUserIdentityAsync(userManager).Result;
    HttpContext.Current.User = new RolePrincipal(identity);
  }
}

(For the sake of this demo, all we care about is making sure those values are no longer null. In reality, make sure you understand the significance of the username and security stamp fields in the identity model, and populate them with suitable values.)
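
For example – purely as a sketch, reusing the same OrganizationService wrapper and Xrm connection string name as the snippet above, and assuming every contact has a unique emailaddress1 that you’re happy to reuse as the portal username – you might populate those fields from existing contact data rather than random values:

using System;
using Microsoft.Xrm.Client.Services;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public static class ContactIdentityBootstrapper {
  // Sketch only: populate the Adxstudio identity fields from existing contact
  // data rather than random GUID fragments. Assumes emailaddress1 is populated
  // and unique across your contacts - check that before relying on it.
  public static void EnsureAdxIdentityFields(Guid contactId) {
    using (var crm = new OrganizationService("Xrm")) {
      var entity = crm.Retrieve("contact", contactId,
        new ColumnSet("emailaddress1", "adx_identity_username", "adx_identity_securitystamp"));

      if (String.IsNullOrEmpty(entity.GetAttributeValue<string>("adx_identity_username"))) {
        entity.Attributes["adx_identity_username"] = entity.GetAttributeValue<string>("emailaddress1");
      }
      if (String.IsNullOrEmpty(entity.GetAttributeValue<string>("adx_identity_securitystamp"))) {
        entity.Attributes["adx_identity_securitystamp"] = Guid.NewGuid().ToString();
      }
      crm.Update(entity);
    }
  }
}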

OK, this now works sometimes – but only following an IISRESET. It turns out that Adxstudio caches data from CRM locally, so although that new chunk of code turns the Contact entity into a valid identity, the Adxstudio local cache doesn’t see those changes because it’s looking at an out-of-date copy of the Contact entity. So… time to configure some cache invalidation.

You can read about Adxstudio’s web notifications feature here. Adxstudio includes some code that will call a cache invalidation handler on your own site every time an entity is updated. Which works just fine IF CRM Online can see your Adxstudio portal site. And right now I’m running CRM Online as a 30-day trial, I’m running Adxstudio on localhost, and my workstation isn’t reachable from the internet, so CRM Online can’t see it.

Time to fire up my favourite toolchain – Runscope and ngrok. First, I’ve set up ngrok so that any requests to mytunnel.ngrok.io will be forwarded to my local machine – you’ll need a paid ngrok license to use custom tunnel names, but if you’re using the free version, try this:

D:\tools\ngrok>ngrok http --host-header=adx.local 80

[screenshot: ngrok console output showing the forwarding URL]

Now, as long as that ngrok process is running, you can hit that URL – http://ef736c25.ngrok.io/ – from anywhere on the internet, and it’ll be tunneled to localhost on port 80 and have the host header rewritten to adx.local. This neatly solves the problem of CRM Online not being able to connect to my local Adxstudio instance.

Next, just to give us a bit of insight into what’s going on, I’m going to set up a Runscope bucket for that. Remember – we need to route requests to /cache.axd on our local Adxstudio portal instance, via ngrok, so here’s how to get the Runscope URL you’ll need:

[screenshot: Runscope bucket page showing the bucket URL to use]

So, last step – you see that big URL in the middle?  We need to tell the Adxstudio Web Notifications plugin to notify that URL every time something changes. The option is under CRM > Settings > Web Notification URLs.

Note that the Adxstudio documentation refers to a Configuration screen accessible from Solutions > Adxstudio Portals Base. It appears this screen doesn’t exist any more – I certainly couldn’t find any trace of it in my CRM Online instance – but it also appears it isn’t necessary, because as soon as I’d created an active Web Notification URL, things started happening.

So, now we have something that works – but it still fails on the first Portal request for a particular Contact, probably because the Adxstudio cache isn’t picking up those two new fields fast enough for the login to succeed. To work around this, I’ve put in a thread sleep and then an HTTP redirect, so the first time a user lands on the portal they’ll get a slight delay whilst we populate their Adxstudio attributes, and then they’ll get their personalised screen:

protected void Application_AuthenticateRequest(object sender, EventArgs e) {
  Guid userGuid;
  var cookie = Request.Cookies["crm_contact_guid"];
  if (cookie != null && Guid.TryParse(cookie.Value, out userGuid)) {
    var http = HttpContext.Current;
    var owin = http.GetOwinContext();
    var userManager = owin.Get<ApplicationUserManager>();
    var user = userManager.FindById(userGuid.ToString());
    if (String.IsNullOrEmpty(user.UserName) || String.IsNullOrEmpty(user.SecurityStamp)) {
      using (var crm = new OrganizationService("Xrm")) {
        var entity = crm.Retrieve("contact", userGuid, new ColumnSet(true));
        entity.Attributes["adx_identity_securitystamp"] = Guid.NewGuid().ToString();
        entity.Attributes["adx_identity_username"] = Guid.NewGuid().ToString().Substring(0, 8);
        crm.Update(entity);
        Thread.Sleep(TimeSpan.FromSeconds(5));
        // Redirect back to the same page, so that Adxstudio will
        // retrieve a fresh copy of the cached data.
        http.Response.Redirect(http.Request.RawUrl);
      }
    }

    var identity = user.GenerateUserIdentityAsync(userManager).Result;
    HttpContext.Current.User = new RolePrincipal(identity);
  }
}

And it works. The final step for me was to spin up a separate web app that lists all the Contacts in the CRM system, with a login handler that puts the CRM Contact GUID into a cookie and redirects the browser to http://adx.local/. No registration, no login – any user with a valid CRM Contact GUID can now log directly into the Adxstudio MasterPortal example.
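
That login handler doesn’t need to be anything clever, by the way. Here’s a rough sketch – a hypothetical MVC controller, not our actual implementation – which assumes the listing app and the portal are set up so the portal’s AuthenticateRequest handler can actually see the cookie:

using System;
using System.Web;
using System.Web.Mvc;

public class ImpersonationController : Controller {
  // Sketch only: drop the CRM Contact GUID into the cookie that the
  // Application_AuthenticateRequest handler above looks for, then send
  // the browser over to the Adxstudio portal.
  public ActionResult LogInAs(Guid contactId) {
    // If the listing app and the portal run on different hosts, you'll also
    // need to set the cookie's Domain so the portal can see it. And in
    // anything beyond a demo you'd want Secure = true and a signed or
    // encrypted value - a bare GUID in a cookie isn't real authentication.
    var cookie = new HttpCookie("crm_contact_guid", contactId.ToString()) {
      HttpOnly = true,
      Expires = DateTime.UtcNow.AddHours(1)
    };
    Response.Cookies.Add(cookie);
    return Redirect("http://adx.local/");
  }
}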


Scrum Values

The Scrum Guide has had a makeover. Well, it has had a small but powerful addition – values.

Scrum values have been around for a while. They are now officially part of the Scrum Guide, following overwhelming demand to add them.

Here is a quick run through of what these are:

Commitment – People personally commit to achieving the goals of the Scrum Team

Courage – The Scrum Team members have courage to do the right thing and work on tough problems

Focus – Everyone focuses on the work of the Sprint and the goals of the Scrum Team

Openness – The Scrum Team and its stakeholders agree to be open about all the work and the challenges with performing the work

Respect – Scrum Team members respect each other to be capable, independent people


This is a good opportunity to reflect on how we are doing with these at Spotlight. Please take a couple of minutes to fill in a quick form here.
To read the whole Scrum Guide, please click here.


Affordances, Signifiers, and Cartographobia

One of the teams here is putting the finishing touches on a new online version of Spotlight Contacts, our venerable and much-loved industry guide that started life as a printed handbook way back in 1947. Along the way, we’ve learned some very interesting things about data, and how people perceive that their data is being used.


One of the features of the new online version is that every listing includes a location map – a little embedded Google Map showing the business’ location. When we rolled this feature out as part of a recent beta, we got some very unhappy advertisers asking us to please remove the map from their listing immediately. Now, most of these were freelancers who work from home – so you can understand their concerns. But what’s really interesting is that in most cases, they were quite happy for their full street address to stay on the page – it was just the map that they were worried about.

Of course, this immediately resulted in quite a lot of “what? they want to keep the address and remove the map? ha ha! that’s daft!” from developers – who, as you well know, are prone to occasional outbursts of apoplectic indignation when they have to let go of their abstractions and engage with reality for any length of time – but when you think about it, it actually makes quite a lot of sense.

See, street addresses are used for lots of things. They’re used on contracts and invoices, they’re used to post letters and deliver packages. Yes, you can also use somebody’s address to go and pay them a visit, but there are many, many reasons why you might need to know somebody’s address that have nothing to do with you turning up on their doorstep. In UX parlance, we’d say that the address affords all of these interactions – the presence of a street address enables us to post a letter, write a contract or plan a trip.

A map, on the other hand, only affords one kind of interaction; it tells you how to actually visit somewhere. But because of this, a map is also a signifier. It sends a message saying “come and visit us” – because if you weren’t actually planning to visit us, why would you need to know that Spotlight’s office at 7 Leicester Place is actually in between the cinema and the church, down one of the little alleys that run between Leicester Square and Chinatown? For posting a letter or writing a contract, you don’t care – the street address is enough. But by including a map, you’re sending a message that says “hey – stop round next time you’re in the neighbourhood”, and it’s easy to see why that’s not really something you want if you’re a freelancer working from your home.

It’s important to consider this distinction between affordances and signifiers when you’re designing your user interactions. Don’t just think about what your system can do – think about all the subtle and not-so-subtle messages that your UI is sending.

Here’s the classic Far Side cartoon “Midvale School for the Gifted”, which provides us with some great examples of affordances and signifiers. The fact you can pull the door is an affordance. The sign saying PULL is a signifier – but the handle is both. Looking at it gives you a clue: “hey – I could probably pull that!” – and when you do, voila, the door swings open. If you’ve ever found a door where you have to grasp the handle and push, then you’ve found a false affordance – a handle that’s sat there saying ‘pull me…’ and when you do, nothing happens. And, in software as in the Far Side, there are going to be times when all the affordances and signifiers in the world are no match for your users’ astonishing capacity to ignore them all and persist in doing it wrong.

(Far Side © Gary Larson)


ASP.NET Core 1.0 High Performance

Former Spotlighter James Singleton – who worked on our web team for several years and built some of our most popular applications, including our video/voice upload and playback platform – has just published his first book, ASP.NET Core 1.0 High Performance. Since the book includes one or two things that James learnt during his time here at Spotlight, he was gracious enough to invite me to contribute the foreword – and since the whole point of a foreword is to tell you all why the book is worth buying, I figured I’d just post the whole thing. Read the foreword, then read the book (or better still, buy it then read it.)

TL;DR: it’s a really good book aimed at .NET developers who want to improve application performance, it’s out now, and you can buy your copy direct from packtpub.com.

And that foreword in full, in case you’re not convinced:

“The most amazing achievement of the computer software industry is its continuing cancellation of the steady and staggering gains made by the computer hardware industry.”
– Henry Petroski

We live in the age of distributed systems. Computers have shrunk from room-sized industrial mainframes to embedded devices smaller than a thumbnail. However, at the same time, the software applications that we build, maintain and use every day have grown beyond measure. We create distributed applications that run on clusters of virtual machines scattered all over the world, and billions of people rely on these systems, such as email, chat, social networks, productivity applications and banking, every day. We’re online 24 hours a day, 7 days a week,  and we’re hooked on instant gratification. A generation ago we’d happily wait until after the weekend for a cheque to clear, or allow 28 days for delivery; today, we expect instant feedback, and why shouldn’t we? The modern web is real-time, immediate, on-demand, built on packets of data flashing round the world at the speed of light, and when it isn’t, we notice. We’ve all had that sinking feeling… you know, when you’ve just put your credit card number into a page to buy some expensive concert tickets, and the site takes just a little too long to respond. Performance and responsiveness are a fundamental part of delivering great user experience in the distributed age. However, for a working developer trying to ship your next feature on time, performance is often one of the most challenging requirements. How do you find the bottlenecks in your application performance? How do you measure the impact of those problems? How do you analyse them, design and test solutions and workarounds, and monitor them in production so you can be confident they won’t happen again?

This book has the answers. Inside, James Singleton presents a pragmatic, in-depth and balanced discussion of modern performance optimization techniques, and how to apply them to your .NET and web applications. Starting from the premise that we should treat performance as a core feature of our systems, James shows how you can use profiling tools like Glimpse, MiniProfiler, Fiddler and Wireshark to track down the bottlenecks and bugs that are causing your performance problems. He addresses the scientific principles behind effective performance tuning – monitoring, instrumentation, and the importance of using accurate and repeatable measurements when you’re making changes to a running system to try and improve performance.

The book goes on to discuss almost every aspect of modern application development – database tuning, hardware optimisations, compression algorithms, network protocols, object-relational mappers. For each topic, James describes the symptoms of common performance problems, identifies the underlying causes of those symptoms, and then describes the patterns and tools you can use to measure and fix those underlying causes in your own applications. There’s in-depth discussion of high-performance software patterns like asynchronous methods and message queues, accompanied by real-world examples showing how to implement these patterns in the latest versions of the .NET framework. Finally, James shows how you can not only load test your applications as part of your release pipeline, but can continuously monitor and measure your systems in production, letting you find and fix potential problems long before they start upsetting your end users.

When I worked with James here at Spotlight, he consistently demonstrated a remarkable breadth of knowledge, from ASP.NET to Arduinos, from Resharper to resistors. One day he’d be building reactive front-end interfaces in ASP.NET and JavaScript, the next he’d be creating build monitors by wiring microcontrollers into Star Wars toys, or working out how to connect the bathroom door lock to the intranet so that our bicycling employees could see from their desks when the office shower was free. Since James moved on from Spotlight, I’ve been following his work with Cleanweb and Computing 4 Kids Education. He’s one of those rare developers who really understands the social and environmental implications of technology – that whether it’s delivering great user interactions or just saving electricity, improving your systems’ performance is a great way to delight your users. With this book, James has distilled years of hands-on lessons and experience into a truly excellent all-round reference for .NET developers who want to understand how to build responsive, scalable applications. It’s a great resource for new developers who want to develop a holistic understanding of application performance, but the coverage of cutting-edge techniques and patterns means it’s also ideal for more experienced developers who want to make sure they’re not getting left behind. Buy it, read it, share it with your team, and let’s make the web a better place.

Check it out. The chapter on caching & message queueing is particularly good 🙂


Quality of Life with Git and Pivotal

At Spotlight we are currently using Pivotal Tracker as our planning tool and git for version control. One of the obvious things to do is to connect code changes to stories by including the story id in branch names and commit messages. This allows you to cross-reference between the two and see why a particular code change was made.

But developers are lazy beasts, Pivotal story ids are long, and life is too short to write [#123456789] a gazillion times a day. Let’s see how we can improve things!

Automated branch creation

If you follow the standard git workflow, the first thing to do when starting some work is to create a branch. The branch should have a story number, as well as a meaningful title. Luckily, Pivotal has a very nice REST API we can use directly from curl to free us from the burden of typing the branch name:

#!/bin/bash
TOKEN='pivotal-token-goes-here'
PROJECT_ID='your-project-id-goes-here'
STORY_ID=`grep -o '[0-9]*' <<< "$1"`
NAME=`curl -X GET -H "X-TrackerToken: $TOKEN" "https://www.pivotaltracker.com/services/v5/projects/$PROJECT_ID/stories/$STORY_ID" | grep -o '"name":"[^"]*"' | head -1 | sed "s/'//" | sed s/'"name":"'// | sed s/'"'//g | sed s/' '/'_'/g | sed s/'#'//g | sed s~/~_~g | sed s/,//g`
branchName=${STORY_ID}_${NAME}

git checkout -b $branchName
git push origin -u $branchName

Yes, my bash is horrendous, so if your eyes are melting from reading it, here is what it does: you pass in the Pivotal story id, the script curls the story as JSON, extracts the name and replaces any characters that would upset git. Then it creates a branch and pushes it to origin, setting it as the upstream.

Stick it in a file (e.g. grab.sh), put it somewhere in your PATH and make it executable with chmod 755. For added automation, set it up as a git alias and hey presto! Guaranteed to work 99% of the time.


This gets the job done, the only downside being that the branch names sometimes end up being too long. But that’s just an added incentive to keep the story titles short.

Add [#story id] if you are committing to a story branch

We can (hopefully) assume that nobody will start a non-story branch name with a number, so we can detect story branches at commit time:

#!/bin/bash
BranchName=`git rev-parse --abbrev-ref HEAD`
TicketNo=`grep -o '^[0-9]*' <<< "$BranchName"`
if [ -n "$TicketNo" ]
then
  git commit -m "[#$TicketNo] $1"
else
  git commit -m "$1"
fi

Now you can just type your commit message, and git will add the story number if you are on a story branch. That saves us 11 keystrokes per commit! How cool is that?


And just for completeness, here are the git aliases I am using, which go into .gitconfig:

[alias]
 cm = !commit.sh
 grab = !grab.sh

Enjoy the one-line git-Pivotal experience!


Retroing the Retros

After recently reading the book Scrum Mastery by Geoff Watts, I was reminded of something very important that I learnt after going through the motions of sprint start, sprint end and retro over and over again: people get bored of the same old same old. It is important to keep things engaging with fresh ideas.

When I joined Spotlight over a year ago, everything was new and exciting. While that was the case for me, I had forgotten that everyone else had been doing the same things for a long time. So I decided to bring a little bit of play back into work. Here are a couple of my personal favourites.

The Walking Dead-inspired zombie retro. The team (that’s us), represented by the poor man on the left, prepared to fight the zombie attack. Here we added green stickies representing the tools we need to face the attack – in other words, everything that is helping us succeed.

The zombies on the right represent everything that is standing in the way of our success, so here we added pink stickies for each of those obstacles.

The middle part, covered by orange stickies, represents the measures we are taking to prepare for the zombie attack.

Retro - Zombies

Balloon Flight Retro. This is a much simpler one that focussed on:

  • What is stopping us
  • What is helping us


More ideas are always welcome.

Although I have had a burst of interesting sessions recently, the real challenge is in keeping it going. Watch out for some Legospectives in the near future!



Exupérianism: Improving Things by Removing Things

Last night this popped up on Twitter:

Last year, as part of migrating our main web stack to AWS, we created a set of conventions for things like connection strings and API endpoint addresses across our various environments, and then updated all of our legacy systems to use these conventions instead of fragile per-environment configuration. This meant deleting quite a lot of code, and reviewing pull requests with a lot more red lines than green lines in them – I once reviewed a PR which removed 600 lines of code across fifteen different files, and added nothing. No new files, no new lines – no green at all – and yet that change made one of our most complicated applications completely environment-agnostic. It was absolutely delightful.

When I saw John’s tweet, what instantly came to mind was a quote from Antoine de Saint-Exupéry:

“Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.”

So how about we adopt the term “Exupérian” for any change which improves something by making it smaller or simpler? The commit that removes 600 lines of unnecessary configuration. The copy-edit that turns fifteen thousand words of unstructured waffle into ten thousand words of focused, elegant writing. Maybe even that one weekend you spent going through all the clutter in your garage and finally getting rid of your unwanted lamps and old VHS tapes.

Saint-Exupéry was talking about designing aircraft, but I think the principle is equally applicable to software, to writing, to music, to architecture – in fact, to just about any creative process. I was submitting papers to a couple of conferences last week, and discovered that Øredev has a 1,000-character limit for session descriptions. Turns out my session descriptions all end up around 2,000-3,000 characters, and editing those down to 1,000 characters is really hard. But – it made them better. You look at every single word, you think ‘does it still work if I remove this?’, and it’s surprising how often the answer is ‘yes’.

Go on, give it a try. Do something #exuperian today. Edit that email before you send it. Remove those two classes that you’re sure aren’t used any more but you’re scared to delete in case they break something. Throw out the dead batteries and expired coupons from your desk drawer. Remove a pointless feature nobody really wants.

Maybe you even have an EU cookie banner you can get rid of? 🙂
