Change of MVP focus to Internet Explorer

I was originally awarded MVP C# for the last two years due (I suspect) to my book's coverage of the CLR and language changes. I never felt completely comfortable with the C# focus as I don't consider myself a language expert, and the web has always been my primary area of interest. I was thus pleased to change my MVP award focus to Internet Explorer (although I suspect an Internet Explorer focus is less prestigious than C#!).

I spent some of this week at the MVP summit meeting the Internet Explorer team. It was great to speak to the team directly, understand some of the decisions they have made and get a deep dive into various performance enhancements. I look forward to working with them and hopefully having some input into Internet Explorer.

The team have launched a great competition to see what you can do with HTML5 at http://www.beautyoftheweb.com/#/unplugged

IE 9 and measuring web page performance using window.performance

When optimizing web pages it is useful to measure how long various functions and events take to occur on a page, so you can be sure you are appending pictures of your cat to the DOM as quickly as possible.

However it's actually quite difficult to measure the time various functions and events take to run. Most current methods involve getting the current time at various points on a page and then performing simple date arithmetic. Measuring this way can of course itself skew the test results (although the skew should be fairly consistent). Additionally, John Resig wrote an interesting post after he discovered that some browsers only update their system time around every 15ms (http://ejohn.org/blog/accuracy-of-javascript-time/), so this method won't show micro changes anyway.
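The manual approach typically looks something like the sketch below (timeIt is just an illustrative helper name, not an existing API):

```javascript
// Manual timing via date arithmetic. Subject to the ~15ms clock
// resolution issue mentioned above, so fine for coarse measurements only.
function timeIt(fn) {
  var start = new Date().getTime();
  fn();                                  // run the code being measured
  return new Date().getTime() - start;   // elapsed milliseconds
}

// usage: var elapsed = timeIt(function () { /* code to measure */ });
```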

The W3C has proposed a standard API for measuring performance (you can read it here: http://dvcs.w3.org/hg/webperf/raw-file/tip/specs/NavigationTiming/Overview.html). This isn't actually finished yet so expect there to be a few changes.

We can play with this new API in IE9 (Chrome and the latest stable release of Firefox don't seem to support it yet).

To use the new API we retrieve the window.performance.timing object (note some tutorials such as http://blogs.msdn.com/b/ie/archive/2010/06/28/measuring-web-page-performance.aspx still refer to this as window.msPerformance, but a quick walk of the window object will show we know better..).

The below example shows the syntax:

var timingObj = window.performance.timing;
var navStartTime = new Date(timingObj.navigationStart);

Currently the documentation around some of these properties is a little scarce and it's a bit confusing as to what each is actually measuring, so I will follow this up as I discover more.
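As an example of working with the timing object, the helper below derives a few common intervals. The property names come from the W3C draft linked above; the function takes the timing object as a parameter so it can also be run against recorded data (summarizeTiming is my own name, not part of the API):

```javascript
// Sketch: derive common intervals from a Navigation Timing object.
// In IE9 you would pass in window.performance.timing.
function summarizeTiming(t) {
  return {
    // time spent resolving DNS
    dns: t.domainLookupEnd - t.domainLookupStart,
    // time from sending the request to receiving the last response byte
    response: t.responseEnd - t.requestStart,
    // total time from navigation start to the load event finishing
    total: t.loadEventEnd - t.navigationStart
  };
}

// In the browser: summarizeTiming(window.performance.timing);
```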

Azure deployment – role remains in the starting state

I spent a frustrating week dealing with Azure deployment.

One of the most irritating things for me about Azure is that sometimes you can screw up an Azure package and this won't be revealed until you try to deploy it, as the role will just remain in the starting state. As the time a role takes to start up varies, this is doubly annoying because you don't know whether it has failed yet!

But wait – surely Azure would give you a bit of information regarding why it cannot start up your role?

Well, no, although this potentially changes with Azure Tools 1.3. Version 1.3 of the tools allows you to remote desktop into the role, which may or may not offer additional information..

So what kind of things can cause a role to remain in this state and leave you considering sacrificing small animals to the Azure gods?

From my experience:

1) Not including required assemblies – make sure all necessary assemblies are set to copy local. Azure is not your machine and may not know about your assemblies
2) Corrupt configuration
3) Storage wrongly configured, e.g. leaving your role pointing at dev storage
4) Wrongly configured or missing certificates
5) The moon moving into Venus's orbit..

When you package Azure roles they are encrypted by default (devfabric packages are not), which can make it tricky to spot missing assemblies etc. You can disable this by creating a new system environment variable called _CSPACK_FORCE_NOENCRYPT_ and setting it to true (see http://blogs.msdn.com/b/jnak/archive/2009/04/16/digging-in-to-the-windows-azure-service-package.aspx). You can then change the .cspkg extension to .zip and browse the contents. Note the team say this technique is unsupported, so it may stop working in a future version of the tools.

Good luck!

History API in HTML5

A common issue when loading content through Ajax techniques is that the browser's back and forward buttons sometimes won't respond how the user expects. For example, if you change a page's content in response to the click of a button and the user then clicks back expecting to return to the previous content, it's unlikely the application will function as expected.

HTML5 introduces a History API allowing you to easily manipulate the browser's history and also hold state on each entry. This is already pretty well supported across modern browsers so it is worth looking into now.

Below are some examples of how to use this:

history.pushState(stateObj, "page 1", "IWasNeverReallyLoaded.html");
history.replaceState(stateObj, "page 1", "page5.html");
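To make the difference between pushState and replaceState concrete, here is a toy in-memory model of the session history stack (an illustration of the semantics only, not the real browser API):

```javascript
// Toy model of the session history stack, illustrating
// pushState vs replaceState semantics.
function FakeHistory() {
  this.stack = [{ state: null, url: 'index.html' }];
  this.pos = 0;
}
FakeHistory.prototype.pushState = function (state, title, url) {
  // pushing discards any "forward" entries, then appends a new one
  this.stack = this.stack.slice(0, this.pos + 1);
  this.stack.push({ state: state, url: url });
  this.pos = this.stack.length - 1;
};
FakeHistory.prototype.replaceState = function (state, title, url) {
  // replacing overwrites the current entry in place
  this.stack[this.pos] = { state: state, url: url };
};
FakeHistory.prototype.back = function () {
  if (this.pos > 0) { this.pos--; }
  // the browser would fire a popstate event carrying this entry's state
  return this.stack[this.pos];
};
```

In the real API, listening for the popstate event is what lets you restore the right content when the user clicks back.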

Introducing WebAdvisor – the web quality tool!

I have started work on an application that I am tentatively calling WebAdvisor. The idea is that it will analyze HTML for bad practices based on simple string-matching rules and then direct the user to documentation showing a better way of doing things.

For example it might pick up stuff like:

<div onclick="javascript:alert('Couldnt you add me a better way?')"></div>

This came out of research I am currently doing for a presentation on JavaScript best (and worst) practices.

JavaScript has a tool called JSLint that will analyze JavaScript for issues. At first I considered how to test JavaScript without a browser (and there are a couple of .NET JavaScript processors that can do this) but then decided that simple string matching and manipulation could catch many issues.

I plan to create several different types of test, e.g. HTML (heh, do we need to check for use of blink or marquee!), JavaScript, security etc, all as pluggable MEF modules.
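A minimal sketch of the string-matching idea (the rule format, advice text and function names here are purely illustrative, not WebAdvisor's actual design):

```javascript
// Sketch: run simple regex rules over raw HTML and
// collect advice for each rule that matches.
var rules = [
  {
    pattern: /onclick\s*=/i,
    advice: 'Prefer unobtrusive event handlers over inline onclick attributes.'
  },
  {
    pattern: /<(blink|marquee)\b/i,
    advice: 'The blink and marquee elements should never be used.'
  }
];

function analyzeHtml(html, ruleSet) {
  var findings = [];
  for (var i = 0; i < ruleSet.length; i++) {
    if (ruleSet[i].pattern.test(html)) {
      findings.push(ruleSet[i].advice);
    }
  }
  return findings;
}
```

Each rule set could then live in its own MEF plug-in, all exposing the same check-and-advise shape.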

Originally I was thinking this process could be integrated into a build. However most web applications are composed of many components, so I think this is going to have to analyze the HTML output (at least initially).

Anyway, I have done an initial check-in of the project – not too much there at the moment, but let me know if you think this is a good/bad idea.

 

IE9, Video and HTML5

There is an interesting post (mainly concentrating on various patent issues) on the IE blog announcing that IE will support the H.264 video standard (note that, through an additional install, IE9 will also support Google's WebM format, which was looking the best bet until recently).

This got me thinking about HTML5 and Video – is this something you should use now?

Hmm, well, let's start at the beginning.. HTML5 contains a video tag (there is also a very similar audio one) giving you the ability to embed video on a web page.

The below code shows an example of how to do this:

<video id="Video" height="500" width="500">
  <source src="billyBrowsers.ogg" type='video/ogg; codecs="theora, vorbis"'>
</video>

Pretty easy huh? There are also a number of other attributes you can add to the video element, such as loop (guess what this does!) and controls (asks the browser to render playback controls).

This has a number of advantages:

  • No plug-in required! (although codecs are necessary – see IE blog link above)
  • Can be indexed by search engines
  • Video is a DOM element so can be manipulated – Mozilla have a cool example of this.

However not all browsers support HTML5 yet, so what's a dev to do?

Hmm, well hopefully you are designing your application using the philosophy of progressive enhancement. One way to outsource the complexity of this is to use a third-party player such as SublimeVideo or Open Standard Media Player, which will attempt to use HTML5 to play content and fall back to Flash if necessary.
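If you do roll your own fall-back, the usual first step is feature detection. The sketch below passes the video element in as a parameter so the logic can be exercised outside a browser (canPlay is my own helper name):

```javascript
// Sketch: detect whether HTML5 video is available and whether a given
// MIME type is playable. canPlayType returns '', 'maybe' or 'probably'.
function canPlay(videoElement, mimeType) {
  if (!videoElement || typeof videoElement.canPlayType !== 'function') {
    return false; // no HTML5 video support at all, so fall back to Flash
  }
  var answer = videoElement.canPlayType(mimeType);
  return answer === 'maybe' || answer === 'probably';
}

// In the browser:
// canPlay(document.createElement('video'), 'video/ogg; codecs="theora, vorbis"');
```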

Azure Service Management REST API HTTP status codes

I am currently working on scripting a complex Azure deployment.

Azure provides a REST API that you can call to perform many tasks. To make this API a bit easier to work with, Microsoft released the Azure cmdlets, which provide a wrapper for some of this functionality.

It is worth noting that the service API doesn't allow you to do everything you will want to do. For example, a weird omission is that you cannot currently script the creation of storage services.

Anyway I digress!

Whilst creating a wrapper for some of the API calls I ran into a number of issues that I wouldn't want anyone else to waste their time on. The Azure Service Management API isn't the most descriptive of things when you screw up, and most of the time will just give you an HTTP status code to indicate what happened.

I love the purist nature of this and it does make sense, but surely Microsoft could have included some more information given you have to be authenticated anyway?

Anyway, if you are working with the API and having problems, here are a few things to check:

401 (unauthorized) – check you haven't exceeded your service deployment allowance; if you are deleting a service slot you will also need to suspend any running roles
403 (forbidden) – check your API certificate and that it has been uploaded. Note the .NET certificate APIs have an overload that allows you to retrieve only valid certificates – if you issued the certificate yourself it won't be considered valid!
409 (conflict) – service names must be globally unique – check yours is!
500 – check your request is valid
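In a wrapper it can be handy to fold the notes above into a simple lookup so failures at least log a hint (the hint text is just my own notes, not official guidance):

```javascript
// Map Azure Service Management HTTP status codes to likely causes.
var statusHints = {
  401: 'Unauthorized - check deployment allowance; suspend roles before deleting a slot.',
  403: 'Forbidden - check the management certificate is valid and uploaded.',
  409: 'Conflict - service names must be globally unique.',
  500: 'Server error - check the request body is valid.'
};

function explainStatus(code) {
  return statusHints[code] || 'HTTP ' + code + ' - no hint recorded.';
}
```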

I also noticed that Fiddler seems to interfere with requests, resulting in 403 Forbidden errors.

Deploying to Azure – The path is too long after being fully qualified. Make sure the full path is less than 260 characters and the directory name is less than 248 characters.

This week I was writing some PowerShell scripts to deploy our application to Azure (blog post to follow to save others the trouble of working out how to do this).

When I deployed this week, after the team had been doing some refactoring, I got the error:
The path is too long after being fully qualified.  Make sure the full path is less than 260 characters and the directory name is less than 248 characters.

Agrahhh! OK, I'll admit I am not a big fan of our very long naming convention, but even so none of the paths actually exceed this length. The problem is due to the Development Fabric using a temp directory behind the scenes. For a full explanation please see: http://blogs.msdn.com/b/jnak/archive/2010/01/14/windows-azure-path-too-long.aspx

This can be resolved (unless you have really long paths!) by creating an environment variable called _CSRUN_STATE_DIRECTORY and setting it to a short directory such as c:\a\. Azure will then use this directory instead, which should hopefully get around the issue.

SQL CE and Sync framework – A duplicate value cannot be inserted into a unique index.

I am currently working on a project utilizing the Sync Framework and SQL CE and ran into an annoying bug with identity fields that I wanted to make other developers aware of.

If you use the Sync Framework to sync to a SQL CE database and you use identity fields, then after the sync has occurred the identity fields' seed values will be reset to 1.

This means that should you then try to insert data into one of these tables, you will receive the error "A duplicate value cannot be inserted into a unique index", as SQL CE believes the next value for the identity field is 1.

You can verify this is happening by examining the information_schema view before and after the sync:

select * from information_schema.columns where table_name = '<tablename>' and column_name = '<columnName>'

The solution?

Well, I spoke to the Sync Framework team, who said they have no plan to fix this issue (unbelievable!), so you have two main options:

Do an insert (this will fail); SQL CE then seems to sort out the correct identity numbering and you are good to go

or

Alter the table and reset the seed value – note SQL CE doesn't support the DBCC CHECKIDENT command:

For each table in the sync scope:

-- get the current maximum identity value
SELECT MAX(RecordId) AS MaxId FROM [test];

-- reseed the identity column; in SQL CE the new seed is used for the
-- next inserted row, so substitute MaxId + 1 for the value found above
ALTER TABLE [test] ALTER COLUMN [RecordId] IDENTITY (<MaxId + 1>, 1);

Ideally when you are performing synchronization you don't want to be using identity fields (bad things would happen with more than one client!) but rather GUIDs. In our project, however, only one database can be master and we are prevented from making schema changes.