Stack Overflow Driven Development

The rise of Stack Overflow has certainly changed how many programmers go about their trade.

I have recently been learning some new client side web skills because I need them for a new project. I have noticed that the way I go about learning is quite different from the way I used to learn pre-web.

I used to have a standard technique. I’d go through back issues of magazines I’d bought (I used to have hundreds of back issues) and read any articles related to the new technology. Then I’d purchase a book about the topic, read it and start a simple starter project. Whilst doing the starter project, I’d likely pick up a couple of extra books and skim them to find techniques I needed for the project. This method worked pretty well; I’d be working idiomatically, without a manual, within anywhere from one to three months.

Using the old method, if I got stuck on something, I’d have to figure it out on my own. I remember it took three days to get a simple window to display when I was learning Windows programming in 1991. Without the internet, there was nobody you could ask when you got stuck. If you didn’t own the reference materials you needed, then you were stuck.

Fast forward twenty years and things are rather different. For starters, I don’t have a bunch of magazines sitting around. I don’t even read tech magazines any more, either in print or digitally. None of my favourite magazines survived the transition to digital.

Now when I want to learn a new tech, I head to Wikipedia first to get a basic idea. Then I start trawling Google for simple tutorials. I then read one of the new generation of short introductory books on my Kindle.

I then start my project safe in the knowledge that Google will always be there. And, of course, Google returns an awful lot of Stack Overflow pages. Whilst I would have felt very uncomfortable starting a project without a full grasp of a technology twenty years ago, now I think it would be odd not to. The main purpose of the initial reading is to get a basic understanding of the technology and, most importantly, the vocabulary. You can’t search properly if you don’t know what to search for.

Using my new approach, I’ve cut my learning time from one to three months down to one to three weeks.

The main downside to my approach is that, at the beginning at least, I may not write idiomatic code. But, whilst that is a problem, software is very malleable and you can always rewrite parts later on if the project is a success. The biggest challenge now seems to be getting to the point where you know a project has legs as quickly as possible. Fully understanding a tech before starting a project just delays the start, and I doubt you’ll get that time back later in increased productivity.

Of course, by far the quickest approach is to use a tech stack you already know. Unfortunately, in my case that wasn’t possible because I don’t know a suitable client side tech. It is a testament to the designers of Angular.js, SignalR and NancyFX that I have found it pretty easy to get started. I wish everything was so well designed.


Capturing loopback traffic without a loopback interface

Wireshark is a wonderful tool, no doubt about it. But, on Microsoft Windows, there is one thing it isn’t so good at.

Microsoft decided to remove the local loopback interface in Windows 7. So capturing loopback traffic is rather difficult without modifying your system, something I try to avoid if at all possible.

There are ways to install a loopback interface if you want, as documented here, along with other means of achieving the same effect.

Unless you need to do a lot of capturing, the chances are you’re going to want an easier, quicker way.

Happily, somebody has thought of that. Just download the RawCap utility, kindly provided for free by NETRESEC. There is nothing to configure, nothing to install and nothing to unzip; simply copy the executable into your utils folder.

Here’s how to run RawCap:

RawCap.exe 1 dumpfile.pcap

The 1 is the identifier for the loopback interface and dumpfile.pcap is the output file. If you’re not sure, just run RawCap.exe and you’ll be prompted.

The output file is in PCAP format, so it’s a snap to load into Wireshark for later analysis.
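If you want a quick sanity check of the capture before firing up the Wireshark GUI, tshark (the command line version of Wireshark) will read the same file, assuming it is installed alongside Wireshark:

tshark -r dumpfile.pcap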


Where does a failure manifest itself first?

A network monitoring tool periodically makes a request to a system end point and records the result in a database of some kind.

Whether the polling interval is every few seconds, one minute, ten minutes or longer, there is an awful lot of time when the network monitor has nothing meaningful to say about the state of the end point.

The network monitor is unlikely to be the first system to spot a problem. If the network monitor won’t be the first to spot a problem, what will?

In our systems, the first place a problem will manifest itself is in a log of some kind, be that a text based log or something like the Windows event log.

If your website returns a 5XX status code, then your log file will record the fault long before your network monitor makes a request that returns a 5XX code.

What time difference am I talking about here?

It depends a lot on how often you poll the end point. If you are polling every minute or faster, then the difference is likely to be pretty insignificant. If you are polling every five minutes, then the difference could be significant.

But it isn’t just that you will be informed more quickly by going to the source of the failure; you will also get better information.

Monitoring an end point will only tell you so much: whether it is working, the response time and maybe, if you’re lucky, a response code.

In the case of our logs, when a 5XX status code is returned we’ll probably get the full exception message plus stack trace. Altogether a lot more useful.
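As a rough illustration of the idea, and not our actual monitoring code, here is a minimal C# sketch that tails a log file and flags any line containing a 5XX status code. The log path and the matching pattern are made up examples:

using System;
using System.IO;
using System.Text.RegularExpressions;
using System.Threading;

class LogWatcher
{
    static void Main()
    {
        // Hypothetical log location; point this at your own web server log.
        var logPath = @"C:\logs\website.log";

        using (var stream = new FileStream(logPath, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
        {
            // Skip everything already in the file; only new entries are of interest.
            stream.Seek(0, SeekOrigin.End);

            using (var reader = new StreamReader(stream))
            {
                while (true)
                {
                    var line = reader.ReadLine();
                    if (line == null)
                    {
                        Thread.Sleep(1000); // wait for the next entry to be written
                        continue;
                    }

                    // Naive check: flag the fault the moment a 5XX code hits the log.
                    if (Regex.IsMatch(line, @"\b5\d\d\b"))
                        Console.WriteLine("Fault logged: " + line);
                }
            }
        }
    }
}

The point being that the fault is spotted as soon as it is written to the log, rather than whenever the next poll happens to come around.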

tl;dr monitor your primary sources, don’t rely on secondary sources.


The Last of the Savages

Ray Kurzweil has a history of making accurate future forecasts. One of them is that the 3D printer is coming and the current ones are but a small hint of what is to come.

That got me thinking. We are quite possibly the last generation to have a direct connection between raw materials and the end products made from them.

Imagine your far distant descendants ordering a steak from their Acme Wondermatic 5000 3D printer.

The steak itself would be made of animal protein but would not have been grown on an animal. I’m not saying that it is morally wrong to slaughter an animal for meat, although I do understand why some people think it is. What I am saying is that people who are completely divorced from a world of producing things the messy way may think the way we do it today is pretty savage.

A stone age man looking at a modern supermarket would ‘get’ it. He had the inconvenience of actually hunting and gathering, and we don’t. But the food itself, and where it came from, is at least recognisably the same.

If the cave man wanted meat, he hunted an animal and slaughtered it. If I want a steak, somewhere down the supply chain, an animal is slaughtered.

With the advent of the 3D printer, the connection between the means of production and the end product is about to be broken.

I can’t say that upsets me at all.


Oodles of disk space, just not in the right place

Over the last few months we’ve been having some email troubles. At first, the emails would start to back up with our backup email provider and then almost immediately begin to flow normally again. More recently, the periods when email was backing up came around faster and lasted longer.

Intermittent problems are a nightmare to diagnose. Was the problem our broadband, the router, our network, the email server or, the old favourite, DNS?

I wrongly thought it was the DNS because, well, it almost always is. Turns out the problem was a good deal simpler than that.

The email server has a largish disk: over a terabyte, split into a system partition (drive C:) and a data partition (drive D:).

With Microsoft Exchange, if the available disk space on the drive where the hub transport queue is stored falls below 4GB, Exchange throttles incoming mail traffic.

Unfortunately, the default location for the Microsoft Exchange hub transport queue is the same partition Exchange is installed to. And if you are using Exchange bundled with Small Business Server (SBS), that will be the system partition.

Fixing the problem was very simple: tell Exchange to place the queue on the data partition, restart the transport service and everything works fine.
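If memory serves, the queue location is controlled by a couple of settings in EdgeTransport.exe.config in the Exchange Bin folder. A rough sketch of the change, where the D:\ paths are just example locations on the data partition:

<appSettings>
  <!-- Move the transport queue database and its logs off the system partition. -->
  <!-- Example paths only; use whatever folder suits your data partition. -->
  <add key="QueueDatabasePath" value="D:\ExchangeQueue" />
  <add key="QueueDatabaseLoggingPath" value="D:\ExchangeQueue\Logs" />
</appSettings>

Restart the Microsoft Exchange Transport service afterwards for the change to take effect.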

As usual, fixing the problem is trivial when the exact cause of the problem has been identified.

The first takeaway is that I really should have been concentrating on the Exchange logs a lot sooner rather than assuming it was a connectivity problem. The logs would have told me that emails were reaching the server and that the server was rejecting them. That would have ruled out connectivity as a root cause.

The second is my thankfulness for having secondary MX records for our domain. Backup email servers cost us around $25 per year and just saved us from losing a single email. Thank you Cloudfloor DNS.

No emails were harmed in the making of this blog post.


Open source, open conflict?

I am currently messing around in the pits of .NET e-commerce. I thought it would be the last place I’d find open source inspired disharmony. But no, even here it is to be found. ;)

OK, a bit of background.

NOP Commerce is an ecommerce platform based on Microsoft’s open source ASP.NET platform. The project has been around for five or six years and gets very good reviews too. Last year SmartStore.NET forked NOP. Nothing wrong with that, NOP is GPL’ed. That would be fine except for a clause in NOP’s license which states that you must keep a link in your website footer to the project website unless you pay a small $60 waiver fee.

The problem, and the tension, comes from SmartStore.NET having removed the link requirement from their fork.

Whatever the legalities involved, and I am not legally qualified to comment either way, the SmartStore.NET fork doesn’t feel right. The NOP guys have put a ton of work into the project and they deserve better.

The sad thing is that there is a lot in SmartStore.NET to like. Wouldn’t a better option have been to merge the changes into NOP Commerce so that everybody wins?

Update: if you are after a .NET based e-commerce system then Virto Commerce is worth a look. Looks to be maturing quickly.


Software delivery with even less friction

I’ve talked before about the joys of continuous software delivery.

Well, I’ve been building a couple of micro sites recently and thought it would be nice to try out a few new technologies and techniques.

Firstly, I’ve built them with HTML5 and the Twitter Bootstrap framework; there’s a very good tutorial here. Bootstrap provides a combination of CSS and JavaScript that makes it easy to build a clean, responsive site without having to worry about cross browser compatibility.
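For anyone who hasn’t seen it, a Bootstrap page is little more than a stylesheet, jQuery and a script include. A minimal sketch, where the local file paths are placeholders for wherever you have put the Bootstrap and jQuery files:

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <title>Micro site</title>
  <!-- Placeholder path; point at your copy of the Bootstrap stylesheet. -->
  <link rel="stylesheet" href="css/bootstrap.min.css">
</head>
<body>
  <div class="container">
    <h1>Hello, Bootstrap</h1>
    <a class="btn btn-primary" href="#">Subscribe</a>
  </div>
  <!-- Bootstrap's JavaScript components depend on jQuery. -->
  <script src="js/jquery.min.js"></script>
  <script src="js/bootstrap.min.js"></script>
</body>
</html>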

We use TeamCity and Octopus Deploy for another much larger site. For micro sites I think that Octopus is overkill. A large complicated deployment workflow isn’t necessary for a simple micro site. Octopus really comes into its own when you need to deploy to a large web farm, or you need a workflow prior to releasing the website into production.

I thought I’d give Appharbor a go. Appharbor is a newish form of web hosting. They host your website as you’d expect, but they’ve also integrated source code control and source code building into their offering.

All you need to do is create an ASP.NET website project, initialize your git repository and push it to Appharbor. Your website is then built, any tests are executed and, if you’re green, the website is deployed.
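The push itself is just plain git. A rough sketch of the first deployment, where the remote URL is a placeholder for whatever repository URL Appharbor gives you:

git init
git add .
git commit -m "First cut of the micro site"
git remote add appharbor <your-appharbor-repository-url>
git push appharbor master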

Appharbor is focused on the .NET space. Windows Azure offers a similar, though more expensive, option. Other languages have similar services available, like Heroku and Brightbox.

One major lesson learned: your tests need to give you a high degree of confidence that your site will work in production. Your tests are the only gatekeeper stopping you from deploying a broken site. So, if your tests are green, the site had better work.

I’ve aimed for 100% code coverage with the tests and have managed to achieve 94.41%. The other 5-odd percent is the code calling the API of a third party website. It is tough to test code talking to external services, so I test that code manually. Not ideal, but I can live with it.

The micro sites in question are C# Weekly, launching early next year, and F# Weekly, due a little bit later.

The visuals on the sites are a little bit minimal, but everything does work.


Social animals

I volunteered for a rabbit sanctuary a few weeks ago. I stumbled onto Camp Nibble’s website and saw the advert for volunteers to help with packing groceries in a supermarket.

Looking back on it, my abiding memory is the social difficulty a lot of people had when dealing with a charity bag packer.

Is it really so difficult just to say no?

I packed groceries for a couple of hours and I noticed that each person’s decision whether to accept or decline assistance was often dictated by the decision of the person before them.

You’d get a queue of people at the checkout and if the first person agreed, then the rest of the queue would agree. Same with an initial decline.

So you’d get very busy periods and very quiet periods all dictated by a single person agreeing or declining.

P.S. If you want to donate to this very worthy charity, please visit their donation page here.


Everything has metrics, even this blog

I was struck by something that Tim Nash said at the August WordPress Leeds meeting. He gave an interesting talk about blog metrics. One of those metrics was the number of subscribers.

This blog has around 600 to 880 subscribers, depending upon when you happen to log into Feedburner. Things have become a lot less stable since the demise of Google Reader.

One of the things that Tim said is that, if your blog is not enlisting new subscribers, it is probably not hitting the sweet spot with visitors.

New Email Subscribers by Year

Email subscribers to The Tech Teapot broken down by year. The first post to the blog was in November 2006.
Year    New Subscribers
2013    2
2012    2
2011    2
2010    10
2009    21
2008    254
2007    252
2006    0

That’s a grand total of 543 email subscribers.

I’ve not blogged much in the last couple of years so I can’t say I’m surprised nobody has subscribed. This blog has been the online equivalent of an abandoned wild west town.

Probably hasn’t helped that the blog boom, such as it was, has long since passed. A lot of the conversations that were happening on blogs have now moved over to Twitter.

Still a place for the blog though. Kinda hard talking about sys admin in 140 characters. :)

P.S. One of the more humbling things about the email subscribers is that a lot of them have been subscribed for getting on for 7 years. I bow down to your fortitude. ;)


Top 5 Open Source Event Correlation Tools

Networks create lots of events. Sometimes thousands per minute.

Events can be SNMP traps generated by a server rebooting, syslog messages, Microsoft Windows event log entries and so on.

How do you know which events matter? Which ones are actually telling you something important?

That is where event correlation tools come in handy. You feed all of the events into the tool, as well as a description of the structure of your systems, and its job is to flag up the important ones.

  1. Simple Event Correlator (SEC) – SEC is a lightweight, platform independent event correlation tool written in Perl (see the example rule after this list). Project registered with Sourceforge on 14th Dec 2001.
  2. RiverMuse – correlates events, alerts and alarms from multiple sources into a single pane of glass. Open core with a closed enterprise product cousin.
  3. Drools – a suite of tools written in Java including Drools Guvnor – a business rules manager, Drools Expert – rule engine, jBPM 5 – process / workflow, Drools Fusion – event processing / temporal reasoning and OptaPlanner – automated planning.
  4. OpenNMS – whilst not a dedicated event correlation tool, OpenNMS does contain an event correlation engine based upon the Drools engine mentioned above.
  5. Esper (and Nesper) – Esper is a Java based component for complex event processing (Nesper is the .NET version of Esper).
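To give a flavour of what these tools do, here is a rough sketch of a SEC rule that flags three failed SSH logins for the same user within a minute. The log pattern is only an example and would need adapting to your own log format:

type=SingleWithThreshold
ptype=RegExp
pattern=sshd\[\d+\]: Failed password for (\S+)
desc=Repeated failed logins for user $1
action=write - Possible brute force attack against user $1
window=60
thresh=3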

If you want a survey of event correlation techniques and tools, you could do a lot worse than read Andreas Müller’s master’s thesis titled Event Correlation Engine. It is a few years old, but is still pretty current.