Software the old-fashioned way

I was clearing out my old bedroom, after many years of nagging by my parents, when I came across two of my old floppy disk boxes. Contained within is a small snapshot of my personal computing from 1990 through to late 1992. Nothing from before or after those dates survives, I'm afraid.

The archive contains loads of backups of work I produced, now stored on GitHub, as well as public domain / shareware software, magazine cover disks and commercial software I purchased. Yes, people used to actually buy software. With real money. A PC game back in the late 1980s cost around £50 in 1980s money. According to this historic inflation calculator, that would be £117 now. Pretty close to a week's salary for me at the time.

One of my better discoveries from the late 1980s was public domain and shareware software libraries. Back then there were a number of libraries, usually advertised in the small ads at the back of computer magazines.

This is a rundown of how you'd use a typical software library:

  1. Find an advert from a suitable library and write them a nice little letter requesting they send you a catalog. Include payment as necessary;
  2. Wait for a week or two;
  3. Receive a small, photocopied catalog with lists of floppies and a brief description of the contents;
  4. Send the order form back to the library with payment, usually by cheque;
  5. Wait for another week or two;
  6. Receive a small padded envelope through the post with your selection of floppies;
  7. Explore and enjoy!

If you received your order within two weeks you were doing well. After the first order, with the catalog to hand, you could get an order turned around in about a week. A week was pretty quick for almost anything back then.

The libraries were run as small home businesses. They were the perfect second income. Everything was done by mail; all you had to do was send out catalogs on request and process orders.

One of the really nice things about shareware libraries was that you never really knew what you were going to get. Whilst you'd have an idea of what was on the disk from the description in the catalog, there'd be a lot of programs that were not described. Getting a new delivery was like a mini MS-DOS based text adventure, discovering all of the neat things on the disks.

The libraries contained lots of different things, mostly shareware applications of every kind you can think of. The most interesting to me as an aspiring programmer was the array of public domain software. Public domain software was distributed with the source code. There is no better learning tool when programming than reading other people's code. The best code I've ever read was the CLIPS source, a forward-chaining expert system shell written by NASA.

Happy days :)

PS All of the floppies I’ve tried so far still work :) Not bad after 23 years.

PPS I found a letter from October 1990 ordering ten disks from the library.

[Image: letter ordering disks]


Early 1990s Software Development Tools for Microsoft Windows

The early 1990s were an interesting time for software developers. Many of the tools that are taken for granted today made their debut for a mass market audience.

I don't mean that the tools were not available previously. Both Smalltalk and LISP sported what would today be considered modern development environments all the way back in the 1970s, but hardware requirements put the tools well beyond the means of regular Joe programmers. Not too many people had workstations at home, or in the office for that matter.

I spent the early 1990s giving most of my money to software development tool companies of one flavour or another.

Actor was a combination of an object-oriented language and a programming environment for very early versions of Microsoft Windows. There is a review of Actor version 3 in InfoWorld magazine that makes interesting reading. It was somewhat similar to Smalltalk, but rather more practical for building distributable programs. Unlike Smalltalk, it was not cross-platform, but on the plus side programs did look like native Windows programs. It was very much ahead of its time in terms of both the language and the programming environment, and it ran on pretty modest hardware.

I gave Borland quite a lot of money too. I bought Turbo Pascal for Windows when it was released, having bought regular old Turbo Pascal v6 for DOS a year or so earlier. The floppy disks don't have a version number on them, so I have no idea which version it is. Turbo Pascal for Windows eventually morphed into Delphi.

I bought Microsoft C version 6. Though it introduced a DOS-based IDE, it was still very much an old-school C compiler. If you wanted to create Windows software, you needed to buy the Microsoft Windows SDK at considerable extra cost.

Asymetrix ToolBook was marketed in the early 1990s as a generic Microsoft Windows development tool. There are old InfoWorld reviews here and here. Asymetrix later repositioned the product as a learning authoring tool. I rather liked it, though it didn't really have the performance and flexibility I was looking for. Distributing your finished work was also not a strong point.

Microsoft Quick C for Windows version 1.0 was released in late 1991. Quick C bundled a C compiler with the Windows SDK so that you could build 16-bit Windows software. It also sported an integrated C text editor, resource editor and debugger.

The first version of Visual Basic was released in 1991. I am not sure why I didn't buy it; I imagine there was some programming language snobbery on my part. I know there are plenty of programmers of a certain age who go all glassy-eyed at the mere thought of BASIC, but I'm not one of them. Visual Basic also had an integrated editor and debugger.

Both Quick C and Visual Basic are the immediate predecessors of the Visual Studio product of today.


New Aviosys IP Power 9820 Box Opening

A series of box opening photos of the newly released Aviosys IP Power 9820, an 8-port rack-mountable power switch, which arrived in the office this morning. The new model replaces the older IP Power Switch 9258-PRO.

The new model is higher powered and supports Wi-Fi. Live charts display energy consumption in Wh (watt-hours), current in amps, voltage and temperature, and the LCD display shows the temperature, voltage, IP address and current for each port.


I’ve a feeling we’re not in Kansas any more

I was researching a follow-up to my how will cloud computing change network management post when I came across something rather odd that I'd like to share with you before the follow-up is done.

Below is a series of graphs, culled from Google Trends, showing the relative search volumes for various network management related keywords.

What is the most significant feature of them? What struck me is the decline, with varying degrees of steepness. The searches don't just represent commercial network management tools; there are open source projects and open core products there too. I even put in searches for network management protocols like SNMP and NetFlow. They all show declines.

[Google Trends charts: search trends for Netcool, NetFlow, OpenNMS, OpenView, sFlow, syslog, Zenoss, IPFIX, MRTG, Nagios and SNMP]

The only search not showing a decline is Icinga. But that may just be because it is a relatively recent project, so it doesn't have the history of higher search volumes it probably would have had if it were a bit older.

[Google Trends chart: Icinga search trend]


Stack Overflow Driven Development

The rise of Stack Overflow has certainly changed how many programmers go about their trade.

I have recently been learning some new client-side web skills because I need them for a new project. I have noticed that the way I go about learning is quite different from the way I used to learn pre-web.

I used to have a standard technique. I'd go through back issues of magazines I'd bought (I used to have hundreds of back issues) and read any articles related to the new technology. Then I'd purchase a book about the topic, read it and start a simple starter project. Whilst doing the starter project, I'd likely pick up a couple of extra books and skim them to find techniques I needed for the project. This method worked pretty well: I'd be working idiomatically, without a manual, in anywhere from one to three months.

Using the old method, if I got stuck on something, I’d have to figure it out on my own. I remember it took three days to get a simple window to display when I was learning Windows programming in 1991. Without the internet, there was nobody you could ask when you got stuck. If you didn’t own the reference materials you needed, then you were stuck.

Fast forward twenty years and things are rather different. For starters, I don’t have a bunch of magazines sitting around. I don’t even read tech magazines any more, either in print or digitally. None of my favourite magazines survived the transition to digital.

Now when I want to learn a new tech, I head to Wikipedia first to get a basic idea. Then I start trawling Google for simple tutorials. After that I read one of the new generation of short introductory books on my Kindle.

I then start my project safe in the knowledge that Google will always be there. And, of course, Google returns an awful lot of Stack Overflow pages. Whilst twenty years ago I would have felt very uncomfortable starting a project without a full grasp of a technology, now I think it would be odd not to. The main purpose of the initial reading is to get a basic understanding of the technology and, most importantly, the vocabulary. You can't search properly if you don't know what to search for.

Using my new approach, I’ve cut my learning time from one to three months down to one to three weeks.

The main downside to my approach is that, at the beginning at least, I may not write idiomatic code. But, whilst that is a problem, software is very malleable and you can always rewrite parts later on if the project is a success. The biggest challenge now seems to be getting to the point where you know a project has legs as quickly as possible. Fully understanding a tech before starting a project just delays the start, and I doubt you'll get that time back later in increased productivity.

Of course, by far the quickest approach is to use a tech stack you already know. Unfortunately, in my case that wasn't possible because I don't know a suitable client-side tech. It is a testament to the designers of Angular.js, SignalR and NancyFX that I have found it pretty easy to get started. I wish everything was so well designed.


Capturing loopback traffic without a loopback interface

Wireshark is a wonderful tool, no doubt about it. But, on Microsoft Windows, there is one thing it isn’t so good at.

Microsoft decided to remove the local loopback interface in Windows 7, so capturing loopback traffic is rather difficult without modifying your system, which is something I try to avoid if at all possible.

There are ways to install a loopback interface if you want, as documented here; the same link also covers other means of achieving the same effect.

Unless you need to do a lot of capturing, the chances are you’re going to want an easier, quicker way.

Happily, somebody has thought of that. Just download the RawCap utility, kindly provided for free by NETRESEC, and you don't need to configure anything. You don't even need to install anything or unzip anything: just download it and copy it into your utils folder.

Here’s how to run RawCap:

RawCap.exe 1 dumpfile.pcap

The 1 is the identifier for the loopback interface, and dumpfile.pcap is the output file. If you're not sure which identifier to use, just run RawCap.exe with no arguments and you'll be prompted to pick an interface.

The output file is in PCAP format, so it’s a snap to load into Wireshark for later analysis.
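For what it's worth, here's roughly how a session looks when I'm poking at a local web service; the port number below is just an example from my setup, so substitute whatever your service actually listens on. Start the capture, exercise the service, then stop RawCap (Ctrl-C does the trick):

RawCap.exe 1 dumpfile.pcap

Then open the dump, either from File > Open or from the command line if Wireshark is on your path:

wireshark dumpfile.pcap

A display filter such as tcp.port == 8080 will narrow things down to just the traffic you care about.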


Where does a failure manifest itself first

A network monitoring tool periodically makes a request to a system end point and records the result in a database of some kind.

Whether the polling interval is every few seconds, one minute, ten minutes or longer, there is an awful lot of time when the network monitor has nothing meaningful to say about the state of the end point.

The network monitor is unlikely to be the first system to spot a problem. If the network monitor won’t be the first to spot a problem, what will?

In our systems, the first place a problem will manifest itself is in a log of some kind, be that a text-based log or something like the Windows event log.

If your website returns a 5XX status code, your log file will record the fault long before your network monitor makes a request that returns a 5XX code.

What sort of time difference am I talking about here?

It depends a lot on how often you poll the end point. On average, the monitor won't see a fault until about half a polling interval after it happens, and in the worst case not until a whole interval has passed. If you are polling every minute or faster, the difference is likely to be pretty insignificant. If you are polling every five minutes, the difference could potentially be significant.

But it isn't just that you will be informed more quickly by going to the source of the failure; you will also get better information.

Monitoring an end point will only tell you so much: whether it is working, the response time and maybe, if you're lucky, a response code.

In the case of our logs, when a 5XX status code is returned we'll probably get the full exception message plus a stack trace. Altogether a lot more useful.
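To make that concrete, here's a minimal sketch in Python of the sort of log watcher I have in mind. The log path and the assumption of a common-log-format text file are purely illustrative, and the print would be replaced by whatever raises an alert in your environment; the point is simply that the primary source can raise the alarm the moment the 5XX is written, rather than whenever the next poll happens to come around.

import time

LOG_PATH = "access.log"  # illustrative path; point it at your web server log


def follow(path):
    # Yield lines as they are appended to the file, tail -f style.
    with open(path, "r") as handle:
        handle.seek(0, 2)  # jump to the end so we only see new entries
        while True:
            line = handle.readline()
            if not line:
                time.sleep(0.5)
                continue
            yield line


def status_code(line):
    # Pull the status code out of a common-log-format entry,
    # e.g. ... "GET /foo HTTP/1.1" 503 1234
    parts = line.split('"')
    fields = parts[2].split() if len(parts) > 2 else []
    return fields[0] if fields else None


for entry in follow(LOG_PATH):
    code = status_code(entry)
    if code and code.startswith("5"):
        # Swap the print for whatever raises an alert in your setup.
        print("Server error logged:", entry.strip())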

tl;dr monitor your primary sources, don’t rely on secondary sources.


The Last of the Savages

Ray Kurzweil has a history of making accurate forecasts about the future. One of them is that the 3D printer is coming and that the current ones are but a small hint of what is to come.

That got me thinking. We are quite possibly the last generation to have a direct connection between raw materials and the end products made from them.

Imagine your far distant relatives ordering a steak from their Acme Wondermatic 5000 3D printer.

The steak itself would be made of animal protein but would not have been grown on an animal. I'm not saying that it is morally wrong to slaughter an animal for meat, although I do understand why some people think so. What I am saying is that people who are completely divorced from a world of producing things the messy way may think the way we do it today is pretty savage.

A Stone Age man looking at a modern supermarket would 'get' it. He had the inconvenience of actually hunting and gathering, and we don't. But the food itself, and where it came from, is at least recognisably the same.

If the cave man wanted meat, he hunted an animal and slaughtered it. If I want a steak, somewhere down the supply chain, an animal is slaughtered.

With the advent of the 3D printer, the connection between the means of production and the end product is about to be broken.

I can’t say that upsets me at all.


Oodles of disk space, just not in the right place

Over the last few months we've been having some email troubles. At first, email would start to back up with our backup email provider and then almost immediately begin to flow normally again. More recently, the periods when email was backing up came around faster and lasted longer.

Intermittent problems are a nightmare to diagnose. Was the problem our broadband, the router, our network, the email server or, the old favourite, DNS?

I wrongly thought it was DNS because, well, it almost always is. It turns out the problem was a good deal simpler than that.

The email server has a largish disk: over a terabyte, split into a system partition (drive C:) and a data partition (drive D:).

With Microsoft Exchange, if the available disk space on the drive where the hub transport queue is stored drops below 4GB, Exchange throttles incoming mail traffic.

Unfortunately, the default location for the hub transport queue is the same partition Exchange is installed to. And, if you are using the Exchange bundled with Small Business Server (SBS), that will be the system partition.

Fixing the problem was very simple: tell Exchange to place the queue on the data partition, restart the transport service, and everything works fine.
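For anyone in the same boat: on our SBS-era Exchange the queue location is controlled by a couple of appSettings keys in EdgeTransport.exe.config, which lives in the Exchange Bin folder. The paths below are just examples based on a layout like ours, so treat this as a sketch and adjust to suit your own drive layout and Exchange version:

<add key="QueueDatabasePath" value="D:\ExchangeQueue" />
<add key="QueueDatabaseLoggingPath" value="D:\ExchangeQueue" />

Then restart the transport service, for example from PowerShell:

Restart-Service MSExchangeTransport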

As usual, fixing the problem is trivial once the exact cause has been identified.

The first takeaway is that I really should have been concentrating on the Exchange logs a lot sooner rather than assuming it was a connectivity problem. The logs would have told me that email was reaching the server and that the server was rejecting it. That would have ruled out connectivity as a root cause.

The second is my thankfulness for having secondary MX records for the domain. Backup email servers cost us around $25 per year and just saved us from dropping a single email. Thank you, Cloudfloor DNS.

No emails were harmed in the making of this blog post.