Programming in 1985
I read this morning that Bill Gates once said that no-one would ever need more than 640KB of memory in their computer. In later years he denied saying this.
It took me straight back to my second-ever programming job – working on a computer aided design (CAD) program with a ground-breaking graphical interface. A new-fangled mouse moved a “cursor” around the screen, which you used to create engineering and architectural drawings.
I started the job programming on an Apple IIe computer, but the company was in the process of moving over to the new IBM PCs and their clones. I learned how to program in C in this new environment by sitting next to Bill for a month or two and watching what he did. Bill was a great guy and once invited my wife and me over for dinner, where we had spaghetti bolognese with sprouts.
After I’d learned C they didn’t trust me with the main CAD program straight away, so I worked on the setup program and an external program for translating the main program into different languages. This seemed to go okay, so I was allowed to start working on the main program.
Our program was large, and the 512KB of memory that the PCs and clones came with had to be upgraded to 640KB by adding an extra 128KB. This cost around £100 at the time, which is about £300 in today’s money.
The sequence for writing or changing code started with loading it into WordStar, a text editing program usually used for simple word processing. It was monochrome and had no mouse, so selecting a block of text meant remembering sequences of control keys: Ctrl-K B to mark the start of the block, arrow keys to move the cursor to the end, then Ctrl-K K to mark the end. You then moved the cursor to where you wanted the block and pressed Ctrl-K C to copy it there, or Ctrl-K V to move it instead.
Unlike all of today’s programming environments, there was absolutely no feedback in WordStar about whether you had any errors in the code. A missing semi-colon or quotation mark went undetected until it was time to compile. This made programming an intense process that demanded a high level of concentration, because the smallest mistake cost time.
Windows hadn’t really arrived yet, so it was not possible to run more than one application at a time. Once the addition or bug fix was done you would save the file (Ctrl-K X to save and exit, since you ask) and then run the compiler, which I think was called Wizard. This was on the command line, so you would type “Wizard amazingcad” (this isn’t really what it was called) and off it would go, chundering away for between 5 and 10 minutes. If you had missed that semi-colon or quotation mark it would tell you at some point, and you would be back into WordStar and round the loop again.
If everything compiled correctly you would then run a linker, which joined your code up with the various libraries needed to make a complete executable program. The linker was called Plink. Linking didn’t take as long as compilation, so within a few minutes you would have an executable you could run to check whether your change had worked.
We also had absolutely no debugger. If you needed to find out what was going on at some stage in your code, you had to add printf statements to write information out to a file. This really slowed everything down, because adding one more piece of information to a printf meant going around the whole compile-and-link cycle again.
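For what it’s worth, the technique looked something like this minimal sketch (the file name, the helper function and the value being traced are all invented for illustration):

    #include <stdio.h>

    /* Append one traced value to a log file. Opening and closing on
       every call was deliberate: if the program crashed afterwards,
       the lines written so far were already safely on disk. */
    static void trace(const char *msg, int value)
    {
        FILE *log = fopen("debug.log", "a");
        if (log != NULL) {
            fprintf(log, "%s = %d\n", msg, value);
            fclose(log);
        }
    }

    int main(void)
    {
        int cursor_x = 42;  /* pretend this came from the drawing code */
        trace("cursor_x before redraw", cursor_x);
        return 0;
    }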
Space in the program was very tight. All the text was kept as short as possible and stored in a separate part of the program so that, where possible, the same string could be reused in several places. This also allowed it to be translated into other languages fairly easily.
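A sketch of the idea, with invented names: every piece of user-visible text lives in one table, code refers to it only by index, and a translated build simply swaps the table.

    #include <stdio.h>

    enum { MSG_WELCOME, MSG_SAVED, MSG_COUNT };

    /* One shared table of user-visible text; a translation replaces
       this table and nothing else. */
    static const char *messages[MSG_COUNT] = {
        "Welcome to AmazingCAD",   /* MSG_WELCOME */
        "Drawing saved"            /* MSG_SAVED   */
    };

    int main(void)
    {
        printf("%s\n", messages[MSG_WELCOME]);
        return 0;
    }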
Data that was common to the whole program, like the text, was held on the heap. Data that was passed between functions went on the stack. Both areas of memory had quite small limits, and it wasn’t hard to blow the stack by passing too much information into functions or nesting calls too deeply. There were no objects, only simple data types like integers and fixed-length strings, plus structures that bundled a few simple types together in a lump. We would pass pointers to these between functions, as passing the actual data would quickly blow the stack. To keep on/off settings small, they were held in the individual bits of an integer and read back with AND logic.
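Both tricks are easy to show in a few lines of C (the flag names and the structure here are invented for illustration):

    #include <stdio.h>

    #define GRID_ON  0x0001   /* bit 0 */
    #define SNAP_ON  0x0002   /* bit 1 */

    /* A typical fixed-layout structure of the period. */
    struct point {
        int x;
        int y;
        unsigned int flags;   /* on/off settings packed into bits */
    };

    /* Passing a pointer puts a couple of bytes on the stack
       instead of the whole structure. */
    static void toggle_grid(struct point *p)
    {
        p->flags ^= GRID_ON;
    }

    int main(void)
    {
        struct point p = { 10, 20, SNAP_ON };

        toggle_grid(&p);
        if (p.flags & GRID_ON)   /* AND logic reads the bit back */
            printf("grid on at %d,%d\n", p.x, p.y);
        return 0;
    }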
The code took up most of the available space, so we had to develop our own virtual memory system for the CAD drawing data. This kept infrequently used data on the 10MB hard disk and pulled it back into memory on demand. It meant that before accessing any drawing data you had to remember to call a function called checkvm(), and then to get the data somewhere useful you would call copymem(). If you missed a checkvm(), random data would sometimes get copied into the drawing, with spectacular results. Eventually this got so bad that Dave, the chief programmer, had the brilliant idea of making copymem() call checkvm() before it did anything else.
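The real code is long gone, so the signatures and page size below are pure speculation, but Dave’s fix amounted to something like this:

    #include <string.h>

    #define PAGE_SIZE 1024

    static char page_buffer[PAGE_SIZE];   /* drawing data paged in from disk */

    /* Make sure the page holding this record is in memory
       (the disk read is stubbed out here). */
    static void checkvm(int record)
    {
        (void)record;   /* real version fetched the page from the hard disk */
    }

    /* Dave's fix: copymem() calls checkvm() itself, so a forgotten
       checkvm() can no longer copy random data into the drawing. */
    static void copymem(void *dest, int record, int offset, int len)
    {
        checkvm(record);
        memcpy(dest, page_buffer + offset, (size_t)len);
    }

    int main(void)
    {
        char header[16];
        copymem(header, 0, 0, (int)sizeof header);
        return 0;
    }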
As the program grew we also started to use a technique called overlays, provided by the Plink linker. This was a virtual memory system for code, allowing parts of it to be offloaded to disk and brought back into memory as required. As part of the Plink configuration you would define which functions went in which overlay, and each overlay was then pulled in automatically when you called its code. If you got it wrong and a loop in one overlay repeatedly called code in another, the program would crawl along, the disk light flashing on and off on each iteration.
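The function names and the overlay split below are invented, but the trap looked like this: a loop that crosses an overlay boundary forces a disk load on every pass, and hoisting the call out of the loop fixes it.

    /* Imagine calc_scale() was placed in one overlay and
       draw_segment() in another. */
    static long calc_scale(void)
    {
        return 100;
    }

    static void draw_segment(int i, long scale)
    {
        (void)i;
        (void)scale;
    }

    int main(void)
    {
        int i;
        long scale;

        /* Bad: every iteration swaps one overlay out and the other in. */
        for (i = 0; i < 1000; i++)
            draw_segment(i, calc_scale());

        /* Better: cross the overlay boundary once, outside the loop. */
        scale = calc_scale();
        for (i = 0; i < 1000; i++)
            draw_segment(i, scale);

        return 0;
    }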
We also started using expanded memory, a system where memory above 640KB could be used in swappable 16KB blocks. Again, you had to be really careful not to swap them too often, as there was a time overhead on each swap.
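From memory, the expanded memory (LIM EMS) interface sat on interrupt 67h. This sketch assumes a 16-bit DOS compiler such as Turbo C (int86 and MK_FP come from its <dos.h>), and the function numbers are as I recall them rather than gospel:

    #include <dos.h>

    int main(void)
    {
        union REGS r;
        unsigned frame_seg, handle;
        char far *page;

        r.h.ah = 0x41;           /* get the segment of the 64KB page frame */
        int86(0x67, &r, &r);
        frame_seg = r.x.bx;

        r.h.ah = 0x43;           /* allocate four 16KB logical pages */
        r.x.bx = 4;
        int86(0x67, &r, &r);
        handle = r.x.dx;

        r.h.ah = 0x44;           /* map logical page 0 at physical page 0 */
        r.h.al = 0;
        r.x.bx = 0;
        r.x.dx = handle;
        int86(0x67, &r, &r);     /* every remap costs time, so swap sparingly */

        page = (char far *)MK_FP(frame_seg, 0);
        page[0] = 'A';           /* the 16KB block now reads like ordinary memory */

        r.h.ah = 0x45;           /* release the handle */
        r.x.dx = handle;
        int86(0x67, &r, &r);
        return 0;
    }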
Running a fully featured program with a graphical interface in 640KB of memory was a struggle. The computer I’m using today has 16GB of memory – around 26,000 times what I had to work with back then.