Friday, June 20, 2008

The Relentless March of the MicroChip

I have just returned from the inaugural Kilburn Lecture, which concluded a day celebrating 60 years since the first stored-program computer (the Manchester 'baby'). The excellent lecture, given by Professor Steve Furber, offered a historical perspective on the major innovations that have originated from Manchester University over the last 60 years as computing technology has developed, together with a personal view of the developments in which he has been involved.

Over the last 60 years, the technology has changed from the vacuum tubes used in the Manchester 'baby' and the Ferranti Mk1 (the first commercial computer), through the transistor, which, although invented at Bell Labs in 1947, wasn't adopted for computers until the 1960s when the Atlas computer, the fastest machine of its day, was developed, to the integrated circuits used in MU5 (a forerunner of the ICL 2900 series), the Dataflow machine and the AMULET systems. Atlas also introduced the concept of the single-level store, more commonly referred to as virtual memory; one of the attendees at the lecture, Professor Dai Edwards, remarked that he still receives payments for this invention. Each decade has seen the level of complexity grow as the number of transistors has increased, a trend which shows no sign of slowing down.

Steve also provided a personal perspective on his involvement in microprocessor design, starting with the BBC Micro of 1982 whilst at Acorn. While the BBC Micro was primarily built from off-the-shelf components, including the 6502 microprocessor, Steve designed two simple bespoke chips which helped reduce the chip count by 40. The success of the BBC Micro and its design led to Acorn developing further bespoke chips, resulting in the Acorn RISC Machine (ARM) being released in 1985. This is probably one of the most significant microprocessor developments of the last 25 years, as derivatives of this processor have become a key component in mobile phone technology. The ARM had a simple design, was small, had low power consumption and was really a System on a Chip (SoC). By 2008, over 10 billion ARM processors had been delivered, making it the most numerous processor in the world. When Steve moved to Manchester University in 1990, he continued to use ARM technology in the AMULET system. Over various generations of AMULET, the size of the chips didn't change, but the chips became more complex as the transistor spacing reduced from 1 micron in 1994 to 0.18 microns in 2003, which is smaller than the wavelength of visible light.

There was an interesting comparison of the changing energy requirements over the last 60 years. In 1948, the Manchester 'baby' required 3.5 kW of power to execute 700 instructions per second, which represents 5 joules per instruction. Contrast this with the ARM968 processor of 2008, which requires 20 mW of power and can execute 200 million instructions per second, which represents 0.0000000001 joules (0.1 nanojoules) per instruction. This is a 50,000,000,000-fold improvement! There are few examples of such a dramatic improvement in energy efficiency. Steve did give a warning, though: more efficient computing tends to encourage more computing, so overall power requirements can still increase.
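
The arithmetic behind that comparison is easy to check. A quick sketch in Python, using only the figures quoted in the lecture:

    # Energy per instruction = power (watts) / instruction rate (instructions per second)
    baby_j_per_insn = 3500.0 / 700        # Manchester 'baby', 1948: 5 J per instruction
    arm968_j_per_insn = 0.020 / 200e6     # ARM968, 2008: 1e-10 J per instruction

    improvement = baby_j_per_insn / arm968_j_per_insn
    print(f"{improvement:.0e}")           # 5e+10, i.e. a 50,000,000,000-fold improvement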

No lecture on the development of processors can ignore Gordon Moore's seminal article (colloquially known as Moore's Law), and this lecture was no different. The original paper, published in 1965, predicted the exponential increase in the number of transistors per chip only until 1975. However, the prediction still holds today; it has become something of a self-fulfilling prophecy and a key input into planning next-generation microprocessor designs (see the International Technology Roadmap for Semiconductors). This progress has been achieved by shrinking transistor sizes, which has brought cheaper components and reduced power consumption. Steve noted that the current generation of microSD cards, which provide 12 GB of flash memory and contain over 50 million transistors in the size of a fingernail, demonstrate how much progress has been made since the original use of transistors in computers in systems such as Atlas. However, exponential progress cannot go on indefinitely, and there are now some physical limits which will constrain it. As components have increased in complexity, their reliability and lifetime (in many cases less than 12 months for some items) have reduced, because the tolerances on such a great number of components are very fine. There is also a recognition that the cost of design needed to achieve these advances is becoming uneconomic, as it is increasing at 37% per year. Steve considers that Moore's Law may survive for another 5-10 years with current technology, but advances in alternative technologies will be needed for it to continue beyond this (and there are no signs of that happening at the moment).
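
To get a feel for that 37% figure (a back-of-the-envelope calculation of my own, not a number from the lecture), compound growth makes the design cost balloon very quickly:

    # Design cost growing at 37% per year, compounded
    growth = 1.37
    for years in (5, 10):
        print(f"after {years} years: {growth ** years:.1f}x today's cost")
    # after 5 years: 4.8x today's cost
    # after 10 years: 23.3x today's cost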

The current generation of microprocessors, dual-core and multi-core, have tried to address these constraints by putting more processor cores on each chip rather than offering a single faster processor. Moore's Law can also apply to multi-core systems; however, there is an increasing problem with how application software uses such processors. As single processors have increased in performance, there has been little change to the way software is developed. With the advance of multi-core technology, however, general-purpose parallelism is required in order to maximise the available processing capacity. This is one of the 'holy grails' of computing, and it is becoming an increasingly important problem to solve. The use of such multi-core processors also needs to be carefully considered: it would appear preferable to have lots of cheap (and simple) cores rather than a small number of faster (and complex) cores, due to the significant differences in power efficiency.
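
The power-efficiency argument for many simple cores can be illustrated with a very rough model (an illustrative assumption of mine, not figures from the lecture): dynamic power scales roughly with the cube of clock frequency, because the supply voltage has to rise roughly in line with frequency (P ~ C * V^2 * f). A quick sketch:

    # Very rough model: relative dynamic power ~ cores * frequency^3
    # (illustrative assumption only; real scaling depends on the process and voltage)
    def relative_power(freq_ghz, cores=1):
        return cores * freq_ghz ** 3

    # One complex core at 2 GHz vs four simple cores at 0.5 GHz gives the same
    # ~2 GHz of aggregate throughput, provided the workload parallelises well.
    print(relative_power(2.0, cores=1))   # 8.0
    print(relative_power(0.5, cores=4))   # 0.5, i.e. roughly 16x less power

Which is exactly why the parallelism problem matters: the saving only materialises if the software can actually keep all those simple cores busy.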

Steve finished by giving us a glimpse into the future as he showed how microprocessor design is converging with biology. The human brain has many attributes which are similar to the requirements of a complex network of computers: it is tolerant of component failure (e.g. the loss of a neuron), adaptive, massively parallel, well connected and power efficient. Steve's current project, SpiNNaker, is trying to build a system which can perform real-time simulation of biological neural systems mapped onto a computer architecture. The project is using thousands of ARM processors and is trying to meet one of the UKCRC's Grand Challenges, in which the architecture of the mind and brain is modelled by computer.

Friday, June 6, 2008

Assessing code quality

How do you assess a software module's quality? It is a question I have been struggling with for some time as I try to perform a peer review of a large code base.

Over time, a software module evolves from its intended form into something less than beautiful, as bugs are discovered (and fixed) and enhancements beyond the original requirements are implemented. This is particularly true for code developed on a multi-person project, where personnel change and a module is often modified by different engineers. Although I adhere to the rule that changes should follow the original author's code structure and style (and how many people update the comment at the top of the file to record that they have become one of the authors? This assumes the information isn't added automatically by the configuration management system.), it can become increasingly difficult to make changes.

So what is the best way to assess code quality throughout its development?
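
One crude starting point (only a sketch, and certainly not a complete answer) is to track a few objective measures per file, such as size and comment density, so that at least the trend over a module's life is visible even if 'quality' itself isn't really captured. For example, in Python (the directory name, file extensions and comment markers below are illustrative assumptions):

    # Rough sketch: gather simple per-file metrics across a code base.
    import os

    def file_metrics(path):
        code = comments = blank = 0
        with open(path, errors="ignore") as f:
            for line in f:
                stripped = line.strip()
                if not stripped:
                    blank += 1
                elif stripped.startswith(("//", "/*", "*", "#")):
                    comments += 1
                else:
                    code += 1
        return code, comments, blank

    for root, _, files in os.walk("src"):
        for name in files:
            if name.endswith((".c", ".h", ".cpp", ".py")):
                path = os.path.join(root, name)
                code, comments, blank = file_metrics(path)
                ratio = comments / code if code else 0.0
                print(f"{path}: {code} code lines, comment ratio {ratio:.2f}")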