Wednesday, November 27, 2013

Programming a million core machine

I have just attended an excellent talk by Steve Furber, Professor of Computer Engineering at the University of Manchester, on the challenges of programming a million-core machine as part of the SpiNNaker project.

The SpiNNaker project has been in existence for around 15 years and has been attempting to answer two fundamental questions:
  • How does the brain do what it does? Can massively parallel computing accelerate our understanding of the brain?
  • How can our (increasing) understanding of the brain help us create more efficient, parallel and fault-tolerant computation?
The comparison of a parallel computer with a brain is not accidental, since brains share many of the required attributes: they are massively parallel, richly interconnected, remarkably power-efficient, rely on low-speed communications, are adaptable and fault-tolerant, and are capable of learning autonomously. The challenge for computing as Moore's law progresses is that there will eventually come a time when further increases in speed are not possible; and as processing speed has increased, energy efficiency has become an increasingly important characteristic to address. The future is therefore parallel, but the approach to handling this is far from clear. The SpiNNaker project was established to attempt to model a brain (around 1% of a human brain) using approximately 1 million mobile phone chips with efficient asynchronous interconnections, whilst also examining approaches to developing efficient parallel applications.

The project is built on 3 core principles:
  • The topology is virtualised and is as generic as possible. The physical and logical connectivity are decoupled.
  • There is no global synchronisation between the processing elements.
  • Energy frugality, such that the cost of a processor is zero (removing the need for load balancing) and the energy usage of each processor is minimised.
[As an aside, energy-efficient computing is a growing interest: for many systems, the energy required to complete a computation is now the key factor in operational cost.]

The SpiNNaker project has designed a node which contains two chips: one chip is used for processing and consists of 18 ARM processors (1 hosts the operating system, 16 are used for application execution and 1 is spare); the other chip is memory (SDRAM). The nodes are connected in a 2D mesh for simplicity and cost. 48 nodes are assembled onto a PCB, so 864 processors are available per board. The processors only support integer computation. The major innovation in the design is the interconnectivity within a node and between nodes on a board: a simple packet-switched network is used to send very small packets around, with each node having a router which efficiently forwards packets either within the node or to a neighbouring node. Ultimately, 24 PCBs are housed within a single 19” rack, and 5 racks within a cabinet, so that each cabinet has 120 PCBs, which equates to 5760 nodes or 103680 processors. 10 cabinets would therefore provide over 1 million processors and would require around 10 kW. A host machine (running Linux) is connected via Ethernet to the cabinet (and optionally to each board).
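The hardware numbers above multiply out consistently; as a quick sanity check (my arithmetic, not code from the project):

```python
# Checking the machine arithmetic described above (figures from the talk).
processors_per_node = 18
nodes_per_board = 48
boards_per_rack = 24
racks_per_cabinet = 5

processors_per_board = nodes_per_board * processors_per_node
assert processors_per_board == 864

boards_per_cabinet = boards_per_rack * racks_per_cabinet
assert boards_per_cabinet == 120

nodes_per_cabinet = boards_per_cabinet * nodes_per_board
assert nodes_per_cabinet == 5760

processors_per_cabinet = nodes_per_cabinet * processors_per_node
assert processors_per_cabinet == 103680

# Ten cabinets gives 1,036,800 processors: the "million core" machine.
assert 10 * processors_per_cabinet > 1_000_000
```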

Networking (and its efficiency) is the key challenge in emulating neurons. SpiNNaker's approach is to capture a simple spike (representing a neuron communication) within a small packet (40 bits) and then multicast this data around; each neuron is allocated a unique identifier, giving a theoretical limit of 4 billion neurons which can be modelled. By the use of a 3-stage associative memory holding some simple routing information, the destination of each event can be determined. If the table does not contain an entry, the packet is simply passed through to the next router. This approach is ideally suited to a static network or a (very) slowly changing network. It struck me that this simple approach could be very useful for efficient communication across the internet, and may be useful in meeting the challenge of the 'Internet of Things'.
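The table-miss behaviour described above can be sketched as follows. This is a minimal illustrative sketch in Python, not SpiNNaker's actual router (which is hardware); the class and link names are hypothetical:

```python
# Illustrative sketch of the multicast routing idea: a router holds entries
# only for sources it must treat specially; packets from unknown sources take
# a "default route" straight through to the next router.

class Router:
    def __init__(self, default_link):
        self.table = {}            # source neuron id -> set of output links
        self.default_link = default_link

    def add_entry(self, source_id, links):
        self.table[source_id] = set(links)

    def route(self, source_id):
        """Return the output links for a spike packet from source_id."""
        if source_id in self.table:
            return self.table[source_id]
        # Table miss: pass the packet straight through (default route).
        return {self.default_link}

r = Router(default_link="east")
r.add_entry(42, ["north", "local_core_3"])
assert r.route(42) == {"north", "local_core_3"}   # known source: multicast
assert r.route(7) == {"east"}                     # unknown source: pass through
```

This is why the scheme suits static or slowly changing networks: only the exceptional routes consume table entries, and everything else flows through untouched.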

Developing applications for SpiNNaker requires that the problem is split into two parts: one part handles the connectivity graph between nodes; the other handles the conventional computing cycle of compile/link/deploy. Whilst the performance in terms of throughput is impressive (250 Gbps for 1024 links), it is the packet rate which is exceptional, at over 10 billion packets/second.

The programming approach is to use an event-driven programming paradigm which discourages single-threaded execution. Each node runs a single application with the applications (written in C) communicating via an API to SARK (the SpiNNaker Application Runtime Kernel) which is hosted on the processor. The event model effectively maps to interrupt handlers on the processor with 3 key events handled by each application:
  • A new packet (highest priority)
  • A (DMA) memory transfer
  • A timer event (typically 1 millisecond)
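The event model above can be sketched as a priority dispatcher. This is a hypothetical illustration of the model only, not the SARK API (which is in C and driven by real interrupts); all names here are my own:

```python
import heapq

# Sketch of the three-event model: each event type has a fixed priority and
# the highest-priority pending event is always dispatched first, mirroring
# how the events map onto interrupt priorities on the processor.

PRIORITY = {"packet": 0, "dma_done": 1, "timer": 2}  # 0 = highest

class EventLoop:
    def __init__(self):
        self.queue = []
        self.handlers = {}
        self.counter = 0  # tie-breaker keeps FIFO order within a priority

    def on(self, event_type, handler):
        self.handlers[event_type] = handler

    def post(self, event_type, payload=None):
        heapq.heappush(self.queue,
                       (PRIORITY[event_type], self.counter, event_type, payload))
        self.counter += 1

    def run(self):
        while self.queue:
            _, _, event_type, payload = heapq.heappop(self.queue)
            self.handlers[event_type](payload)

log = []
loop = EventLoop()
loop.on("packet", lambda p: log.append(("packet", p)))
loop.on("dma_done", lambda p: log.append(("dma_done", p)))
loop.on("timer", lambda p: log.append(("timer", p)))

loop.post("timer", 1)
loop.post("packet", "spike from neuron 42")
loop.run()
# The packet is handled before the timer despite being posted later.
assert [e for e, _ in log] == ["packet", "timer"]
```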
As most applications for SpiNNaker have been written to model the brain, most have been written in PyNN (a Python neural-network description language) which is then translated into code that can be hosted by SpiNNaker. The efficiency of the interconnections means that brain simulations can now be executed in real-time, a significant improvement over conventional supercomputing.

In conclusion, it is clear that whilst the focus has been on addressing the 'science' challenges, there are clearly insights into future computing in terms of improved inter-processor connectivity, improved energy utilisation and a flexible platform. Whilst commercial exploitation has not been a major driving force for this project, I am confident that some of the approaches and ideas will find their way into mainstream computing, in much the same way that 50 years ago Manchester developed the paging algorithm which is now commonplace in all computing platforms.

The slides are available here.

Sunday, April 7, 2013

A Byte of PI

A national science week activity to introduce primary pupils to programming

In common with many of my colleagues, I bought a Raspberry Pi last summer to find out what all the fuss was about. What could a £25 computer really do? I was most impressed, as it was a real computer running a real operating system which could do the same things as computers costing 10 times more. I had crazy ideas for projects which could use a Raspberry Pi, but somehow these never came to anything and the Pi was left in a drawer, which appears to be what has happened to many Pis after that initial burst of enthusiasm. That is a real shame, because the Raspberry Pi could just be the catalyst to get our children programming again. I often hark back to the heady days of the early 1980s when, with the advent of the BBC Micro, ZX Spectrum and other machines connected to the family television, there was a huge increase in children (and adults) learning to program and creating some great games and applications. I am sure these first steps into computing were the stimulus for many to consider a career in computing, and may be just the reason we had the dot-com boom of the late 1990s.

Then something magical occurred. MOSI in Manchester organised an event in mid-January for STEM Ambassadors working in the computing industry. I have been a STEM Ambassador for around 18 months and have always felt that there weren't many ambassadors with a strong computing background. However, at this event there were over 30 like-minded, enthusiastic ambassadors who all felt that the Raspberry Pi was special and was ideal to kick-start programming again in schools, helping move the curriculum forward from ICT (using computers) to computing (making computers do something useful).

And then it struck me: if we have our own Pis, could we pool them to create an event which could go round schools to try and stimulate interest in programming? Clearly one Pi per school wasn't going to be enough; we needed lots of Pis so that we could immerse lots of pupils at the same time. And so a Byte of PI was born. I met up with another STEM Ambassador with whom I had run an event for Year 5 pupils 12 months earlier. Although we were both computing professionals, our previous STEM activity for Year 5 was more general science (these ambassadors have such versatility!) and we were initially going to repeat the activity this year. However, we both agreed that it was worth having a go at creating a Raspberry Pi-themed science event.

But where do we start? It was clear that the activity needed to be engaging, stimulating and fun, but it also needed a clear goal, such that at the end of the session our Year 5 pupils could say 'I have learnt something today – I can program a computer'. The initial thought was to have 10 Pis with 2 pupils per Pi, each pair trying to create a program, probably using Scratch, a brilliant visual programming language. We also thought about having some control experiments, e.g. a simple traffic light controller. As the weeks progressed (we had around 6 weeks to make the event a reality), it dawned on me that I would have to first write a program on the Pi, then de-construct it in such a way that a clear set of instructions suitable for a Year 5 audience could be created, and then test it.

Several iterations later, I had a program and a set of instructions ready for testing. My youngest son, William, was keen to help. He had some experience of Scratch, which was useful as he discovered some new features for me to use. Testing demonstrated that the quiz would take between 30 and 40 minutes to complete, which was ideal for a 1-hour session. I also shared the materials with my fellow STEM Ambassadors so that they were comfortable helping at the session.
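The workshop program itself was written in Scratch, so there is no source code to show; as a rough textual equivalent, the maths quiz reduces to the loop below (a hypothetical Python sketch, not the actual workshop material):

```python
import random

def make_question():
    """Generate one random addition question and its answer."""
    a, b = random.randint(1, 12), random.randint(1, 12)
    return f"What is {a} + {b}?", a + b

def run_quiz(questions, ask):
    """Ask each question via `ask(text) -> int` and return the score."""
    score = 0
    for text, answer in questions:
        if ask(text) == answer:
            score += 1
    return score

# A scripted "pupil" instead of keyboard input, so the sketch is testable.
text, answer = make_question()
assert text.startswith("What is") and 2 <= answer <= 24

qs = [("What is 2 + 2?", 4), ("What is 3 + 5?", 8)]
answers = iter([4, 7])
assert run_quiz(qs, lambda t: next(answers)) == 1  # one right, one wrong
```

The same ask/compare/score structure is what the Year 5 instructions walked the pupils through, one Scratch block at a time.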

The Byte of PI team ready for Action...
The event was 'launched' at a Computing At School NW conference in March, where we explained how Scratch could be used as a catalyst for getting more pupils into programming.

So what is a Byte of PI? A Byte of PI session is a 1-hour session consisting of 4 parts:
  • A brief introduction to introduce what a program is and why we need programs (remember computers are stupid; they need people to tell them precisely what to do!)
  • A hands-on session developing an application (a maths quiz). There are a number of extensions available if the initial application is completed quickly
  • An example demonstration of what can be achieved with Scratch, including animation, multiple sprites and sound. (We used a recent homework assignment from Thomas, my eldest son, for this)
  • A brief round-up reviewing what was learnt and a video promoting computer science and why it is cool to code.
It was clearly a success and hopefully the event can be repeated elsewhere.

So what have I learnt:
  • It is always great fun working with primary children (but I think most STEM Ambassadors would say that!)
  • The IT infrastructure in many schools isn't readily Pi-compatible, but industry can help by loaning out DVI or HDMI monitors.
  • Scratch is a great language for getting children developing applications quickly, yet it contains many of the features which 'professional' languages contain. This means the basics of good programming technique can be learnt before moving on to more advanced languages.
  • An event like this needs lots of hands-on support, not necessarily from ambassadors with a computing background, because there will be lots of questions.
The next steps:
  • Develop more Byte of PI exercises, introducing more concepts and possibly different languages. The aim is to demonstrate that everyone can program, and then get pupils engaged with activities such as CodeClub and STEM clubs.
  • Develop a 'Slice of PI' which would be aimed at teachers with a view to trying to provide some more background behind the activities and in particular giving them the confidence to teach it within their schools.
Special thanks must go to Donna Johnson and Daniel O’Donnell at MOSI for helping us (particularly for providing extra Pis and monitor cables) and ensuring our enthusiasm never waned; Karen Crowther at AGSB for providing us with the excuse to run the event (and to have so much fun!); NMI for providing us with 6 Pis on condition that we did something with them to get children coding (I think we have achieved that); my fellow STEM Ambassadors Anthony, Lisa, Sam and Amin; and finally my two sons, Thomas (aged 12) and William (aged 10), for testing my program, teaching me some of the finer details of Scratch and being excellent tutors during the workshop.

Tuesday, February 19, 2013

Technologists are good for business

This evening's BCS/IET Turing Lecture at Manchester University, given by Suranga Chandratillake, founder and former chief strategy officer of Blinkx, was an interesting talk linking the technical excellence of an engineer with the needs of an entrepreneur. His premise was that his undergraduate course in Computer Science at Cambridge University had provided him with many of the skills he needed for a successful business career; it was just that he wasn't aware he had them.

Suranga first compared the stages that an inventor and an entrepreneur go through in the evolution of an idea. The inventor goes from feeling that he wants to challenge the world, to the flash of inspiration, and on to the stage where the invention is tangible. Compare this to an entrepreneur, who starts by thinking 'I need money' (because ideas are not enough), moves to the stage where the product or service is a saleable item, and arrives at the point where he is making a profit. The UK is very good at educating and nurturing many great technologists to create and innovate; unfortunately it is not always good at exploiting these ideas, mainly because many of the skills that allow an entrepreneur to exploit technical ideas are not well developed.

He described how he was offered the opportunity to be the founding CEO of Blinkx, a startup spun out from Autonomy. He was (very!) reluctant to take on this role because, as essentially a technologist, he felt he didn't have the necessary skills to fulfil it. He struck a deal with Mike Lynch, CEO of Autonomy, that if he needed help with business functions such as finance, HR, sales and marketing, Mike would help him out. What amazed me was that the skills he needed for finance, marketing and sales had all been taught on his undergraduate course; it was just that they hadn't been expressed in business terms. For example, determining the most effective marketing approach to use (e.g. PR, web-page banner ads or search adverts) requires the application of some simple probabilistic modelling, a 2nd-year course. I felt he stretched the analogy a bit far when he compared an HR organisation to a system architecture; however, I think many aspects of HR (particularly recruitment) can be covered in parts of undergraduate courses, particularly with the increasing amount of team-working forming part of the curriculum.
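To illustrate the kind of simple probabilistic modelling he meant (with made-up numbers of my own, not anything from the lecture), choosing a marketing channel is just comparing expected conversions per pound:

```python
# Hypothetical illustration of comparing marketing channels by expected
# conversions per pound spent. All figures below are invented for the example.

channels = {
    # channel: (cost per impression in £, probability an impression converts)
    "PR article":    (0.002, 0.0004),
    "banner ad":     (0.001, 0.0001),
    "search advert": (0.005, 0.0030),
}

def conversions_per_pound(cost, p_convert):
    impressions_per_pound = 1.0 / cost
    return impressions_per_pound * p_convert

ranked = sorted(channels,
                key=lambda c: conversions_per_pound(*channels[c]),
                reverse=True)
# With these made-up figures the search advert wins: 0.0030 / 0.005 = 0.6/£,
# against 0.2/£ for PR and 0.1/£ for banner ads.
assert ranked[0] == "search advert"
```

Nothing here is beyond a 2nd-year probability course, which was exactly his point.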

Suranga summarised that the attributes of a technologist (being quantitative, rigorous and analytical) had actually prepared him perfectly for business in a technical organisation. He stated that it is a fallacy that technologists do not understand business; it is just that they assume they don't have the skills. This is a mental block rather than a lack of ability.

I found the talk provided much food for thought. Clearly the business environment that Suranga operated in is not typical of many companies, but it was illuminating to see how he was able to relate it back to his undergraduate course. The opportunity to work in a small company with a unique technology (as at Blinkx) is clearly not going to be available to everyone. However, provided the opportunities are available, I am sure many more technologists should feel empowered to exploit technology to create viable and thriving businesses.

The BCS/IET Turing Lecture 2013: What they didn't teach me: building a technology company and taking it to market
Suranga Chandratillake
The IET Prestige Lecture Series 2013, Turing Lecture, Savoy Place, London, 18 February 2013

Thursday, February 16, 2012

Employing OSS as part of successful product

I recently attended a webinar given by the Olliance Group on the success factors for the use of OSS. Whilst the webinar concentrated on one particular organisation's experience, the following are my 10 points of note, which should apply to any product-based organisation using OSS.

  1. Innovation with OSS traditionally comes from the community or from vendors (where commercial success can be gained). However, there is increasing innovation emerging from customers, partners and academics, as collaboration increases and the wider benefits of OSS are recognised.
  2. OSS is increasingly being used in the non-differentiating aspects of products. A good example is the GENIVI alliance, which provides an open source in-vehicle infotainment toolkit for areas where manufacturers' products do not differentiate.
  3. Most product organisations have recognised that it is futile to prevent OSS being used within their products and have now focused on how best to harness the benefits and opportunities that OSS offers. It is essential that the consequences of redistribution are understood when OSS is included as a component within your product; this requires a good understanding of the licences, with a preference for components released under one of the permissive licences (Apache 2.0, MIT, BSD) over the copyleft (GPL family) licences. The use of some OSS components has helped customers, partners etc. in adopting and developing new products.
  4. Use of OSS components/products will often result in enhancements and bug fixes, which should be contributed back into the community. Internal developments may also benefit from being released into the community if they are non-differentiating, as this can re-energise the component and result in a better product.
  5. Use of OSS must not diminish customers' needs for reliable, quality and secure products. Whilst OSS may offer benefits in terms of reduced development timescales, not all OSS is good, and careful selection is required before a component can become a key part of a product. The selection process is key to future success: in addition to assessing the licence requirements and component maturity, there needs to be an assessment of functional fit to ensure that the OSS component is compatible with the overall product architecture and its intended use (e.g. embedded, linked, modified).
  6. Governance policies are required with regard to the use of, and contribution to, OSS components. All stakeholders must fully understand the approach (there are too many misconceptions about the use of OSS among senior management, so this needs to be carefully managed). Product owners and architects must be educated and informed about all OSS usage. A knowledge base of approved products (and versions) should be actively maintained to avoid a proliferation of different versions (of the same product) and similar products (providing equivalent functionality).
  7. Synchronising product release cycles with those of OSS components can be problematic, particularly if the OSS components are released frequently to address bug fixes or security fixes. It is recommended that product release plans are aligned with those of the OSS components (knowledge of the roadmap for each OSS component is therefore essential). As products often have long-term support requirements, there also needs to be some guarantee that the OSS components are compatible with those support requirements.
  8. Product standards may need to be harmonised across multiple components, particularly if a UI is involved. A consistent approach to security should also be adopted across both OSS and internal developments (particularly if SSO is used or required). OSS components must be actively monitored for vulnerabilities, and fixes applied appropriately.
  9. Support for each OSS component is important, particularly when long-term support is considered. Options include: do nothing (only viable if the OSS component is very mature and stable); develop skills internally; establish a maintenance activity through active engagement with the OSS community; or employ a 3rd-party support service.
  10. Use of an OSS component within a commercial product must ensure that the organisation's intellectual property is protected and that market discriminators remain.

Sunday, February 5, 2012

Windows 8 - Keep taking the tablets

So should I try the Windows 8 beta when it becomes available next month? That was the question following a very honest presentation about the forthcoming Windows 8 operating system given by Mike Halsey to the Manchester branch of the BCS. After a few days to think about it, I can't see many reasons for upgrading from my current Windows 7 setup (with various Linux distributions running as virtual machines and an Android smartphone). This is very different from when Windows 7 came out, which was a significant improvement over Windows Vista; I used the beta version on my main machine until the official release, when I upgraded all of my machines to Windows 7.

Whilst there are probably many significant improvements behind the scenes, including support for new and emerging technologies such as USB 3 and Thunderbolt, Windows 8's main evolution appears to be in the user interface, with the introduction of the new Metro interface. Mike had a pre-beta version of Windows 8 loaded onto a tablet, and having a quick play on it (start-up time from cold was most impressive), the user interface was very similar to both Android and iOS. However, one significant change was that the application icons (or tiles, as Microsoft calls them) could be different sizes and could also be live (showing, say, the weather or a stock price) without the application having to be launched. This first appeared on Windows Phone, but I still think it is a neat idea (although there are now similar products available for both Android and iOS).

2012 is clearly going to be an interesting year for tablet operating systems, with Google's Ice Cream Sandwich (aka Android 4), Apple's latest iOS and Microsoft's entry into the tablet space (Windows 8). There is clearly room for all of them, but it is now very clear that operating systems that cater for tablets must also work seamlessly with smartphones and other devices. With Microsoft being the last to release a tablet-based OS, they are clearly playing catch-up, and their success will depend on the quality and take-up of the apps in the recently announced Windows Store. Windows 8 is blatantly aimed at the consumer market, and is clearly trying to be a common platform across a variety of different platform types (desktops/laptops, smartphones, tablets and games consoles). This is a bold strategy which no one has yet mastered. It also promotes connected 'experiences' (with the cloud a key part of the strategy) and clearly expects a 'touch' interface to increasingly become the primary form of interaction. Having said that, I understand it is still possible to get to the good old DOS window, so the traditional user interface (the command line) can still be experienced.

Will Windows 8 be a success? I don't know, but I think Windows 9 (scheduled for late 2015) might be the better bet, as it will have the benefit of seeing how the integrated desktop/smartphone/tablet/games console world works. Microsoft are clearly betting on developing a platform which is common across a range of platform types, a laudable aim which will certainly deliver benefits in terms of product management (assuming it works!). However, I can't see any attraction for large corporates, many of which have yet to migrate from Windows XP. A big problem is in the application space, in which applications developed specifically for Windows 8 cannot be run on Windows 7 (or its predecessors). It was not clear to me whether existing Windows 7 (or earlier) applications can run on Windows 8; if not, this will be a huge mistake unless Windows 8 apps are priced at typical app prices (i.e. free or typically less than £1) rather than the several hundreds of pounds that Microsoft applications typically cost.

Will I download the Windows 8 beta when it is available? Maybe, but only out of curiosity, and it will be running on some old equipment as I don't see it as a replacement OS for my primary machine.

Sunday, January 1, 2012

TDD for embedded development

I recently attended an interesting talk on test driven development (TDD) for embedded development, given by Zühlke UK for the BCS SPA specialist group.

Robotshop Rover
The talk was the result of some experiments using an Arduino board, a Bluetooth interface and an Android tablet. The embedded platform was a Robotshop Rover with a number of sensors and motor controllers. The sensors were used to guide the robot along a track indicated by a solid black line; the motors were used to control the robot's direction and speed.

Although the standard Arduino development environment isn't a full IDE and is limited in its functionality, it does come with some good code examples. An alternative environment is Eclipse CDT with the AVR plugin added to handle the download of the Arduino image to the target platform. To support development using TDD, CppUTest was used as the test framework. CppUTest is recommended as a framework suitable for embedded development (see James Grenning's book, Test-Driven Development for Embedded C) and appeared to the presenters to be more effective than CppUnit. It was noted by members of the audience that few tools have good integration with continuous integration platforms such as Jenkins.

An overview of TDD was given, along with its application to an embedded target environment:

1/ TDD needs to be able to test on both the development and target environments. This requires that two projects are created: one with the Arduino as the target environment and another targeting an x86 environment.

2/ The cycle of test->code->refactor needs to be followed with the tests being chosen from a backlog. Code shouldn't be written unless it is to satisfy an existing test.

3/ The cycle should be choose test->write test->run test->fail! If the test doesn't fail (and the failure may simply be a compile or link error), then this normally indicates that the test has been incorrectly written.

4/ Limited design is required although normal good software engineering practice should still be adopted (no monolithic functions etc).

5/ Mock interfaces should be used to unit test sensor interfaces. This allows the logic to be tested and debugged first before loading on to the target.

6/ If a state machine is required, some design is essential before any test cases can be identified and written.
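Point 5 above is worth a concrete sketch. The talk used C++ with CppUTest; the following is an analogous illustration in Python, with a hypothetical `read_line` sensor interface and steering values of my own invention:

```python
from unittest.mock import Mock

# Mock the line sensor so the steering logic can be unit-tested on the
# development machine before going anywhere near the robot. The interface
# and values here are hypothetical, not from the talk.

def steering_correction(sensor):
    """Return -1 (steer left), 0 (straight) or 1 (steer right)."""
    left, right = sensor.read_line()   # True where a sensor sees the line
    if left and not right:
        return -1   # line is drifting left under us: steer left to follow it
    if right and not left:
        return 1
    return 0

sensor = Mock()
sensor.read_line.return_value = (True, False)   # only the left sensor sees the line
assert steering_correction(sensor) == -1

sensor.read_line.return_value = (True, True)    # centred on the line
assert steering_correction(sensor) == 0
```

Because the logic is isolated behind the sensor interface, it can be fully debugged off-target, exactly the outcome the presenters reported.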

The exercise in TDD with the Arduino target resulted in no logic errors once the code was exercised on the target. However, the behaviour of the robot, in particular the speed of the motors, required some further development and enhancement of the codebase. Given that Arduino boards are typically focused on the school market, I was surprised that C++ was the chosen development language. However, the C++ used on the Arduino is a cut-down version which removes many of the complexities of the full language.

Whilst the associated Android development (on a Motorola XOOM tablet) appeared to be successful in terms of developing a simple user interface to send commands via Bluetooth to control the robot (e.g. stop, start, ...), the development of the application revealed some shortcomings. Although the Android development kit works very well with Eclipse (the code is a mixture of Java and XML) and allows on-target debugging, TDD is less appropriate given the extensive use of callbacks (e.g. onClick, ...). GUI applications cannot be adequately tested within a development environment; fortunately the Android debugger is excellent at pinpointing issues (typically null pointer exceptions). Android emulators help to a limited extent but are not a sufficient replacement for an actual device. Android development for tablets is still evolving as the platform moves from a phone, with relatively simple applications, to potentially far more complex applications. The launch of the Android 4.0 development kit (aka Ice Cream Sandwich) will clearly accelerate the development of more complex applications, which will necessitate sound software engineering principles to deliver quality products.

In summary, the session successfully demonstrated that TDD can be applied in an embedded environment and that, through the use of appropriate open source tools, software development for the expanding Android market can follow tried and tested techniques.

Thursday, November 17, 2011

Open Source Software - the legal implications

I attended the recent BCS Manchester/Society for Computers and Law event on the use of Open Source Software and its legal implications. It was given by Dai Davis, an information technology lawyer who is also a chartered engineer, a very unusual combination, but he clearly knew his material.

Dai started by explaining the basic aspects of copyright law and the implications that this had for software. It was clear that some of the original purposes of copyright law (to prevent copying) were applicable to software, but that the period for which copyright lasts (70 years after the author's death) clearly makes little sense for software. However, a number of points caught my eye:
  • Copyright protects the manifestation and not the subject matter. This means that look and feel is not normally subject to copyright although fonts are.
  • Copyright infringement also includes translating the material. Translation in the software case includes compiling source code as well as rewriting the source code into another language.
  • Copyright protects copying some or all of the material. The amount does not normally matter.
  • Moral rights do not extend to software but do apply to documentation
  • Copyright infringement is both a civil and criminal offence with a maximum of 10 years imprisonment and an unlimited fine.
Dai then explained that the first owner (or creator) of the material owns the copyright; misunderstanding this is the major cause of disputes about copyright. Clearly there are exceptions: if the material is created in the course of employment, the copyright rests with the employer; and the contract under which the material is created may 'assign' the copyright to the purchaser.

All software licences grant the purchaser permission to use the software; otherwise the purchaser would be in breach of copyright. Licences can be restrictive (e.g. by time or number of concurrent users), and all licences are transferable according to EU law.

Copyright of Open Source Software is no different to normal copyright of software but the approach to licencing is very different:
  • Nearly all OSS requires no payment to acquire
  • Free relates to restrictions on use (non-OSS can place restrictions)
  • Open access to source and usage is required (not normally available with non-OSS)
However, the licences are very difficult to enforce, mainly because there has been no loss in terms of monetary value. There has never been a successful prosecution in the UK, although there are a number of examples in Germany (where litigation is cheaper than in the UK) and an example in the US (Jacobsen v Katzer in 2009) where a 'token' settlement of $100,000 was awarded.

Whilst there may be little prospect of being sued for the use of Open Source Software, the biggest issue often comes when a business is sold and OSS is found within a product; this often affects the eventual purchase price of the company. Many businesses don't know where Open Source Software is being used and included within their own products, because it is very difficult to police and manage.

A video of the session was filmed by students from Manchester Metropolitan University, with the resulting video being made available via the BCS Manchester website.