Tuesday, July 20, 2010

Lessons in measurement and data analysis

Recently I attended a very interesting and entertaining lecture by Peter Comer, from Abellio Solutions, to the BCS Quality Management Specialist Interest Group (NW) on lessons learnt in measurement and data analysis following a recent quality audit of an organisation’s quality management system (QMS).

The talk started by outlining the requirements for measurement in ISO 9001 (section 8). Key aspects included:

  • Measuring processes within the QMS to demonstrate conformity with, and the effectiveness of, the QMS
  • Monitoring and measuring processes, products and customer satisfaction with the QMS
  • Handling and controlling defects in products and services
  • Analysing data to determine the suitability and effectiveness of the QMS
  • Continual improvement through corrective and preventive actions

It was noted that every organisation has KPIs (Key Performance Indicators) to measure the effectiveness of its products and services, although each organisation will use its KPIs slightly differently.

Peter outlined the context of the audit: an internal audit, in preparation for a forthcoming external audit, of a medium-sized organisation with a small software group working in the transport domain. A number of minor non-conformances were found, all relatively straightforward to correct. However, after the audit an interesting discussion ensued about the software development process: the group reported that they were finding more bugs in bespoke software development than anticipated, and that these were a lot harder to fix. Initial suggestions included:

  • Look at risk management practices. However, the organisation had already done this, by reviewing an old (2002) paper on risk characteristics downloaded from the web.
  • Look at alternative approaches to software development.

It was the approach to risk which intrigued Peter, and the quality of the paper was immediately questioned. Had it been peer-reviewed? Was it still current and relevant?

Peter then critiqued the paper. It proposed a number of risk characteristics supplemented by designators, and it was quickly observed that there was considerable overlap between the designators. The data had been gathered from a number of different sources, yet there was no indication of what the counting rules were (nor whether they had been applied rigorously and consistently). The designators were not representative of all the risk factors that may affect a development, and said nothing about their relevance to the size of the development. The characteristics focused on cultural issues rather than technical ones, whereas risk characteristics should cover both. Finally, just counting risk occurrences does not demonstrate the impact a risk could have on a project.
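
To illustrate that last point, consider a purely hypothetical example (the risks and figures below are my own invention, not taken from the paper): ranking risks by raw occurrence count can differ sharply from ranking them by exposure, i.e. probability times impact.

    # Hypothetical illustration: ranking risks by raw occurrence count
    # versus by exposure (probability x impact). All figures are invented.

    risks = [
        # (name, occurrences, probability of materialising, impact in person-days)
        ("Unclear requirements",   12, 0.6,  5),
        ("Key staff turnover",      2, 0.3, 40),
        ("Third-party API change",  5, 0.2, 10),
    ]

    # Ranking by raw count alone puts the most frequent risk first...
    by_count = sorted(risks, key=lambda r: r[1], reverse=True)
    print("By count:   ", [name for name, *_ in by_count])

    # ...but ranking by exposure reflects what each risk could actually
    # cost the project, and tells a different story.
    by_exposure = sorted(risks, key=lambda r: r[2] * r[3], reverse=True)
    print("By exposure:", [name for name, *_ in by_exposure])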

Turning to the conclusions, Peter considered whether they were valid. If you analysed the data in a different way, would the conclusions be different? Can we be assured that the data was analysed by someone with prior experience of software development? It was observed that the designators had been shaped to particular criteria, which is appropriate, but one size doesn’t fit all. Only by analysing the data in a number of different ways can its significance be established; doing so can also show whether the data is unbalanced, which can in turn lead to skewed results. In the paper under review, it was clear that qualitative data was being used quantitatively.
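
A small, invented illustration of why analysing the same data in more than one way matters (Python; none of these figures come from the audited organisation):

    # Hypothetical illustration: the same data, analysed two ways.
    # Fix times in days for bugs in a bespoke component - invented figures.
    import statistics

    fix_times = [1, 1, 2, 2, 2, 3, 3, 4, 45, 60]    # two outliers dominate

    print("mean:  ", statistics.mean(fix_times))    # 12.3 - 'bugs are hard to fix'
    print("median:", statistics.median(fix_times))  # 2.5  - 'most bugs are routine'

    # A large gap between mean and median flags an unbalanced sample:
    # a handful of pathological bugs, rather than a generally difficult
    # code base, may be driving the conclusion.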

Peter concluded by stating that ignoring simple objective measures can lead to the wrong corrective approach, one that may not be appropriate to the organisation’s process and product. This is because ‘you don’t know what you don’t know’. It is essential to formally define what to count (this is a metric), with the aim of making the data objective. Whatever the method of collection, it must be stated, to ensure that it is applied consistently.
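
To make that concrete, here is a minimal sketch of what a formally defined metric might look like; the defect fields, counting rules and data are my own invented example, not anything presented in the talk:

    # A minimal sketch of a formally defined metric, with the counting
    # rules stated explicitly so that collection stays consistent.
    # The fields and rules are an invented example, not from the talk.
    from dataclasses import dataclass

    @dataclass
    class Defect:
        status: str    # e.g. "confirmed", "duplicate", "rejected"
        severity: str  # e.g. "critical", "major", "minor"
        phase: str     # phase in which the defect was found

    def confirmed_defect_count(defects):
        """Metric: confirmed defect count.

        Counting rules (stated up front, per the talk's advice):
          - count only defects with status "confirmed";
          - duplicates and rejected reports are excluded;
          - severity and phase are recorded but do not affect the count.
        """
        return sum(1 for d in defects if d.status == "confirmed")

    bug_tracker_export = [
        Defect("confirmed", "major", "system test"),
        Defect("duplicate", "major", "system test"),
        Defect("confirmed", "minor", "field"),
        Defect("rejected", "critical", "review"),
    ]
    print(confirmed_defect_count(bug_tracker_export))  # 2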

The talk was very informative and left much food for thought. I have always aimed to automate the collection process to make it consistent. However, this counts for little if the data is interpreted incorrectly or inconsistently. It is also difficult to know whether you are collecting the right data, but that is what experience is for!

Sunday, September 20, 2009

Open Source Certification

I have just been browsing the relaunched website of the British Computer Society and came across an interesting article on Open Source Certification. Now there are some pretty important and successful open source applications out there, but there is limited experience of 'certification' in the same way that you can become, for example, Microsoft certified. Red Hat does offer some courses for you to become a Red Hat Certified Engineer (RHCE), but this is the exception among open source applications.

The big question is: does it matter? It all depends on your point of view regarding certification. Does the fact that a product is 'certified' make it a better product? Does the fact that an engineer is 'certified' make him a better engineer than one who isn't? As in all cases, it depends. A certified engineer should certainly have independently demonstrated a degree of competence in using or configuring a product. However, certification without experience to back up the qualification is no use to anyone. Similarly, a certified product might demonstrate that the product has become so large and cumbersome that it really needs to be entrusted to a select band of engineers who have demonstrated that they understand the product better than those who have merely learned to tame it to meet their specific requirements. A certified engineer should also probably be aware of a few tricks and tips which are not widely known.

So should all open source products offer a certification programme? In my view, no. However, there is clearly a point at which certification becomes necessary or expected by the customer community. I would suggest that this can occur in a number of cases:
  • When the product is becoming widely accepted as one of the market leaders across multiple platforms.
  • When the product is developed along 'commercial' lines, with a dedicated funding line.
In either case, a professional certification programme should be promoted and managed, while recognizing that significant experience of a product should automatically be rewarded (on request) with certification, particularly where that experience was gained during the formative years of the product.

A similar approach was adopted a few years ago by the BCS when it launched the Chartered IT Professional (CITP) qualification. To date, this has yet to become a widely accepted, recognized (and demanded) qualification for key roles within the IT industry. Until recognized qualifications or certifications within the IT industry become a prerequisite for certain roles, the certifications people achieve will be little more than another certificate to put on the wall or in a drawer. Until that changes, open source certification will remain little more than a commercial exercise in raising funds for future product development.

Friday, June 6, 2008

Assessing code quality

How do you assess a software module's quality? It is a question I have been struggling with for some time as I try to perform a peer review of a large code base.

Over time, a software module evolves from its intended form into something less than beautiful, as bugs are discovered (and fixed) and enhancements beyond the original requirements are implemented. This is particularly true of code developed on a multi-person project, where personnel change and a module is often modified by several different engineers. Although I adhere to the rule that changes should follow the original author's style (and how many people update the comment at the top of the file to record that they have become one of the authors, assuming that information isn't added automatically by the configuration management system?), it can become increasingly difficult to make changes.

So what is the best way to assess code quality throughout its development?
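
I don't have a definitive answer, but one crude starting point is to track a few simple, objective proxies per module and watch how they trend between reviews. A minimal sketch in Python (the choice of proxies and the file name are my own assumptions, not an established standard):

    # A crude sketch: a few objective proxies for a module's quality,
    # meant to be tracked over time rather than judged in isolation.
    # The choice of proxies and the file name are illustrative assumptions.
    import re

    def module_stats(path):
        with open(path, encoding="utf-8") as f:
            lines = f.readlines()
        code = [line for line in lines
                if line.strip() and not line.lstrip().startswith("#")]
        comments = [line for line in lines if line.lstrip().startswith("#")]
        # Crude branch count as a rough stand-in for cyclomatic complexity.
        branches = sum(len(re.findall(r"\b(if|elif|for|while|except)\b", line))
                       for line in code)
        return {
            "loc": len(code),
            "comment_ratio": round(len(comments) / max(len(code), 1), 2),
            "branches": branches,
        }

    print(module_stats("some_module.py"))  # hypothetical module under review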