
Call for Quality: Open Source Software Quality Observation

1 Quality Team, KDE e.V. {groot,sebas}@kde.org
2 Sirius Corporation Ltd. [email protected]
3 Athens University of Economics and Business [email protected]

Summary. This paper describes how a Software Quality Observatory works to evaluate and quantify the quality of an Open Source project. Such a quality measurement can be used by organizations intending to deploy an Open Source solution to pick one of the available projects for use. We offer a case description of how the Software Quality Observatory will be applied to the KDE project to document and evaluate its quality practices for outsiders.
Key words: Open Source software, software quality evaluation, static code analysis

1 Introduction

The software development process is well known as a contributor to software product quality, leading to the application of software process improvement as a key technique for the overall improvement of the software product. This holds for any form of software development. Within the Open Source paradigm, the leverage of software quality data can be as useful for the end users as it is for the developers.
From the perspective of a potential user of a piece of Open Source software (OSS), it can be very difficult to choose one of a myriad of solutions to a given problem. There are often dozens of Open Source solutions which "compete" for users and development resources. They may differ in quality, features, requirements, and so on. By making the quality aspects of a given project explicit, it becomes easier for the user to choose a solution based on the quality of the software. Here the Software Quality Observatory (SQO) can play a useful role in quantifying the quality of the processes employed by a given OSS project.
With ever-increasing numbers of projects and developers on SourceForge (www.sourceforge.net), it is clear that the OSS paradigm is of interest to those wishing to contribute to the creation of software. By using scientifically obtained software quality data, such as that which the Software Quality Observatory will produce, it may be possible to encourage similar growth within the OSS user community.
2 The Benefits of Software Quality Observation

As participation in Open Source development has grown over the past decade, so too has the user base of the software. Increasingly, OSS is being viewed as a viable alternative to proprietary (closed source) software, not just by technically aware developers, but also by non-developers. European research projects, such as COSPA (www.cospa-project.org) and CALIBRE (www.calibre.ie), have raised awareness of OSS development through specific targeting of public administration bodies and industrial organisations, especially small and medium enterprises (SMEs).
As the OSS paradigm makes progress within these organisations, any potential software procurer is faced with some important questions which, currently, cannot be answered with any real assurance:

- Many OSS projects are very similar. How do we choose between them? Which is the most appropriate system for the company's IT infrastructure?
- How can we distinguish the "good" and "bad" projects?
- How can we reason about the quality of a software product in order to trust its future development?

Unfortunately these organisations often have nothing more than word-of-mouth on which to base their judgments of OSS products. With 109,707 projects currently hosted on SourceForge (data from the FLOSSMole project, 02/12/05), it is understandable that products of excellent quality may be overlooked. It is possible to supplement the word-of-mouth tradition with some rudimentary data that is available from hosting sites: download numbers, project activity, etc. Unfortunately this data is easily skewed and can present a product in an inaccurate manner.
Quality can be a very subjective measure of many aspects of a system in combination: suitability for purpose, reliability, aesthetics, etc. Software quality is formally defined by the ISO/IEC 9126 standard as comprising six characteristics, but no measurement techniques are defined. It has been suggested that the external quality characteristics of a software system are directly related to its internal quality characteristics. It is therefore possible to evaluate the quality of software through its source code, and of a project by considering other data sources intimately related to the project's code, such as bug-fix databases or mailing lists.
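One way to make the link between internal and external quality concrete is to tabulate which measurable internal indicators speak to each ISO/IEC 9126 characteristic. In the sketch below, the six characteristic names come from the standard, but the indicator pairings are illustrative assumptions, not prescribed by ISO/IEC 9126 itself.

    # Illustrative mapping from the six ISO/IEC 9126 characteristics to
    # measurable internal indicators. The characteristic names come from
    # the standard; the indicator pairings are assumptions of this sketch.
    ISO9126_INDICATORS = {
        "functionality":   ["test coverage", "open feature requests"],
        "reliability":     ["open bug count", "crash report rate"],
        "usability":       ["UI guideline deviations", "documentation coverage"],
        "efficiency":      ["benchmark results", "memory profile"],
        "maintainability": ["cyclomatic complexity", "comment density"],
        "portability":     ["platforms covered by automated builds"],
    }

    for characteristic, indicators in ISO9126_INDICATORS.items():
        print(f"{characteristic}: {', '.join(indicators)}")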
In the long run it is crucial for OSS developers and their projects to know quantitatively what the quality of their product is. The volunteer nature of OSS makes "managing" such a project to include quality control a matter of motivating volunteers to behave in ways consistent with improving quality [2].
By fully understanding their software quality, OSS developers are able to promote and improve their products and processes. It is also crucial in helping end-users make informed decisions about software procurement.
3 Why SQO of Open Source Software differs from that of Closed Software

There are two aspects that play a role in the quality assessment of software: the quality of the product itself and the quality of the product team. The main differences between quality assessment (QA) of Open Source software and QA of closed source software naturally relate to the availability of the source code and the transparency of the development process. Third-party quality assessment is facilitated by the availability of the source code and the openness of the development process.
Quality assessment of OSS software is usually much more transparent than that of closed source software, at least to quality observers on the "outside" [2]. Most OSS projects use an Open Source tool-chain to create their software. Those tools, compilers for example, have considerable influence on the quality of the products and therefore need to be taken into account when assessing the quality of a piece of software. Furthermore, discussion about quality issues often happens in public, on mailing lists and message boards, which adds transparency. Third-party quality assessment of closed source software involves guessing in most cases.
The number of open bugs might give another impression of the quality of a product. This number is to be taken with a grain of salt, since a high number of bugs might indicate that there is a lot of testing, or that there are a lot of people reporting bugs. The type of bugs, response times and their frequency are important. Merely counting the number of bugs reveals more about the community behind the product than about the product itself.
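As an illustration of the kind of measurement this suggests, here is a minimal Python sketch that computes the median time-to-first-response from a bug-tracker export. The CSV layout and its field names ("opened", "first_response") are assumptions for the example, not the schema of any real tracker.

    # Minimal sketch: median time-to-first-response for bug reports.
    # The input format (a CSV with "opened" and "first_response" ISO
    # dates) is a hypothetical export, not any real tracker's schema.
    import csv
    from datetime import datetime
    from statistics import median

    def response_times(path):
        """Yield hours between a bug being opened and its first response."""
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                if not row["first_response"]:
                    continue  # still unanswered; skip
                opened = datetime.fromisoformat(row["opened"])
                answered = datetime.fromisoformat(row["first_response"])
                yield (answered - opened).total_seconds() / 3600

    if __name__ == "__main__":
        hours = list(response_times("bugs.csv"))
        print(f"{len(hours)} answered bugs, "
              f"median first response: {median(hours):.1f} hours")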
The number of code check-ins gives a good idea of the activity level of the development of the product. Products that receive a lot of attention from developers are likely to be fixed faster than products that have been abandoned. At the same time, very active development might also indicate that the product is unstable: many changes are being made, which increases the amount of effort needed to assess and maintain a certain level of quality.
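A minimal sketch of such a check-in measurement, counting commits per month, might look as follows. It assumes a Git repository for convenience; projects of the period typically used CVS or Subversion, so this is an illustrative stand-in rather than the Observatory's actual tooling.

    # Minimal sketch: commits per month from "git log" output.
    # Assumes a Git checkout; an illustrative stand-in only.
    import subprocess
    from collections import Counter

    def commits_per_month(repo_path):
        """Return a Counter mapping 'YYYY-MM' to number of commits."""
        out = subprocess.run(
            ["git", "-C", repo_path, "log", "--date=format:%Y-%m",
             "--pretty=format:%ad"],
            capture_output=True, text=True, check=True,
        ).stdout
        return Counter(out.splitlines())

    if __name__ == "__main__":
        for month, count in sorted(commits_per_month(".").items()):
            print(month, count)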
Assessing the product team is another aspect where quality assessment of OSS products differs from QA of closed source software. The term Product Team refers to all participants in the project: engineers, documentation team, translators, and of course QA people [3]. In closed software products, the number and skill level of developers is usually kept secret by the company; for an OSS project, the number of participants can at least be estimated by educated guessing, based on commit logs and the source code itself.

The size of the team is an important issue when examining the longevity of the product, and thus the chance of having the product supported in the future. The Open Source Maturity Model (OSMM) [2] uses team size explicitly as a numeric indicator of quality.
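As a sketch of such an educated guess, distinct commit authors can be counted from the version-control history (again assuming Git for convenience). The result is only a lower bound on team size, since translators and documenters who do not commit directly are invisible to it.

    # Minimal sketch: estimate team size as distinct commit authors.
    # A lower bound only: contributors who never commit directly
    # (translators, documenters) are not counted.
    import subprocess

    def distinct_authors(repo_path):
        out = subprocess.run(
            ["git", "-C", repo_path, "log", "--pretty=format:%ae"],
            capture_output=True, text=True, check=True,
        ).stdout
        return set(out.splitlines())

    if __name__ == "__main__":
        print("estimated team size:", len(distinct_authors(".")))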
4 The Software Quality Observatory

The automated analysis of source code as a quality measurement is not a new concept. In recent years, the growth of OSS development has provided a wealth of code on which new techniques can be developed. Previous work in this area is often based on metric analysis: statement count, program depth, number of executable paths or McCabe's cyclomatic complexity [5], for example. In their work on metric-based analysis, Stamelos et al. [7] observed good quality code within Open Source. Other techniques, such as neural networks [4], are capable not only of evaluating code, but also of predicting future code quality.
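To make one of these metrics concrete, the following sketch approximates McCabe's cyclomatic complexity for Python functions by counting decision points in the abstract syntax tree. Real metric tools parse the project's own language (C or C++, say); Python's ast module simply keeps the illustration self-contained, and the branch-node list is a simplification of the full metric.

    # Minimal sketch: approximate cyclomatic complexity per function as
    # 1 + the number of decision points (if/for/while/except/and/or/...).
    # A simplification of McCabe's metric, for illustration only.
    import ast

    BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                    ast.BoolOp, ast.IfExp)

    def cyclomatic(func):
        return 1 + sum(isinstance(node, BRANCH_NODES)
                       for node in ast.walk(func))

    def report(source):
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.FunctionDef):
                print(node.name, cyclomatic(node))

    if __name__ == "__main__":
        with open("example.py") as f:
            report(f.read())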
The Software Quality Observatory aims to provide a platform with a pluggable architecture, as outlined in Fig. 1, for software development organisations. The platform will satisfy four objectives (a sketch of a possible plug-in interface follows the list):

- Promote the use of OSS through scientific evidence of its perceived quality.
- Enhance software engineers' ability to quantify software quality.
- Introduce information extraction, data mining and unsupervised learning to the software engineering discipline and exploit the possible synergies between the two domains using novel techniques and algorithms.
- Provide the basis for an integrated software quality management product.
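The paper does not specify the plug-in API, so the following is a hypothetical minimal sketch of what such a pluggable analysis platform could look like: every analysis component implements one small interface and is driven by the platform over a shared data store. All names here (AnalysisPlugin, run_all, and so on) are illustrative assumptions, not SQO-OSS's actual API.

    # Hypothetical sketch of a pluggable analysis platform; class and
    # method names are illustrative, not SQO-OSS's actual API.
    from abc import ABC, abstractmethod

    class AnalysisPlugin(ABC):
        """One analysis component (metrics, repository, text mining...)."""
        name: str

        @abstractmethod
        def run(self, datastore: dict) -> dict:
            """Read structured project data, return analysis results."""

    class LineCount(AnalysisPlugin):
        """Trivial metric plug-in: lines per source file."""
        name = "line-count"

        def run(self, datastore: dict) -> dict:
            sources = datastore.get("sources", {})
            return {path: len(text.splitlines())
                    for path, text in sources.items()}

    def run_all(plugins, datastore):
        """Drive every registered plugin over the shared data store."""
        return {p.name: p.run(datastore) for p in plugins}

    if __name__ == "__main__":
        store = {"sources": {"main.c": "int main(void)\n{\n    return 0;\n}\n"}}
        print(run_all([LineCount()], store))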
SQO-OSS is based around three distinct processing subsystems that share a common data store. The data acquisition subsystem processes unstructured project data and feeds the resultant structured data to the analysis stages. The user interaction subsystem presents analysis results to the user and accepts input to adjust the analysis parameters. The components of the data acquisition subsystem are responsible for extracting data useful for analysis from the raw data available from the range of sources within software development projects. Metric analysis of source code is well known and an important aspect of this system. Repository analysis will examine the commit behaviour of developers in response to user requests and security issues. The information extraction component will extract structured information from mailing lists and other textual sources in order to feed higher-level analyses.
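As a small illustration of the information extraction step, the sketch below turns an unstructured mailing-list archive into structured records using Python's standard mailbox module. The archive filename is hypothetical, and anything beyond plain header fields (threading, topic classification) is where the component's higher-level analyses would begin.

    # Minimal sketch: turn an unstructured mailing-list archive (mbox)
    # into structured records for higher-level analysis.
    import mailbox

    def extract(mbox_path):
        """Yield (sender, date, subject) for every message in the archive."""
        for msg in mailbox.mbox(mbox_path):
            yield msg["From"], msg["Date"], msg["Subject"]

    if __name__ == "__main__":
        for sender, date, subject in extract("kde-devel.mbox"):
            print(date, "|", sender, "|", subject)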
The data mining component will use structured information from project sources to predict the behaviour of the project with respect to quality characteristics and to classify projects according to their general quality measurements.
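The paper leaves the mining techniques open; as one hedged possibility, a decision tree over per-project metrics could produce such a classification. The features, labels and training values below are invented purely for illustration.

    # Hypothetical sketch: classify projects from structured metrics with
    # a decision tree. Features, labels and values are invented for
    # illustration; SQO-OSS's actual mining techniques are not specified.
    from sklearn.tree import DecisionTreeClassifier

    # Columns: commits/month, distinct committers, median bug response (h).
    X = [[250, 40, 12], [180, 25, 30], [3, 1, 400], [10, 2, 200]]
    y = ["healthy", "healthy", "at-risk", "at-risk"]

    clf = DecisionTreeClassifier(max_depth=2).fit(X, y)
    print(clf.predict([[90, 12, 48]]))  # e.g. ['healthy']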
The statistical analysis component will apply statistical estimation models in order to predict events in the development life-cycle that can have an impact on the product's quality.

Fig. 1. A schematic representation of the proposed system (figure not reproduced here).
5 Case Study: KDE

The KDE project (www.kde.org) is one of the largest desktop-oriented projects in the world. Its scope encompasses the entire desktop (i.e. end-user use of a computer, including web-surfing, email, office applications, and games). It is a confederation of smaller projects, all of which use a single platform (the KDE libraries) for consistency. The project has some 1200 regular contributors and many hundreds more translators. The source code has grown to over 6 million lines of C++ in 10 years of "old-school" hacking.
KDE's quality control system has traditionally been one of "compile early, compile often." By having hundreds of contributors poring over the code-base on a wide range of operating systems and architectures, bugs were usually found quickly. Certainly most glaring deficiencies are quickly found, but more subtle bugs may not be.
In terms of formalized quality control, there is a commit policy which states when something may be committed to the KDE repository [1], but this does not rise much above the level of "if it compiles, commit it." Only recently has a concerted push been made for the adoption of unit tests within the KDE libraries. Adoption of the notion of writing unit tests has been enthusiastic, but there are questions of coverage and completeness. Automated regression testing is slowly being implemented, but here the lack of a standardized platform for running the tests hampers the adoption of those automated tests.
Documentation (user and API) quality has become an issue, and quality measurements are now done regularly. User interface guidelines have been formulated, but not enforced. Once again, there is an effort underway to measure (deviations from) the interface guidelines. This produces discouraging numbers, and has not yet been successfully automated in a large-scale manner.
The KDE project expects the Software Quality Observatory to extend and enhance the quality measurements which it has begun to implement, in order to guide the actions of the KDE developers. Whether the availability of quality metrics for the code base has an effect on the "average" volunteer developer remains to be seen; experiences with the existing tools suggest that fixing bugs found by automatic techniques does not score high on the "fun" chart for developers. For the core KDE developers (of which there are perhaps 100), the existence of the quality metrics produced by the SQO may guide their efforts in bug fixing and yield more productive code freezes prior to release.
6 Conclusions

Software quality observation has long been performed as a crucial element in software process improvement. However, established methods of quality observation have mostly focused on source code and overlooked other available data sources, e.g. mailing lists or bug-fix data [6].
Many OSS projects, such as KDE, have established processes for the maintenance of software quality. However, these can only be of limited use when the actual quality of the product is still unknown. By scientifically evaluating the quality of a software product, and not just the process, software engineers can leverage this knowledge in many ways. By providing this quality evaluation, the SQO-OSS system will allow engineers to make informed choices when addressing their development process and allow them to better maintain quality in the future. The developers and their supporting organisations can also use this evaluation to promote their product. This is especially crucial within the OSS world, where there is a wealth of choice.
Ultimately, the SQO-OSS system will help OSS developers write better software and enable potential users to make better-informed choices.
References

1. KDE Developer's Corner. KDE commit policy. On http://developer.kde.org/.
2. Bernard Golden. Succeeding with Open Source. Addison-Wesley, 2005.
3. Quality Management for Products and Programs.
4. R. Kumar, S. Rai, and J. L. Trahan. Neural-network techniques for software-quality evaluation. In Proceedings of the Annual Reliability and Maintainability Symposium, 1998.
5. T. McCabe. A complexity measure. IEEE Transactions on Software Engineering, SE-2(4):308-320, 1976.
6. Diomidis Spinellis. Code Quality: The Open Source Perspective. Addison-Wesley, 2006.
7. Ioannis Stamelos, Lefteris Angelis, Apostolos Oikonomou, and Georgios L. Bleris. Code quality analysis in open source software development. Information Systems Journal, 12(1):43-60, January 2002.
