Minutes of the MiniTOP on 2010-03-16
Setting
The MiniTOP was held via phone.
Participants:
- Markus Warg
- Ulrich Schröter
- Michael Tänzer
Minutes
- Markus reported the current status of the Git repository (git://git-cacert.it-sls.de/cacert.git):
- The repository has been set up and is publicly accessible; Software Assessors get push access
- Now we should start using it to catch errors from the usual clash of theory vs. practice (i.e. commit the first patches)
- Michael said he will develop workflow documentation, which will reside on Software/DevelopmentWorkflow (work in progress)
- Uli said that Wytze (who didn't take part in the telco) would like emergency patches to be included in the workflows
- Michael stated that emergency patches shouldn't be hard to handle and could be integrated into the normal workflow (with just a few differences from the non-emergency procedure), and that he will cover them in the workflow documentation.
- Somehow the topic changed to the test environment
- There should be a Test Management System (TMS) that allows testers to easily set up the environment they want to test in (Assurance Points, flags, etc.)
- Michael noted that the influence of the TMS on the system under test should be minimised, so ideally it would act as a normal user of the system (through the normal web front end); but that may be too complicated to realise at the moment, so we should stick to direct database access
- We discussed several ways in which accounts could be created:
- There could be an option in the TMS to create a new account with a specified number of Assurance Points (this is the variant preferred by Uli as it's simple to use)
- We could use the normal account creation procedure in the web interface; Assurance Points can then be issued via the TMS -> outbound emails have to work (preferred by Michael as it has the least influence on the system under test and account creation gets tested too; drawback: it is tedious if you need to set up multiple accounts -> probably both options should be available, see the sketch after this list)
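- A minimal sketch of how the TMS could issue Assurance Points by writing directly to the test database, as discussed above. Everything in it is an assumption for illustration: the table and column names (users, notary), the connection details and the mysql-connector-python library are not taken from the actual CAcert code.

    # tms_grant_points.py - sketch: grant Assurance Points to a test account
    # by inserting an assurance record directly into the test database.
    # Table and column names are assumptions, not the verified CAcert schema.
    import mysql.connector

    def grant_points(db, email, points):
        cur = db.cursor()
        # look up the account that was created through the normal web interface
        cur.execute("SELECT id FROM users WHERE email = %s", (email,))
        row = cur.fetchone()
        if row is None:
            raise ValueError("no such test account: %s" % email)
        # record a dummy assurance; assurer id 0 stands in for "issued by the TMS"
        cur.execute(
            "INSERT INTO notary (`from`, `to`, points, `when`) "
            "VALUES (0, %s, %s, NOW())",
            (row[0], points),
        )
        db.commit()

    if __name__ == "__main__":
        db = mysql.connector.connect(host="localhost", user="tms",
                                     password="secret", database="cacert_test")
        grant_points(db, "tester@example.org", 100)
        db.close()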
- There should be a generated basic user base (dummy accounts) available to make performance testing possible (consensus); a sketch follows below
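- A sketch of how such a dummy user base could be generated in bulk; again the users table layout, the column names and the use of mysql-connector-python are assumptions for illustration only, not the real CAcert schema.

    # generate_dummy_users.py - sketch: bulk-create dummy accounts so the test
    # system has a realistic user base for performance tests.
    # The users table layout is an assumption, not the verified CAcert schema.
    import hashlib
    import mysql.connector

    def create_dummy_users(db, count=1000):
        cur = db.cursor()
        pw_hash = hashlib.sha1(b"dummy-password").hexdigest()
        rows = [("dummy%05d@test.invalid" % i, pw_hash, "Dummy", "User %05d" % i)
                for i in range(count)]
        cur.executemany(
            "INSERT INTO users (email, password, fname, lname) "
            "VALUES (%s, %s, %s, %s)",
            rows,
        )
        db.commit()

    if __name__ == "__main__":
        db = mysql.connector.connect(host="localhost", user="tms",
                                     password="secret", database="cacert_test")
        create_dummy_users(db, count=1000)
        db.close()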
- outbound mails should be possible (this is one of the points where test1.cacert.at fails)
- should be done with minimal changes to the source
- Markus: there's no one point where you just change a line in the source and then it works -> try not to touch the source at all
- Proposal:
- redirect all outbound connections to port 25 to localhost via a firewall rule
- have a mail server running on localhost
- tell the mail server to receive all mails
- use a webmailer to show all mails
- Discussion whether everyone should be allowed to read all mails sent from the test system (Michael: no problem, it's just a test system and no one should enter critical data; Markus: an opponent of CAcert could use this for bad things (denial of service attacks, if I remember correctly); agreement: we could just try it and fix it if it really breaks); a sketch of the proposed mail sink follows below
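- A sketch of the proposed catch-all mail sink: assuming a firewall rule already redirects all outbound port 25 connections to localhost, a small SMTP server accepts every message and stores it locally, where a webmailer could expose it to the testers. The aiosmtpd library, the port number and the spool path are assumptions, not decisions from the meeting; any catch-all mail server would do.

    # mail_sink.py - sketch: accept every mail the test system tries to send and
    # store it locally. Assumes a firewall rule redirects outbound port 25
    # connections to this host; aiosmtpd is just one possible implementation.
    import time
    from aiosmtpd.controller import Controller

    class CatchAllHandler:
        async def handle_DATA(self, server, session, envelope):
            # store each message as a flat file; a webmailer (or a local IMAP
            # server) could expose these files to the testers
            fname = "/var/spool/testmails/%f.eml" % time.time()
            with open(fname, "wb") as f:
                f.write(b"X-Envelope-From: " + envelope.mail_from.encode() + b"\n")
                f.write(b"X-Envelope-To: " + ", ".join(envelope.rcpt_tos).encode() + b"\n")
                f.write(envelope.content)
            return "250 Message accepted for delivery"

    if __name__ == "__main__":
        # listening on 2525 avoids running as root; the firewall rule would
        # redirect localhost:25 -> localhost:2525
        controller = Controller(CatchAllHandler(), hostname="127.0.0.1", port=2525)
        controller.start()
        input("Catch-all SMTP sink running, press Enter to stop\n")
        controller.stop()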
- Performance Testing
- Markus: the test system is a virtual machine -> very different performance from the live system -> performance tests may not provide useful results
- Michael: not if you take the absolute numbers, but if you compare e.g. the status quo to the change a patch introduces, you may still be able to base decisions on the results (see the sketch below)
- Generally: scalability estimates (how many users until the system breaks) and stress tests (many user actions in parallel) are not very "testable" because of VM vs. real hardware; we will probably have to guesstimate. This is one of the areas where the live system will have to pose as the test environment (sad but true)
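- A sketch of the relative measurement idea: time the same request against a baseline and a patched instance of the test system and only compare the ratio, never the absolute numbers (which the VM distorts). The URLs, the page and the repetition count are placeholders.

    # compare_performance.py - sketch: compare a baseline and a patched test
    # system by the ratio of their response times; absolute numbers from a VM
    # are not meaningful. URLs and repetition count are placeholders.
    import time
    import urllib.request

    def average_response_time(url, repetitions=50):
        total = 0.0
        for _ in range(repetitions):
            start = time.monotonic()
            with urllib.request.urlopen(url) as response:
                response.read()
            total += time.monotonic() - start
        return total / repetitions

    if __name__ == "__main__":
        baseline = average_response_time("https://test-baseline.example.invalid/account.php")
        patched = average_response_time("https://test-patched.example.invalid/account.php")
        print("baseline: %.3fs  patched: %.3fs  ratio: %.2f"
              % (baseline, patched, patched / baseline))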
- How should the Test Team be coordinated?
- There should be a Test Team Leader (yay, yet another t/l)
- shows "we really take testing serious"
- does recruiting
- How to make testing attractive?
- Testing is generally a boring task, especially if it's been a long time since you found the last bug (because hopefully our patches are perfect and there are no errors to spot) -> missing instant gratification
- Uli: we need constant recruiting to compensate for drop-outs ("Testers, testers, testers" – editor's remark)
- Michael: have a team of, say, 30 people and choose 5 of them to do each week's testing -> each tester only needs to do one week of testing every six weeks, which avoids the boredom of having to do the same things every week (see the rotation sketch below)
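- A sketch of that rotation scheme: with a pool of 30 testers and 5 on duty per week, each tester serves one week in six. Pool size, group size and the tester names are placeholders.

    # tester_rotation.py - sketch of the proposed rotation: a pool of 30
    # testers, 5 on duty per week, so each tester serves one week in six.
    POOL_SIZE = 30
    GROUP_SIZE = 5

    testers = ["tester%02d" % i for i in range(POOL_SIZE)]  # placeholder names

    def on_duty(week_number):
        # rotate through the pool in fixed steps of GROUP_SIZE
        start = (week_number * GROUP_SIZE) % POOL_SIZE
        return [testers[(start + i) % POOL_SIZE] for i in range(GROUP_SIZE)]

    for week in range(7):
        print("week %d: %s" % (week, ", ".join(on_duty(week))))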
- How to document testing and the results of it?
- Wiki page showing the patch of the week to test
- Wiki describes the procedure (How to test?)
- How to report test failures?
- In the wiki (Michael: doesn't keep the relation between the patch under test and the error spotted; Uli: maybe too complicated for testers, we want testing to be something everyone is able to do)
- Michael: how about documenting it in the bug tracker? -> keeps the history of a patch/bug and is reasonably simple to use (we probably need to validate that claim by doing some user testing); one stumbling block could be that the user has to create an account on the bug tracker -> allow anonymous reporter access
- Number of patches under test
- Michael assumed that more than one patch would be tested at once
- Markus corrected him on that matter: only one patch, the patch of the week, will be tested at a time
- Michael expressed his concern that this means only 52 patches per year; we probably have more than that, even if quite a few of them will be small. He left this decision to the Assessment Team; we will notice if we get significantly more patches to test than we get through testing
- Who gets to do the TMS?
- Michael: I will ask a community member who runs a hosting company and has shown some willingness to help but didn't know how
- Markus: if nobody volunteers, I could do it, provided someone tells me how I need to modify the database to get the job done
- We need a basic version which can at least issue Assurance Points and set the admin flag, so that testing can start ASAP; other functionality may be integrated as we go (see the sketch below)
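- A sketch of the flag-setting half of that basic version, alongside the Assurance Points sketch above: the admin flag is set directly in the database. As before, the users table and its admin column are assumptions about the schema, not verified facts.

    # tms_set_admin.py - sketch: set the admin flag on a test account by
    # updating the database directly. Table and column names are assumptions.
    import mysql.connector

    def set_admin_flag(db, email, enabled=True):
        cur = db.cursor()
        cur.execute("UPDATE users SET admin = %s WHERE email = %s",
                    (1 if enabled else 0, email))
        if cur.rowcount == 0:
            raise ValueError("no such test account: %s" % email)
        db.commit()

    if __name__ == "__main__":
        db = mysql.connector.connect(host="localhost", user="tms",
                                     password="secret", database="cacert_test")
        set_admin_flag(db, "tester@example.org")
        db.close()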