Team Reports 2014
Team Leaders are encouraged to present a report for their team.
Policy Group
For the first part of the year there was little activity in the Policy Group. At the beginning of 2014 there were some mails on various issues, but none of them grew into a longer discussion or led to a policy or policy change.
One of those points has to be addressed in the coming year, as an Arbitration ruling requires it: a DRP change covering appeals in running cases.
Around the time the Policy Officer was appointed in February, quite lively work on the CCA started. It produced nearly 400 mails on the policy group mailing list (396, if counted correctly) from 22 participants, and definitely some more communication "in the background".
Currently the updated version, combining 12 proposals, is being voted on as p20140709. The vote is open until 2014-07-27. All Community members are invited to participate.
There were also other, individual policy proposals. One of them, for the DRP (see above), got enough support to be attended to in the near future; it is kept separate so as not to mix it with the big CCA change. Some other proposals were placed privately in the hands of the Policy Officer, to be brought into the discussion at an appropriate time after the CCA update is finished.
All together there were 490 mails on the policy group mailing list, 94 of them not related to the CCA update discussion. In 2013 there were only 12.
There was only one policy decision in July 2013 - June 2014:
p20140427 Eva for Policy Officer. It was carried with 11:0.
The biggest change regarding the team itself was that Eva Stöwe became the Policy Officer, both by a board motion and by a policy group motion.
Different members joined or left the policy group, but this was not specifically tracked, as it is of no relevance for the function of the group. On 2014-06-30 there were 269 email addresses subscribed to the policy group list; since then there have been 5 new subscriptions. It is known that some people have joined the list with multiple email addresses.
Audit Team
In November 2013 the board issued a motion to appoint a new internal Auditor after years without an audit function.
In the Auditor's first half year in office the following actions have been undertaken:
- an Incidents Page for Information Security and Privacy breaches has been created
- 4 incidents have been handled
- a new Audit Landing Page has been created; outdated Audit pages were updated or archived
- an Audit Programme for 2014 - 2016 has been developed and moved by board
- an Audit Plan for 2014 has been created
Due to the incidents, advising Arbitration, and necessary Policy activities related to Audit, planned audit progress is behind schedule; only one of the three planned activities (the Arbitrated Background Check Process Conformity Check) has been started.
Incident Types in FY 2013 / 2014

| Type | Amount |
| Loss of Credentials | 1 |
| Abuse of power | 2 |
| Data Privacy Breach | 2 |
To build a strong audit capability, further internal Auditors are welcome to participate in the Audit Team.
Benedikt Heintel
CAcert Lead Auditor
Arbitration
Case statistics
The first table shows an overview of the state of the arbitration cases at the end of June 2014. (The last column header is short for July 2013 - June 2014.)
Arbitration activity per year

| Year | created | open | in work | closed | closed in 2013/14 |
| 2007 | 4 | 4 | 0 | 4 | 0 |
| 2008 | 4 | 4 | 0 | 4 | 0 |
| 2009 | 108 | 5 | 4 | 103 | 0 |
| 2010 | 111 | 17 | 17 | 94 | 2 |
| 2011 | 109 | 18 | 15 | 91 | 13 |
| 2012 | 55 | 25 | 10 | 30 | 13 |
| 2013 | 25 | 14 | 6 | 11 | 9 |
| 2014 | 33 | 28 | 14 | 5 | 5 |
Currently 2 of the cases from 2014 are untouched in the OTRS, waiting for an initial CM to initiate them. At the end of June 2014, the end of the business year, 7 cases were untouched in the OTRS.
The next table gives a more detailed overview of arbitration activities over the course of the report year.
Arbitration cases opened and closed in July 2013 - June 2014, per month

| Month | opened | still open | closed in this month |
| July | 2 | 1 | 1 |
| August | 1 | 1 | 0 |
| September | 3 | 3 | 1 |
| October | 1 | 0 | 0 |
| November | 4 | 1 | 22 |
| December | 5 | 2 | 4 |
| January | 10 | 10 | 4 |
| February | 3 | 2 | 3 |
| March | 5 | 3 | 3 |
| April | 8 | 6 | 1 |
| May | 2 | 2 | 0 |
| June | 6 | 6 | 4 |
In the first 4 months there was little activity in either direction. In November there was a drastic increase of closed cases, while in January there was a peak of new cases. 10 cases is quite high, even compared to the times when all delete-account cases were handled by arbitration (most of them are done by support now).
Taking this into account, the rate of incoming disputes since about the beginning of 2014 is quite high compared to former years.
At the same time the rate of closed cases also increased compared to the previous reporting year. In 2014 alone we closed as many cases as were closed between July 2012 and June 2013 (19). In November 2013 even more cases were closed than that (22). This was thanks to a small "task force" which addressed most of the open "delete assurer account" cases.
Overall 43 cases were closed in the report year, while 50 were opened.
Overview of the new cases per subject
The new disputes filed in July 2013 - June 2014 can be roughly categorised as:
- 7 CCA violations
- 2 informing members
- 7 assurance issues of any kind
- 1 infrastructure / support issue
- 6 SQL query for DB analysis / fixes
- 4 death cases
- 7 account issues
- 3 close assurer account
- 1 ABC (interview audited)
- 3 cases based on software problems of any kind
- 1 check for power abuse against a member (driven by auditor)
- 1 exclude member from CAcert Inc.
Closed cases with publicly relevant Arbitration decisions
Below is a list of cases that were closed between July 2013 and June 2014 with subjects or rulings that may be of general interest, together with a short description. (Please read the original cases/rulings if you want to refer to anything here. These are only attempts to summarise, which are open to misinterpretation.)
Appeals
a20110310.1 - appeal against a20100212.2
Appeal against a ruling in a death case, that all R/L/O fall back to CAcert after the death in that case.
This appeal was rejected.
Policy Group was asked to include the death of a community member into the policies.
link to case a20110310.1
Precedents Cases
a20101025.1 - removal of posts from mailing list
Posts to CAcert mailing lists may be removed, if
- the original author requests removal with a valid reason
- anyone claims his/her personal data was published and the author agrees to the deletion
- it is an emergency action (to be confirmed by Arbitration)
- it is based on an Arbitrator ruling.
ABCs
a20130125.1 - ABC on Jochim S
Congratulations to Jochim S on passing the ABC.
As Jochim S is not a person required by the SP to get an ABC, the case was also about the question whether other persons than those mentioned in the SP may or should get an ABC.
The ruling was that yes, the ABC is open to other CAcert personnel as well, but it is not required. link to case a20130125.1
SQL queries done on the DB
a20131210.1 - Find out some information about when accounts were created
Support was allowed to request the first member id of each year.
The reason is that for some cases it is relevant to know at what time a member joined, in order to decide which policies may have applied to the member (CCA, DRP, AP, ...). This is relevant in some support cases and often in arbitration cases. link to case a20131210.1
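The query itself is trivial once the schema is known. A minimal sketch, assuming a hypothetical `users(id, created)` table with sample data (the real CAcert schema and values differ):

```python
import sqlite3

# Hypothetical minimal schema; the real CAcert database looks different.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, created TEXT)")
conn.executemany(
    "INSERT INTO users (id, created) VALUES (?, ?)",
    [(1, "2003-07-01"), (57, "2004-01-03"), (1200, "2004-11-20"),
     (9001, "2005-02-14")],
)

# First member id of each year: reveals join-date ranges without
# exposing any per-member data.
rows = conn.execute(
    "SELECT strftime('%Y', created) AS year, MIN(id) AS first_id "
    "FROM users GROUP BY year ORDER BY year"
).fetchall()
for year, first_id in rows:
    print(year, first_id)
```

Given a member id, a binary comparison against these boundaries then tells which policies were already in force when the account was created.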
a20131207.1 - Request for Analysis of Data Consistency
The following questions about assurances entered in the Data Base were allowed and answered:
- Do all user account IDs mentioned in the database actually exist?
- in 588 assurances the assurer does not exist
- in 395 assurances the assuree does not exist
- Do all user accounts mentioned as the granter's user account ID have the assurer flag set?
- in 37989 assurances the assurer flag of the assurer is currently not set.
- side note: in 2009 the assurer flag was removed for some assurers because of changed requirements in the AP for becoming an assurer
- Is none of the user accounts mentioned as the granter's user account ID marked as deleted?
- In 138 assurances the assurer is marked as deleted.
- Are there assurances granting points to user ID 0?
- no
- Are there assurances granting points by user ID 0?
- yes, 556.
- Do all assurances specify a value for the method that is documented and covered by a policy applicable at the time the assurance was made?
This created a bigger table; see the case file: a20131207.1
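The consistency questions above boil down to simple joins. The following sketch reproduces three of them against a toy schema (all table and column names here are assumptions for illustration, not the real CAcert schema):

```python
import sqlite3

# Toy schema and data (assumption): the real CAcert DB differs.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, assurer_flag INTEGER, deleted INTEGER);
CREATE TABLE assurances (id INTEGER PRIMARY KEY, assurer INTEGER, assuree INTEGER);
INSERT INTO users VALUES (1, 1, 0), (2, 0, 0), (3, 1, 1);
INSERT INTO assurances VALUES (10, 1, 2), (11, 2, 1), (12, 99, 2), (13, 3, 2);
""")

# Assurances whose granter does not exist at all:
missing_assurer = db.execute(
    "SELECT COUNT(*) FROM assurances a "
    "LEFT JOIN users u ON u.id = a.assurer WHERE u.id IS NULL"
).fetchone()[0]

# Assurances granted by an account without the assurer flag:
flag_not_set = db.execute(
    "SELECT COUNT(*) FROM assurances a JOIN users u ON u.id = a.assurer "
    "WHERE u.assurer_flag = 0"
).fetchone()[0]

# Assurances granted by an account marked as deleted:
by_deleted = db.execute(
    "SELECT COUNT(*) FROM assurances a JOIN users u ON u.id = a.assurer "
    "WHERE u.deleted = 1"
).fetchone()[0]

print(missing_assurer, flag_not_set, by_deleted)
```

The same join pattern, with the column swapped to `assuree`, answers the corresponding questions for the receiving side.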
a20131124.2 - SQL Request for analysing assurance data for wrong entries
A support member stumbled over assurances with extremely high assurance points. He asked about the number of assurances with assurance points outside of the currently allowed range 0 - 35. It was allowed to assure more than 35 points in some cases before the current Assurance Policy was in place.
This was answered by an SQL query. The result can be seen in the case file a20131124.2.
Later the 8 most extreme assurances, which were in any case outside of the normal range, were brought in line with the current AP after contacting the assurers and assurees and asking for their OK.
Since then we do not have any assurances with negative assurance points or more than 150 assurance points. link to case a20131124.2
Clarifications around assurances
a20121228.1 - Abuse of position
The case originated around an incident at 29C3. However in the course of the case a lot of questions were raised by the claimant and a respondent to clarify some core questions about assurances.
The answers from the Arbitrator were in short:
The Privacy Policy (PP) is referenced by the Assurance Policy (AP); it applies to every assurance.
Any assurance process begins with the agreement of the parties (assurer, assuree) to undergo an assurance process based on the Assurance Policy. By this (and by signing the corresponding statements on the CAP form) every assurance is based on the AP. There is no need to specially state this to the assuree.
- As PP is part of every assurance and every assurance is covered by AP (and through this PP) right from the beginning, CAP forms fall under the PP right from the start of any assurance process. So even if assurances are not entered, the CAP forms have to be protected against unauthorised access by third parties.
- The rule of thumb about the precautions to be taken is: "Use your brain and look out for the best safety for your CAPs in the given situation." It is hard to say more, because it depends on the situation. But "Keep the personal information covered up when you do not need access to it right now." could be seen as a baseline requirement.
- If a community member detects filled CAP forms lying around, they may move them to a safer location with the intention to hand them back to the owner as soon as possible. (Side note: this does not allow for intentional privacy breaches by the finder.)
a20140204.1 - Clarify validity of a Passport with fake stamps for assurances
Fake stamps in a passport do not change the validity of the passport as a document for an assurance. link to case a20140204.1
Potential abuse of power
a20121228.1 - Abuse of position
The case was filed because of actions done by a member of the arbitration team and an event organiser at 29C3.
The actions were found to be in accordance with our policies. No abuse of power or other greater misbehaviour were found on the side of the respondents. link to case a20121228.1
a20140324.1 - Arbitration against the removal of Dominik G from all mailing lists
The case was filed by the Auditor because of a mail that indicated that a person who resigned was removed from all mailing lists.
This indication was found to be false. The person was only removed from lists he had asked to be removed from. The author of the original mail clarified his mail openly. link to case a20140324.1
Activity of the team members
During the report period one CM left the team and one CM/Arbitrator joined the team.
Arbitration cases closed in July 2013 - June 2014 per CM / Arbitrator

| Name | CM | Arb |
| | 1 | 6 |
| | 2 | 2 |
| | 14 | 15 |
| | 1 | 0 |
| | 1 | 0 |
| | 13 | 14 |
| | 4 | 0 |
| | 2 | 3 |
Besides this there was naturally also activity in other cases which could not yet be closed, but it is hard to sum this up in a meaningful manner.
Other activities
There were some changes to the Lessons:
- Lesson 4 (Initialising new Arbitration case) was reviewed completely and streamlined.
- Lesson 81, Lesson 80 and Lesson 26 together with Appendix02 were created.
- some others received editorial additions
Two cases / rulings initiated discussions in Policy Group and are waiting for decisions there. It looks like they will lead to changes in the CCA and DRP.
During the previous report year there were regular arbitration meetings, but they stopped at some point in 2013. There was one attempt at an arbitration team meeting, set up by the DRO, but it was cancelled because not enough people were able to attend.
Additionally there were some irregular arbitration working sessions set up by team members.
The "in training" flag of Arbitrators was removed following a new training definition set up by the DRO.
Achievements
Achievement unlocked: "Power-Month: Close as many cases in one month as were closed during the rest of the year."
Achievement unlocked: "The Incident Response Team has worked well" (shared with other teams)
Critical System Administrator Team Report July 2013 - June 2014
Hardware changes
A major change was made to the hardware infrastructure for CAcert servers by replacing the obsolete bulky firewall boxes managed by Tunix by a new internal design from the critical system admin group with strong support from Stefan Kooman (secure-u access engineer). The new 1U solution is based on two alix2d1 boards donated by Systemhouse Mobach BV, running OpenBSD with pf as the main firewall. The second board is configured for live failover in case the first board fails. In conjunction with this new firewall setup, the switch setup has also been rationalized by reducing the number of boxes in use (save power!), and leaving a fully configured cold spare on-site in case of failure of the primary switch. Many thanks to Stefan Kooman for his excellent work on building and configuring the new firewall!
Another hardware change was the phasing out of the infra01 server (based on Sun X4100 hardware) in favour of a new infra02 server donated by Thomas Krenn. The new server has much more cpu, memory and disk capacity than the old one, yet uses significantly less electrical power, and thus contributes to the financial sustainability of CAcert's operations. The old server is still available as a cold spare in case the webdb/sun2 hardware breaks down.
However, the physically most demanding change was the move from the old full-height cabinet to a new half-height cabinet. By getting rid of old, bulky and/or power-hungry equipment, it was possible to fit the entire CAcert hardware infrastructure in much less space. After some negotiation with BIT, they were willing to provide us with an even better (cheaper) deal for the hosting than the previous one, while even increasing the available bandwidth. Many thanks to BIT for this excellent offer!
A major organizational change with respect to hardware was the transition from Stichting Oophaga (now defunct) to secure-u e.V. Besides relabeling, this change necessitated some updates to the CAcert Security Manual. The amount of feedback that we received in the past from Oophaga on billing was about zero; unfortunately, this has not really improved with secure-u as the new hardware owner and hosting contractor.
The only failing component during the reporting period was one of the (two) disks of the signing server. This disk has been replaced by a new 80 GB SATA drive donated by Systemhouse Mobach BV.
On-site activity
The log of visits to the hosting facility shows the following "on site" activities:
- [27.08.2013] restore operation of CAcert signing server after disk error
- [29.08.2013] replace disk of CAcert signing server and restore operation
- [05.09.2013] check completion of signer disk shredding and start another one; fix time of signing server
- [20.09.2013] check completion of signer disk shredding; reset sun3ilo
- [14.11.2013] install replacement cisco switch and update cabling; remove old switches
- [12.12.2013] migrate all CAcert equipment from old to new cabinet; replace Tunix firewall by our own; remove old equipment
- [06.05.2014] reboot firewall and install cabling for controlled remote access to the serial consoles of the firewalls; power off infra01; move USB backup disk from infra01 to infra02
The total number of visits (7) was exactly the same as in the previous year, and about 3 of these 7 visits could be labelled emergency visits. Two of them, the consecutive visits on 27 & 29 August 2013, were caused by an intermittent failure of the primary disk of the signing server. This was resolved by replacing the disk and shredding the old one.
Off-site activity
All other (i.e. most!) system administration work has been performed remotely. Issues directly affecting the operation of the webdb server continue to be logged to the cacert-systemlog@lists.cacert.org mailing list (archived at https://lists.cacert.org/wws/arc/cacert-systemlog ) with headings like "configuration change webdb server", "security upgrades webdb server" or "cvs.cacert.org checkin notification". This logging is also used for changes to all other services like DNS, OCSP etc. under critical-admin management. A total of 139 messages were posted on this mailing list during the year.
Webdb server
Since support for the Debian "Squeeze" (oldstable) release employed on this server ended at the beginning of June 2014, an upgrade process to the Debian "Wheezy" (stable) release was started. On June 24, the main server environment was upgraded to this release. Unfortunately, there are some issues with the CAcert application code running inside a chroot environment, which are currently blocking the upgrade of this chroot environment to the same release. As soon as these issues have been resolved by the Software Assessment team, we will also upgrade the chroot environment. Important benefits to be realized with the Debian Wheezy release in the CAcert chroot environment are a new version of openssl with up-to-date protocol support and newer Apache2 and PHP releases.
Other maintenance work on the webdb server during the reporting period involved:
- 28 installations of one or more Debian security updates
- 10 configuration changes
- 58 application software patch installations
- 1 certificate renewal
thus making a total of 97 critical admin interventions for this server (previous year: 66).
DNS service
The DNS service has been continued in more or less the same configuration as the previous year. A remarkable change was adding IPv6 support for our master name server, the only one still lacking such support. This was finally possible thanks to the firewall replacement discussed elsewhere in this report. Maintenance activities for this server boiled down to:
- 1 DNS software version update
- 7 configuration changes
- 1 installation of one or more OpenSuSE security updates
- 1 Key Signing Key rollover (for each of 3 zones)
- 36 zone file changes
thus making a total of 46 critical admin interventions for this server (previous year: 20).
OCSP and CRL service
The OCSP service and CRL services have been separated during this year. They are now each running as a separate Xen guest. While this entails a bit more OS maintenance work, the configuration advantages of this setup outweigh that disadvantage.
A rate limit has been implemented on the CRL server by configuring traffic control for this service. This was necessary because BIT has removed its obsolete (and not totally effective) rate limiting at their aggregated switch level. Unfortunately we cannot run this service without limits as it tends to quickly overrun the agreed bandwidth donated by BIT to CAcert.
We have also modified the mechanism to pull up-to-date Certificate Revocation Lists from the webdb server into the CRL and OCSP servers. These lists are now retrieved with RSYNC rather than via HTTP. Advantages of this new scheme are:
- reduced amount of data transfer (rsync only transfers the differences);
- prevent potential misuse of the special URL to retrieve the master copy (there is no special URL anymore).
We are also offering all CAcert users the option of retrieving CRLs from the CRL server with RSYNC instead of HTTP. Users can thus benefit from much shorter and faster updates, while at the same time reducing our bandwidth load. The facility was documented and promoted in a blog entry https://blog.cacert.org/2013/10/efficient-method-for-frequent-retrieval-of-crls/ , but it has seen very little use until now.
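A rough illustration of why delta transfer pays off for CRLs: consecutive CRLs are mostly identical, so only the changed tail needs to travel. The block comparison below is a crude stand-in for rsync's rolling-checksum algorithm, not its actual implementation, and the sizes are invented:

```python
# Count fixed-size blocks of `new` that differ from `old`
# (a simplistic stand-in for rsync's delta-transfer algorithm).
def changed_blocks(old: bytes, new: bytes, block: int = 1024) -> int:
    n = 0
    for off in range(0, len(new), block):
        if new[off:off + block] != old[off:off + block]:
            n += 1
    return n

old_crl = b"A" * 100_000                # yesterday's CRL (made-up content)
new_crl = old_crl + b"B" * 512          # today's CRL: one appended revocation
total = -(-len(new_crl) // 1024)        # ceil division: blocks sent via plain HTTP
delta = changed_blocks(old_crl, new_crl)
print(total, delta)                     # HTTP resends everything, rsync only the tail
```

With a real multi-megabyte CRL and daily updates, the ratio between full and delta transfer becomes far more dramatic than in this toy example.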
Maintenance activities for the OCSP and CRL services boiled down to:
- 1 OCSP software changes
- 2 installations of one or more OpenSuSE security updates
- 4 certificate renewals
thus making a total of 7 critical admin interventions for these servers (previous year: 6).
Last year we reported that the availability of the CRL service had been decreasing over the year. This was attributed to a number of factors:
- the CRLs are growing larger and larger as more certificates are revoked, but all revocations are kept on the CRLs, including those for certificates which have expired;
- the number of consumers for these CRLs is increasing, in particular a number of consumers which attempt to retrieve the CRLs at a much higher frequency than really sensible (once per week should be OK for most purposes);
- the resulting heavy traffic is causing congestion in the external firewall from time to time.
Note that we are routinely pushing out over 150 GB of data *per day* from just this server. During this year the situation appears to have stabilized a bit, partly thanks to the extra bandwidth donated by BIT, and partly thanks to a modest drop in the amount of CRL requests.
Last year we also presented three methods for attacking this problem. Two of these have been implemented during the past year:
- provide an rsync service for retrieving fresh CRLs much more efficiently than with http;
- replace the external firewall by a modern more efficient engine.
The most fundamental method of attacking the problem is still open though, as it entails some fundamental changes in the operation of the CAcert signing server:
- reduce the size of the CRLs by excluding expired certificates.
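The proposed reduction can be sketched as a simple filter. Here a CRL is modelled as plain tuples rather than a real DER structure, and the serials and dates are invented for illustration:

```python
from datetime import datetime, timezone

# A CRL entry is modelled as (serial, revocation_date, cert_expiry).
# The idea from the report: an expired certificate is invalid regardless
# of revocation, so its entry no longer needs to be published on the CRL.
def prune_crl(entries, now):
    return [e for e in entries if e[2] > now]

now = datetime(2014, 7, 1, tzinfo=timezone.utc)
crl = [
    (101, datetime(2012, 1, 1, tzinfo=timezone.utc), datetime(2013, 1, 1, tzinfo=timezone.utc)),
    (102, datetime(2014, 2, 1, tzinfo=timezone.utc), datetime(2015, 2, 1, tzinfo=timezone.utc)),
    (103, datetime(2013, 5, 1, tzinfo=timezone.utc), datetime(2014, 5, 1, tzinfo=timezone.utc)),
]
pruned = prune_crl(crl, now)
print([serial for serial, _, _ in pruned])
```

In practice this filtering would have to happen on the signing server when the CRL is generated, which is why it is the most invasive of the three methods.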
Backup service
The boxbackup server has also been continued unchanged, with maintenance activities consisting of an operating system upgrade to Debian Wheezy, and a number of smaller interventions:
- 19 installations of one or more Debian security updates
- 3 configuration changes
thus making a total of 22 critical admin interventions for this server (previous year: 1).
Firewall
The old external firewall was managed and operated by Tunix, as a donation to CAcert. However, the critical admin team was responsible for providing the correct configuration instructions to Tunix for the firewall management. In the past year 1 firewall change request was generated and monitored (previous year: 3). In addition, a discussion has been conducted with Tunix regarding the complete phase-out of their firewall in favor of our own.
The design and implementation of the new firewall has taken place over the period June through December 2013, with most activity in the last three months. The main features of the new firewall have already been discussed at the start of the report. Maintenance of the new firewall has boiled down to three key components:
- the pf ruleset - 24 changes
- the relayd configuration (for OCSP and CRL) - 1 change
- the unbound configuration (for internal DNS service) - 5 changes
thus making a total of 30 critical admin interventions for the new firewall.
Until now we have suffered one outage of the new firewall, due to a temporary lack of resources for running relayd. In this particular case, automatic failover was not effective, and manual failover only worsened the problem, thus necessitating a site visit. At that time an interface to the serial consoles of the firewall boards has been installed, so we can solve this problem remotely if it ever occurs again.
Monitoring
Our primary external monitoring remained based on the use of a private server of a critical team member, with the limitations implied by that. Efforts have been made during the past year to install a proper external monitoring system, but so far this has not resulted in a working solution. The Raspberry Pi server donated by Juergen Bruckner was up to now not stable enough to do the job. A second offer made for a well-connected VM has not materialized yet.
Infrastructure support
After receiving the new infra02 server hardware, it has been configured with a basic Debian Wheezy installation by the critical admin team and installed in the new BIT hosting cabinet. After handover to the infrastructure team, very little support has been required from the critical admin team for this server.
Software Assessment Team support
We continued to support the Software Assessment Team by maintaining a test server (on a virtual machine) which resembles the production webdb server as closely as possible. A second, similar test server is also maintained for special critical system tests and the preparation of major software upgrades. This second test server has been fully upgraded to Debian Wheezy, including the CAcert chroot application environment, and is available for testing.
Both test servers were migrated from an external VMware environment to an LXC container on the new infra02 infrastructure server. Many thanks to Mario Lipinski for arranging and implementing this migration!
The patch process developed by the Software Assessment Team has resulted again in a significant number (58) of successful patch updates to the production server (previous year: 54).
Events team support
From time to time the events team wants to inform CAcert members about important events like Assurer Training Events and the like. These mailings are performed by adding a custom script to the webdb server and running it against the current database. Based on arbitration http://wiki.cacert.org/Arbitrations/a20090525.1, such scripts are prepared by the events team and handed over to the critical admin team for installation and execution. 6 cases were handled in the past year.
One huge mailing was also executed by the critical admin team, for informing the CAcert membership as quickly as possible about the impact of the infamous OpenSSL Heartbleed bug:
The script ran from April 9, 10:45 until April 10, 18:37 CEST. A total of 168977 messages was sent out, for a total userid base of 290146 entries.
According to the postfix mail statistics, a total of 170213 e-mails were sent during this period (including regular webdb service mails). For 22414 of these e-mails, delivery problems were reported.
At this moment (April 11, 11:30 CEST) there are still some 3700 e-mails queued for possible delivery later (the regular queue size is more like 50 - 100 e-mails).
Interaction with other teams
From time to time the critical admin team also receives requests from other CAcert teams like Support and Arbitration, which we try to handle as quickly as possible. The total number of e-mails processed or generated by the critical admin team during the reporting year amounts to around 1000.
Team changes
There were no team changes in the past year.
Plans
Plans for the coming year include:
- prepare system software upgrades (Debian Sid, OpenSuSE 13.1)
- improve availability of OCSP and CRL services
- implement performance monitoring for the new firewall
- improve external system monitoring
- expand and improve server documentation
- look for strengthening of the sysadmin team
Wytze van der Raay, Mendel Mobach, Martin Simons
Critical System Administrator Team
Infrastructure
During this financial year all systems, except the mail systems, got the urgently required upgrade to the current Debian version, Wheezy. For the mail systems a completely new setup is anticipated, and we are currently in contact with community members who have offered their help.
Starting in September we were expecting a shortage of disk space. While we could free up some space to keep all services available to community members, we could not always react in time, resulting in some service interruptions. Further development of the infrastructure was slowed down by these issues. The disk space issues were resolved in January by the migration of all infrastructure services to the new host (see below).
The software systems (the test1 and test2 servers) including the GIT repository used by the software team, were also moved to the new infrastructure host (see also Critical report).
In April we quickly handled the Heartbleed issue by updating affected containers and the host system and issuing new server certificates and corresponding private keys.
Due to personal time constraints, the infrastructure team leader currently lacks an overview of the team status. Help is possibly needed in some places, and a replacement for the infrastructure team leader should be considered.
Infrastructure Host
With new hardware, generously donated by Thomas-Krenn.AG, we solved our disk space issues for CAcert Infrastructure that started in September. We changed the setup of the infrastructure host to have separate LVM volumes for each LXC container. With this setup we can avoid issues where one container can exhaust the disk space of other containers. All infrastructure containers were moved to the new host by Jan in January. The new host has a lot more system resources than the old machine which results in better overall performance.
IPv6 support is still an open issue and Jan did not have enough time to investigate routing issues. Help for this issue is highly appreciated.
System Monitoring
State of the current system
A system based on Debian Stable with etckeeper. The installed monitoring software is Icinga with some Nagios scripts. The configuration of the monitoring system started at the end of 2013.
The monitoring includes the following tests:
- HTTP
- HTTPS, including SSL certificate expiry date
- check for operating system and security updates
- check for database servers like MySQL
- check for mail server services like SMTP, POP3S, SIMAP
- check for IRC server
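The certificate-expiry part of such a check can be sketched as follows. `check_cert_expiry` and the 30-day warning threshold are assumptions in the spirit of a Nagios/Icinga plugin; the `notAfter` string uses the format returned by Python's `ssl` module, which in a real check would come from a live TLS connection:

```python
import ssl
from datetime import datetime, timezone

WARN_DAYS = 30  # assumed threshold; site policy may differ

def check_cert_expiry(not_after, now):
    """Return a (status, days_left) pair from a notAfter string
    in the ssl.getpeercert() format, e.g. 'Jul 20 12:00:00 2014 GMT'."""
    expiry = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(not_after), tz=timezone.utc)
    days_left = (expiry - now).days
    if days_left < 0:
        return ("CRITICAL", days_left)
    if days_left < WARN_DAYS:
        return ("WARNING", days_left)
    return ("OK", days_left)

now = datetime(2014, 7, 1, tzinfo=timezone.utc)
print(check_cert_expiry("Jul 20 12:00:00 2014 GMT", now))  # ('WARNING', 19)
print(check_cert_expiry("Jul 10 12:00:00 2015 GMT", now))  # ('OK', 374)
```

The status words map directly onto the exit codes a Nagios-style plugin would return (OK=0, WARNING=1, CRITICAL=2).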
The following systems are now being monitored:
- Infrastructure Host
- Arbitration
- Blog
- Board
- Issue Tracker
- CATS
- CRL Service from Critical
- IRC
- Ticket System
- LDAP
- Monitoring System itself
- Keyserver
- SVN
- Testserver
- Translations
- Portal
- Webmail
- Wiki
Prospects for next year
- build notification groups
- setup another monitoring system for redundancy and reliability
Achievements Unlocked
Achievement unlocked: Total monitoring
New Achievements Available
Notification
Synchronizing Keyserver (SKS)
A new system based on Debian Stable, with some essential packages pulled from Testing (the actual service part), has been set up, pulling in publicly available PGP keys for verification purposes (to be used when reworking the OpenPGP key signing part of our software). Only a few selected sources (secure-u and one other external server) are allowed, to keep tight control over which keys are transferred to this system. With this system, additional revocation checks for publicly available OpenPGP keys can be implemented, strengthening the overall checks done before signing a key.
The system is not in productive use yet, but will be incorporated when the OpenPGP part of the software is due for being replaced.
Although the system IS publicly reachable, it is not primarily intended for use by everyone; the system is explicitly configured to keep it out of the general SKS pool. If this were to be changed, an additional copy of this system should be set up.
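Restricting which servers may gossip with an SKS instance is done through its membership file, with one `host port` entry per line; peers not listed there cannot synchronize keys with the server. A sketch with placeholder hostnames (the real peer list is not reproduced here):

```
# membership file in the SKS base directory:
# only the peers listed here may synchronize with us.
# Hostnames below are placeholders, not the actual configured peers.
keyserver.secure-u.example 11370
other-allowed-peer.example 11370
```

Port 11370 is the default SKS reconciliation port; keeping this list short is what gives the tight control over incoming keys described above.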
Achievement unlocked: Ready before use!
Software Development Team
The software development team including software testers, developers and assessors continued their work to improve and fix the existing CAcert software. Over the course of this year we had constant support from a number of people and near the end of the year gained two valuable developers that will be assisting with major projects of the next year.
Team
Some testers came and went, while a core tester group continues to work away in each weekly "software assessment" meeting, so tests were done in a timely fashion. Marcus and Magu, who previously were mainly involved in testing the software, now did more development work, preparing changes for later review by the software assessors. Additionally, Eva has become active in the testers group and recently started contributing her first patches. With about 2.5 active software assessors, proposed changes get reviewed continuously, but there is certainly room for improvement (mea culpa). With the changed workload situation and the many arbitration cases that needed handling, it was sometimes hardly possible to keep up with the day-to-day work in the bug tracker.
Statistics
Over the past year we have resolved 182 issues while "only" 101 new ones were opened. That means we are down 81 open issues compared to last year. Of these 182 resolved issues, 84 resulted in a patch request to the critical admin team. But we can certainly improve on the average time until a bug gets fixed, which is 1,005 days at the moment. If you need more statistics, just head to the statistics page on our bug tracker.
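The issue-count arithmetic above can be sanity-checked with a short sketch; the figures are the ones quoted in this report, and the variable names are illustrative:

```python
# Figures quoted in the statistics above (July 2013 - June 2014).
resolved = 182    # issues resolved this year
opened = 101      # new issues opened this year
patches = 84      # resolved issues that produced a patch request

# Net reduction in open issues compared to last year.
net_change = resolved - opened
print(net_change)  # 81

# Share of resolved issues that reached the critical admin team.
print(round(patches / resolved * 100))  # 46 (percent)
```

So just under half of all resolved issues required a change on the critical systems.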
Achievements Unlocked
- Submit 80 patches within one year
- Add and record CCA confirmation to all relevant parts of the software
- Account History Log
- Improved Collision Resistance when signing using SHA-512 (and others for Debian compatibility)
New Achievements Available
- Submit 100 patches within one year
- Use the new point calculation in all relevant parts of the software (ca. 15% complete)
- Support the new roots project
- Load balancing and traffic optimisations for revocation information (OCSP and CRLs)
- Implement the password reset by Assurance in the system
- Rewrite the software from scratch
- Rewrite the signer and CommModule from scratch
Benny Baumann
Deputy Software Assessment Team Leader
PublicRelations
The PR team has issued several press releases, starting with general news
about CAcert; we were very active in informing early about the Heartbleed
bug, and we addressed Ubuntu's and Debian's deletion of our root keys
from their distribution packages by presenting our internal audit.
Besides simply writing press releases, we also share them on Twitter,
Xing, LinkedIn, and Google+, from where we get nice responses, so these
platforms are important for us.
The next task is to find an answer to DANE, a new and rapidly evolving
crypto/security standard.
Some good news: since March 26, Alexander and Marcus are proud to
welcome two new team members: Martin Gummi and Benny Baumann.
Our distribution list for press releases could be vastly extended thanks
to a number of relevant contacts and addresses introduced by
Martin Gummi. Marcus Mängel and Alexander Bahlo got in contact
with personal contacts at Heise and Golem (a renowned German online
news service for IT-related topics).
Education
Management of CATS and the Assurer Challenge
CATS has been running quite smoothly during the last year; to my knowledge there are currently no significant bugs. But there has also been no progress towards new functionality (like improved handling of translations), nor towards a new set of questions or new languages. There is still no interface for Education to verify that a certificate applicant has collected 100 Assurance Points, so Support has to be contacted for every certificate request. The usual statistics for the period July 2013 to June 2014:
- 3288 tests have been taken: 1351 English Assurer Challenges, 1964 German ones and 93 Triage Challenges
- 1701 of the Assurer Challenges had at least 80% correct answers and are therefore counted as passed
- 1196 different users (that is, different certificates used to log in) have passed the test at least once
- 233 users tried the test at least once but don't have a successful test recorded
- On average, those who passed the test had about one (more exactly: 0.91, compared to last year's 0.92) unsuccessful try before passing.
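The 80% threshold used in these statistics can be sketched as a tiny helper; the function and the 25-question example are illustrative, not actual CATS code:

```python
def challenge_passed(correct, total):
    """Pass rule quoted above: at least 80% correct answers.
    Illustrative sketch, not the actual CATS implementation."""
    if total <= 0:
        raise ValueError("total must be positive")
    return correct / total >= 0.8

# Assuming a hypothetical 25-question challenge:
print(challenge_passed(20, 25))  # True  (exactly 80% still passes)
print(challenge_passed(19, 25))  # False (76% is below the threshold)
```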
Prospects for next year
More or less the same as last year:
- Finish the started translations of the CATS tests and user interface.
- Extend and update the pool of questions for the Assurer Challenge, especially in the area of Arbitration
- Support Event Organisation in improving and extending the present materials for ATEs (see SVN)
- Improve the CATS admin interface so editing questions and answers is a bit more comfortable.
- Improve the CATS database structure and admin interface to give better support for handling questionnaires in different languages
While I don't want to drain manpower from the software team, trying some development of CATS may be a nice area for showing your skill level before switching to the "main" development team...
EventsTeam
ATE Team
Assurance
Organisation Assurance Team
New Root & Escrow Project (NRE)
The team held several meetings via telephone conference. Part of those meetings was to build a team to set up software for project management.
- Build a team for the rewrite of the WebDB software
Martin Gummi
CAcert New Roots & Escrow Project Team Leader
BirdShack