$Cambridge: hermes/doc/antiforgery/FTC-questions-2005-06.txt,v 1.2 2005/06/24 17:36:56 fanf2 Exp $

These questions are in response to:
http://www.ftc.gov/opa/2005/06/fyi0545.htm
"A tracking number of 516761-100004 has been assigned on: 6/24/2005 1:34:55"

Identify the Email Authentication Standard tested by the Company.

The University of Cambridge is the developer of the MTA software Exim.
Recent versions of Exim include experimental support for DomainKeys,
SPF, the CSA component of CSV, and BATV. The University's central
email service is running CSA checking in a live configuration.

Describe any modifications you made to the specification when you
tested it, and explain why the modification was made. Do you believe
that the test results would be the same if you used the published
specification? If not, how do you believe the results would differ?

The CSA code in Exim extends the specification to deal with sending
hosts that do not start the SMTP conversation with an EHLO or HELO
command, or which use an IP literal as the argument to EHLO or HELO.
In this situation Exim looks for CSA SRV records in the reverse DNS
under in-addr.arpa or ip6.arpa. These records are otherwise treated
the same as CSA SRV records in the forward DNS. This extension has
similar functionality to the MTAMARK proposal. A sketch of the lookup
order follows below.
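The following is a minimal illustration in Python (using the dnspython
library) of the lookup order just described. It is not the actual Exim
code: the "_client._smtp" SRV prefix comes from the CSA draft, but the
IP-literal test and the error handling are simplifying assumptions.

    import dns.resolver
    import dns.reversename

    def csa_lookup_name(helo_name, client_ip):
        """Choose the DNS name under which to look for CSA SRV records."""
        if helo_name and not helo_name.startswith('['):
            # Normal case: use the EHLO/HELO host name (forward DNS).
            return '_client._smtp.' + helo_name
        # Extension: no EHLO/HELO, or an IP literal such as "[192.0.2.1]";
        # fall back to the reverse DNS under in-addr.arpa / ip6.arpa.
        rev = dns.reversename.from_address(client_ip)
        return '_client._smtp.' + str(rev).rstrip('.')

    def csa_srv_records(helo_name, client_ip):
        """Fetch CSA SRV records. Records found via the reverse DNS are
        treated the same as records found in the forward DNS."""
        qname = csa_lookup_name(helo_name, client_ip)
        try:
            return list(dns.resolver.resolve(qname, 'SRV'))
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return []  # no assertion: authentication is indeterminate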
Describe what you tested (e.g., product functionality,
interoperability, etc.), and the process that you used to test it.
Include an analysis of how your testing process measured the
capabilities being tested, and an interpretation of the results.

Our testing so far has been fairly informal. Before deploying CSA on
our live systems, the code was subjected to thorough QA testing,
including integration into the Exim automated test suite. Live testing
included exchanging email with other members of the CSA working group
who have also deployed CSA.

Describe the test environment. In particular, how many servers were
used, what was the configuration of the computers and servers (size
and speed of disk, type and speed of the central processing unit
("CPU"), amount of memory), etc.?

Our current email relay consists of four servers, each with the
following configuration:

    2 x 3.2 GHz Intel Xeon CPUs
    3 GB RAM
    70 GB total SCSI battery-backed RAID 5 disk array

What special software and/or hardware were required to conduct the
testing, and how were all relevant components of the system (software
and hardware) configured for the test?

The only special software is an enhancement to Exim, developed by me,
which will be included in the 4.52 release next week.

How many email messages were tested?

We process about 1.25 million messages per day, of which about 1
million are spam.

From where did the tested emails originate? Were they live emails
transmitted in real time or fabricated "test" messages? If they were
live email messages, how did you treat them if they were not
successfully authenticated?

After my new code passed QA tests, all messages checked by CSA were
live emails coming from all over the public Internet. At the moment
CSA can respond in three different ways: messages sent via a
positively authenticated connection are accepted; messages sent via a
definitely unauthenticated connection are rejected; and messages with
indeterminate authentication (the vast majority) are also accepted. We
make a note of the authentication status in the message header.

Over what time frame did you conduct the testing?

We have been running CSA in production since the start of April (i.e.
for about 11 weeks).

Were prior tests conducted? If so, describe the parameters of the
prior testing and how the current testing differed, if at all. Also,
explain whether there were any reasons for modifying the testing
parameters and whether the results varied.

n/a

Identify what percentage of email traffic was successfully
authenticated.

At the moment CSA has very few users. In the last month we have
rejected just over 250 messages owing to CSA checks (compared to a
total of about 30 million rejections, so about one in 100,000), and a
sample of stored messages shows that the positive authentication rate
is about 15 per 12,000, or roughly 1 in 1000; however, this is
probably an overestimate because the sample is skewed - see the next
question.

If possible, explain whether any of the successfully authenticated
email messages were sent by spammers, phishers, or zombie drones? If
so, explain how you were able to make this determination.

The successful authentications are almost entirely email from CSA
working group members and Plymouth University, which are all
legitimate sources of email.

Identify what percentage of email traffic was not successfully
authenticated. Describe what part(s) of the authentication process
failed. Please explain why you believe it failed.

We discovered an interoperability problem caused by a weakness in the
CSA specification, which is being fixed by the working group. It was
unclear whether sites are permitted to publish multiple CSA SRV
records for a given MTA host name, and what SMTP servers are supposed
to do when they encounter multiple SRV records. This problem only
affected a few messages from another member of the CSA working group.
An illustration of the ambiguity follows below.
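To make the ambiguity concrete, here is one defensive way a server
might combine the verdicts from multiple CSA SRV records. This is
purely illustrative; it is not the resolution adopted by the working
group, and the verdict names are placeholders.

    def csa_combine(verdicts):
        """Combine per-record verdicts ('authorized', 'unauthorized',
        'unknown') into one result, failing closed on any negative
        assertion."""
        if 'unauthorized' in verdicts:
            return 'unauthorized'
        if verdicts and all(v == 'authorized' for v in verdicts):
            return 'authorized'
        return 'unknown'

    # Two conflicting records published for the same host name:
    assert csa_combine(['authorized', 'unauthorized']) == 'unauthorized'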
Describe whether your results indicate whether the methodology or
methodologies were scalable and why you believe they were scalable,
and to what scale (that is, how big)?

CSA does not impose significant load on MTAs, nor does it require new
databases to be maintained. It may require changes to DNS
administration procedures, but these are likely to help other email
security efforts too. I can't see any scaling problems with CSA.

Describe the CPU usage and network usage observed during the test. How
did this affect other uses of the system and network, if any?

The CPU and network usage is negligible compared to the other
anti-spam checks we are performing (such as DNS blacklist lookups,
content scanning, etc.).

If you manage or facilitate a tool for public testing, please describe
it and summarize any results of the public testing to date.

n/a

Describe any costs (e.g., financial, time required, personnel
required, etc.) involved in testing the email authentication
standard(s), including any costs associated with making modifications
to your existing system.

I spent about 40 hours implementing and deploying the CSA enhancement
for Exim. This kind of task is part of my normal duties, so the cost
is not accounted for separately.

If you have tested or plan to test Sender ID, did you check or will
you check both Purported Responsible Address ("PRA") and "MAIL FROM?"

n/a

If you have not checked the PRA records and will not in the future,
why not?

Sender ID is incompatible with email forwarding, which is a facility
used by about 7% of our 30,000 current users and by 20,000 alumni. The
PRA algorithm re-defines the semantics of Resent- header fields in a
way that is incompatible with RFC 2822 and RFC 822; it also requires
the RFC 2822 extension which permits multiple sets of Resent- header
fields, but most existing MUA and MTA implementations do not support
this extension.

Are you planning to test additional standards? If so, when and which
ones?

Yes, we are working on an implementation of BATV, which we hope to
have in service before October, and we will also deploy DomainKeys/IIM
when the combined DKIM specification has stabilized.

Describe what steps are required before your testing program is
completed.

We have not yet started publishing CSA SRV records for our domains.
This will be done in a couple of stages:

We can publish CSA SRV records for our mail domains without
significant difficulty. These domains are frequently used as the
EHLO/HELO host name stated in fraudulent connections from viruses and
spamware, so CSA will replace a heuristic which we currently use to
deal with this kind of junk.

CSA SRV records which make assertions about subdomains are slightly
more difficult to publish, because they will require an audit of email
systems throughout the University, and liaison work with computing
staff in other departments, etc.

We do not anticipate any problems with the deployment of DKIM message
signing. This will apply only to secure message submission, which we
do not currently require; however, by next summer we will require it,
at which point all email from our central email service will be
signed. At first, DKIM signature checks will only cause a message
annotation, until we have a better idea of its false positive rate.

Our BATV deployment will follow a similar route to DKIM, except that
it will require explicit opt-in from users, because we anticipate some
interoperability problems caused by the use of variable return paths.
At the moment we are planning to use a different signature format from
the one in the BATV specification. Our system will place the signature
in the domain part of the email address, which we believe will have
security advantages in the long term. (A sketch of this kind of scheme
follows at the end of this document.)

We intend to solicit feedback from our users as part of the evaluation
of the accuracy and interoperability of DKIM and BATV.
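Below is a minimal sketch of the kind of signed return path described
above, assuming a made-up label format: the standard BATV draft places
its "prvs" tag in the local part, whereas this variant encodes the
signature as an extra label in the domain part. The tag layout, key
handling, and expiry scheme are illustrative assumptions, not the
scheme we will deploy.

    import hashlib
    import hmac

    SECRET = b'site-signing-key'  # placeholder key, not a real secret

    def sign_return_path(local, domain, day):
        """Return a BATV-style signed address with the signature encoded
        as an extra label in the domain part (hypothetical format)."""
        mac = hmac.new(SECRET, f'{local}@{domain}/{day}'.encode(),
                       hashlib.sha1).hexdigest()[:8]
        # e.g. user@b-0123abcd-142.example.org
        return f'{local}@b-{mac}-{day}.{domain}'

    print(sign_return_path('user', 'example.org', 142))

One design consequence (our assumption, not part of the document
above) is that the per-message signed domains must still resolve, for
example via wildcard DNS records, which is one reason to expect the
interoperability problems mentioned earlier.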