An informal analysis of vendor acknowledgement of vulnerabilities

Many disclosure debates focus on researchers who discover vulnerabilities. Little attention is given to the impact on busy security analysts who must determine which vulnerabilities exist, and if they can be patched. There is little or no emphasis on the role of vendors of the vulnerable software. Given continued discussions of vulnerability disclosure practices, most recently regarding vendor contacts on the PEN-TEST list, we decided to offer some results of an informal analysis we performed in October 2000. We also make some recommendations for improvements.

In anticipation of the Guardent/eWeek vulnerability summit in early November 2000, we conducted an informal experiment in which we tried to obtain the following information:

1) Whether the vendor publicly showed awareness of an announced vulnerability, and admitted that the problem was real (“vendor acknowledgement,” also referred to as confirmation).

2) Whether the vendor provided contact information for security problems.

Although our results are not quantitative, we do consider them to be useful in understanding some of the problems in assessing which vulnerabilities can be patched.


This is an informal analysis. It is the sole product of the individual authors, and does not represent an official position of The MITRE Corporation.

This analysis does not attempt to identify or address the various reasons for why vulnerability information may be incomplete or unavailable, as that topic has often been discussed in the past.

The analysis evolved as it was being conducted, and as such was not subjected to normal scientific rigor. Thus, it is not appropriate to list specific vulnerabilities, vendors, or researchers.

This analysis focuses on a test set of “worst case” vulnerabilities. It is not necessarily typical of all vulnerabilities or all vendors.

Basic Conclusions

1) Without a standard location for security vulnerability information on vendor web sites, it is often extremely difficult to even find the appropriate page where vulnerability and patch information might be discussed.

2) Without a standard contact name for asking questions about vulnerabilities, it is difficult for a security analyst to find out which individuals or groups at a vendor web site have the information needed. This problem also makes it difficult for the vulnerability researcher to reach the right people.

3) The customer-focused nature of vendor web sites often makes it difficult for security analysts to access vulnerability information, even if the site has that information. Security analysts normally are not the vendor’s customers.

4) Frequently, there is no apparent public acknowledgement of the vulnerability, or the web site is too difficult to navigate.

5) In some cases, the vendor may or may not have acknowledged the vulnerability, but the vendor’s information is too vague to be certain.

6) When the researcher’s original report did not include detailed vendor and product information (such as URLs and version numbers), it made it more difficult for the security analyst to find the proper vendor web page to examine for acknowledgement.

Focus of the Analysis

We examined a test set of approximately 150 announced vulnerabilities. This test set EXCLUDED any vulnerability with a well-established advisory, e.g. from CERT, well-known software vendors, or well-known security companies with vulnerability analysis teams. Most were announced between 1997 and 2000. The most recent vulnerabilities had been publicized at least 2 months before the analysis took place.

Vendor acknowledgement of vulnerabilities was examined from the point of view of a “busy security analyst.” Assumptions were:

1) The analyst is responsible for identifying and mitigating IT security vulnerabilities in a relatively large enterprise with diverse platforms, applications, and security requirements. Thus, every announced vulnerability may need to be addressed in one way or another.

2) The analyst is busy, and so only has about 20 minutes to research each vulnerability. At a rate of approximately 100 new vulnerabilities per month – which is not unusual these days – that could consume almost one staff week per month, just seeing if vulnerabilities are “real” according to the vendor.

3) The analyst does not necessarily know if the researcher’s report is accurate, e.g. whether the vendor was properly contacted.

4) The analyst only wants to concentrate on vulnerabilities for which there is a known fix or workaround that is approved by the vendor.
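The workload arithmetic in assumption (2) can be checked in a few lines. This is a minimal sketch; the 40-hour staff week is our assumption, while the 20-minute and 100-per-month figures come from the text.

```python
# Back-of-the-envelope check of the "busy analyst" workload estimate.
MINUTES_PER_VULN = 20          # time budget per vulnerability (from the text)
VULNS_PER_MONTH = 100          # typical monthly volume (from the text)
MINUTES_PER_STAFF_WEEK = 40 * 60  # assumed 40-hour staff week

total_minutes = MINUTES_PER_VULN * VULNS_PER_MONTH        # 2000 minutes/month
staff_weeks = total_minutes / MINUTES_PER_STAFF_WEEK      # ~0.83 staff weeks

print(f"{total_minutes} minutes/month ~= {staff_weeks:.2f} staff weeks")
```

At roughly 0.83 staff weeks per month, “almost one staff week” is consistent with these figures.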

The basic task was to see if the analyst could find sufficient vendor acknowledgement of a vulnerability within a maximum of 20 minutes.

Process for Determining Acknowledgement

The six steps below comprise the basic method we used. Although carried out in a disciplined way, there were variations in each author’s approach. As we continued the work, the process itself evolved.


1) Only look for the vendor’s direct public acknowledgement. We did not consider a researcher’s claim that “the vendor has fixed the problem” to be sufficient support for vendor acknowledgement; a security analyst may not know how reliable the researcher is. A link to a patch was also considered weak evidence. A security analyst might not be a consumer of the software, so he or she might not be able to verify that a suggested patch actually fixes the problem, assuming there is enough time to test the patch.

2) Consult the original disclosure (typically a Bugtraq post) to see if the researcher provided any supporting information (vendor links, patches, etc.). See if there are any follow-ups by the affected vendor.

3) Consult the vendor web site. Look for web sections named “security,” “support,” “changes,” product-specific pages, etc. When information cannot be obtained in this fashion, try a keyword search, including keywords such as “security,” the type of vulnerability (e.g. “buffer overflow”), software name, and version number.

4) Consult supporting vulnerability databases for additional pointers if information cannot be found on the vendor web site. Two well-known online vulnerability databases were used, to see if they had conclusive fix information that satisfied the requirements in (1) above, or pointers to vendor web sites.

5) If acknowledgement cannot be determined, try to obtain a contact name from the vendor web site. (The “less busy” security analyst might want to ask the vendors for acknowledgement.)

6) Record the discovered information, including URLs visited.
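The six steps above amount to a time-boxed triage loop. The sketch below is a hypothetical illustration: the step callables stand in for the manual research sources (disclosure post, vendor site, vulnerability databases, vendor contact), and only the control flow – try each source in order, stop when the budget is exhausted, always record what was found – mirrors our method.

```python
import time

# Assumption from the text: the busy analyst budgets 20 minutes per vulnerability.
TIME_BUDGET_SECONDS = 20 * 60

def assess_acknowledgement(vuln, steps, budget=TIME_BUDGET_SECONDS):
    """Try each research step in order until one is conclusive or the
    time budget runs out. Each step is a callable returning 'confirmed',
    'possible', or None; the record dict implements step 6."""
    record = {"vuln": vuln, "urls_visited": [], "result": "unknown"}
    deadline = time.monotonic() + budget
    for step in steps:
        if time.monotonic() > deadline:
            break  # out of time; step 6 (record findings so far) still applies
        finding = step(vuln, record)
        if finding in ("confirmed", "possible"):
            record["result"] = finding
            break
    return record

# Hypothetical usage: the first source finds nothing, the second finds
# a clear vendor advisory.
steps = [lambda v, r: None, lambda v, r: "confirmed"]
print(assess_acknowledgement("example-vuln", steps)["result"])
```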

The information we collected did not lend itself easily to quantitative results. Therefore, we report our results as observations based on the information recorded.

High-Level Observations

The following high-level observations were made. Although many vendors faithfully report vulnerabilities, the focus of our work was on finding vendor confirmation within the 20-minute time frame of the “busy security analyst.” Therefore, these observations are not necessarily indicative of actual vendor reporting levels. The results reflect more the ease of finding the information, especially for vendors that do not have a well-established security advisory capability.

1) Approximately 15% of the vulnerabilities in the test set had clear vendor acknowledgement.

2) Approximately 15% of the vulnerabilities in the test set had possible vendor acknowledgement. This included vague vendor statements such as “it’s really important for customers to update their software,” or a “fixed a security bug” statement in a change log. It was not necessarily clear whether the problem being fixed was the same as the problem that was disclosed by the researcher.

3) For approximately 20% of the vulnerabilities, it was too difficult to find a page on the vendor web site that might include vulnerability information.

4) Unless a software vendor posted a follow-up to an email, it would normally take between 15 and 30 minutes for the security analyst to determine if there was any acknowledgement.

5) For approximately 1/3 of the vulnerabilities, the researcher said that they had contacted the vendor. For at least 1/3 of the vulnerabilities, the researcher did not say anything about contacting the vendor. Data is unavailable for the remainder of the vulnerabilities, but either (a) the researcher explicitly said that they did *not* contact the vendor, or (b) the researcher did not say anything about contacting the vendor.

6) Sometimes, the researcher provided little vendor information. This increased our workload by forcing us to search for information via common search engines. For example, if the researcher did not provide a URL to the vendor site, then we would have to search the WWW for the vendor and product. The search would be more difficult when the researcher didn’t spell the software or vendor names correctly. If the researcher didn’t provide a product link, then we would have to search the vendor site for product pages. If the researcher didn’t provide the version number of the affected software, then we would often have to do some additional searching; or, we wouldn’t be able to know if a vendor-reported fix was addressing the vulnerability found by the researcher.

7) Most vendor web sites were focused exclusively on the customer, instead of the security analyst. This could make it difficult or impossible for the analyst to obtain the appropriate security information.


Often, if a “security” web page was available on a vendor web site, the page would point to security features of the product – not security announcements. (This was also encountered when performing keyword searches for “security”.) Often, the “support” web page did not include pointers to security contacts, or security advisories. Sometimes, the “support” web page was inaccessible because we were not a customer, or the web site required registration. A busy security analyst would not necessarily want to register with every vendor whose software is used in the analyst’s enterprise.

8) Some vendor web sites did not provide any email address or phone number for support, instead relying on a web-based form. The form would often require information such as the version and platform of the product being used. This type of interface would make it highly inconvenient for a security analyst to ask the vendor for acknowledgement, if it is not already available on the web site.

9) Some freeware/open source vendors would be very clear in their acknowledgement of vulnerabilities, whether in a “recent news” page or in a change log.

If the freeware vendor was “small,” then normally the web site would be simple, and contact information would be easily accessible.

10) Sometimes, a change report would include a brief entry that was too vague to be certain whether the vendor fixed the vulnerability that had been reported by the researcher. If the researcher left out critical information such as the version number, it would be more difficult to determine if the vulnerability was fixed.

Sometimes, the vendor would describe a “serious bug” but not say that it was a security problem, so we could not be certain that the fix had security implications – or, which vulnerability was being fixed.

Sometimes, we were only able to determine that the vulnerability had been fixed by manual source code inspection. Often, however, there were no surrounding comments to indicate for certain that the source code had been patched.

Sometimes, several vulnerabilities in the same product and version would be disclosed, but we would only see acknowledgement of a single vulnerability. It’s possible that the vendor fixed all the problems in the same patch, but we could not be certain.

11) Often, when the vendor included cross references, this made it much easier to ensure that the proper vulnerability was being addressed. This was especially useful in cases in which multiple vulnerabilities were discovered in the same software product.

12) Many large vendor web sites were difficult to navigate, especially if the vendor site offered information on a large number of products. Just finding the product page could take a long time. We often found ourselves relying on the site’s search engine, where a keyword search for “security” would rarely produce useful results from an analyst’s perspective.

While a time limit for the analysis was set for each vulnerability, we sometimes found ourselves “lost” in these sites, following many false leads, to the point where we would lose track of time and exceed the 20 minute maximum. Finding the appropriate location in a web site was often frustrating, and it was often tempting to quickly give up.

13) Sometimes, on very large web sites, we visited more than 15 different URLs before giving up, especially if we had to rely on knowledge base or site-wide keyword searches.

14) If the vendor web site did not have any apparent acknowledgement, sometimes the vulnerability databases that we consulted did not have any additional fix or vendor information. This was probably a reflection of the lack of information on the vendor web sites. Sometimes, however, the vulnerability databases had pointers to vendor web site pages that we had not been able to find on our own before our time limit expired. This is perhaps a reflection of the complexity of the vendor web site.

15) We rarely encountered a case in which a vendor publicly acknowledged a vulnerability before a fix was available. This made it difficult to determine if the vendor was even aware of the problem. (Note that some vendors have said that many of their customers prefer this approach).

16) Even in cases in which a vendor had a well-established security advisory and contact mechanism in place, if a reported security vulnerability did not have an associated advisory, it would be difficult to determine if the vendor was even aware of the report. This was often due to the same site navigation problems that we encountered with other vendors that did not have advisory capabilities.

17) Some vendor web sites relied on bulletin boards for technical support. These boards were often difficult to navigate or search, but sometimes they were the only potential location in which acknowledgement might have appeared.

18) For vulnerabilities that were more than a year old, it was much more difficult to obtain acknowledgement, due to factors such as: (a) web sites were deleted or relocated, (b) the software was discontinued or no longer supported, or (c) the software was otherwise renamed, e.g. as part of a corporate acquisition.

19) Sometimes, the vendor would post an acknowledgement to an internal customer list, and that post would be forwarded to a vulnerability disclosure source such as Bugtraq. However, there would not be any apparent public acknowledgement of the vulnerability.

20) Often, the researcher would not include vendor contact information, or a record of their communications with the vendor, which could help a security analyst to determine if the vendor is aware of the problem and developing a patch. (The practice of including “communications logs” seems to be growing more prevalent since our analysis in October 2000.)

21) Many vulnerabilities in the test set had a high risk level, e.g. a buffer overflow that allowed a remote attacker to execute arbitrary code. Unfortunately, the most serious vulnerabilities in our test set did not necessarily produce an increase in the visibility of the vendor’s acknowledgement. In some cases, this might be due to an incorrect assessment of the actual risk level. For example, buffer overflows are often described as having only a denial of service impact. It is likely that many of those overflows could be forced to execute code by a sufficiently skilled programmer, and it is possible that the affected vendor did not realize the potential seriousness of the problem.

Some Suggestions for Improvement

Following are some suggestions for improvement. These suggestions are intended to make it easier for security analysts to obtain information from vendors. They build on suggestions that others have made in the past. Note that vendors may have specific reasons for not providing easy access to vulnerability information.

1) A vendor could make its security acknowledgements more easily accessible by placing them in a standard section of the web site, either (a) a security page, or (b) the support page. If the vendor offers multiple products, then it could provide a link to a security page from each product page.

2) If a vendor restricts access to some support pages, or requires registration, then it could provide their acknowledgement in a place that has full access to the public, which would make it much more convenient for security analysts. Note: this is only a recommendation for the acknowledgement of the problem; we recognize that the vendor may wish to restrict access to customer-specific resources, such as patches.

3) A vendor could feature a security contact more prominently on its web site. The contact could be listed under the security, support, and product pages. The vendor could provide a standardized alias for reporting security problems or asking for clarification, such as the aliases that RFPolicy 2.0 [1] suggests.

4) A vendor could annotate vulnerability-related web pages with standard keywords such as “security” and “vulnerability” to facilitate web searches.

5) If a vendor uses a web-based form for support, then it could provide options that allow non-customers to make requests or comments about security without requiring product license information, platform information, or other information that non-customers would not have.

6) A vendor could precisely identify which vulnerabilities are being fixed by using commonly accepted cross references [2].

7) If a vendor offers products that have security features, then the vendor could include a pointer to their security page.

8) A researcher could make it easier for security analysts by including the following information in the vulnerability report: (a) URL to vendor web site; (b) URL to product page; (c) URL to vendor acknowledgement, if available; (d) product version and platform information; and (e) record of communications with the vendor, especially who was contacted and how.

9) Finally, a Request For Comments (RFC) could be drafted and adopted so that there is established guidance for all participants in the vulnerability disclosure process, especially the vendor and the researcher.
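As a hypothetical illustration, the researcher-supplied fields in suggestion (8) could be captured in a simple structure such as the following. The field names are ours, not a proposed standard.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class VulnerabilityReport:
    """Hypothetical structure for the report fields listed in suggestion (8)."""
    vendor_url: str                            # (a) URL to vendor web site
    product_url: str                           # (b) URL to product page
    version: str                               # (d) product version
    platform: str                              # (d) platform information
    acknowledgement_url: Optional[str] = None  # (c) vendor acknowledgement, if available
    vendor_communications: List[str] = field(default_factory=list)  # (e) who was contacted, and how
```

A researcher who fills in these fields spares the analyst most of the searching described in observation (6) above.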


[1] RFPolicy 2.0. While this policy is focused on how researchers should participate in the disclosure process, it assumes certain lines of communication with vendors that often do not exist, such as the standard security email aliases.

[2] It should be noted that both authors are involved in a common cross-referencing effort.
