XML Conformance Tests


XML 1.0 (Second Edition) errata 20020320,

W3C Conformance Test Suite 20020606

This version:
Current Version:
Previous Version:
Test Archive:
W3C XML Core Working Group:
Comments:

Table of Contents

  1. Introduction
  2. Test Matrix
    1. Binary Tests
    2. Output Tests
  3. Test Case Descriptions
    1. Valid Documents
    2. Invalid Documents
    3. Not-WF Documents
    4. Optional Errors
  4. Contributors

1. Introduction

The tests described in this document provide an initial set of metrics to determine how well a particular implementation conforms to the W3C XML 1.0 (Second Edition) Recommendation. The XML Conformance Test Suite is intended to complement the W3C XML 1.0 (Second Edition) Recommendation. All interpretations of this Recommendation are subject to confirmation by the W3C XML Core Working Group.

Conformance tests can be used by developers, content creators, and users alike to increase their level of confidence in product quality. In circumstances where interoperability is necessary, these tests can also be used to determine that differing implementations support the same set of features.

The XML Test Suite was transferred from OASIS to W3C and is being augmented to reflect the current work of the W3C XML Core Working Group, including resolved issues related to the Recommendation and published Errata. This report provides supporting documentation for all the tests included in the test suite. The tests have been collected from a number of sources; the individuals and organizations involved are credited in the Contributors section.

2. Test Matrix

Two basic types of test are presented here: binary tests and output tests.

2.1 Binary Tests

Binary conformance tests are documents which are grouped into one of four categories. For a document in a given category, each kind of XML parser must treat it consistently, either accepting it (a positive test) or rejecting it (a negative test). It is in that sense that the tests are termed "binary". The XML 1.0 (Second Edition) Recommendation talks in terms of two types of XML processor: validating and nonvalidating. There are two differences between these types of processor:

  1. Validating processors check special productions that nonvalidating parsers don't, called validity constraints. (Both must check a basic set of productions, requiring XML documents to be well formed.)
  2. Nonvalidating processors are permitted not to include external entities, such as files containing text. Accordingly, they may fail to report errors which would have been detected had those entities been read.

There are two types of such entity: parameter entities, which hold definitions that affect validation and other processing; and general entities, which hold marked-up text. It follows that there are five kinds of XML processor: validating processors, and four kinds of nonvalidating processor, one for each combination of external entity types they include.

Basic XML Parsing Test Matrix
(Test Document Type v. Parser Type)

                           Nonvalidating                    Validating
                           External         External
                           Entities         Entities
                           Ignored          Read
                           (3 cases)

Valid Documents            accept           accept          accept
Invalid Documents          accept           accept          reject
Non-WF Documents           reject           reject          reject
WF Errors tied to          accept           reject          reject
  External Entity          (varies)
Documents with             (not specified)  (not specified) (not specified)
  Optional Errors

At this time, the XML community primarily uses parsers in the rightmost two columns of this table, calling them Well Formed XML Parsers (or "WF Parsers") and Validating XML Parsers. A second test matrix could be defined to address the variations among the types of XML processor which do not read all external entities. That additional matrix is not provided here at this time.
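The matrix above can be transcribed directly into a lookup table for use in a test harness. The sketch below is illustrative Python; the category and parser-type names are invented here, not part of the test suite:

```python
# Expected processor behavior for each (document category, parser type)
# pair, transcribing the Basic XML Parsing Test Matrix above.
# Parser types (names are illustrative):
#   "nonvalidating-ignore"  external entities ignored (3 cases collapsed)
#   "nonvalidating-read"    external entities read
#   "validating"
EXPECTED = {
    "valid":           {"nonvalidating-ignore": "accept",
                        "nonvalidating-read":   "accept",
                        "validating":           "accept"},
    "invalid":         {"nonvalidating-ignore": "accept",
                        "nonvalidating-read":   "accept",
                        "validating":           "reject"},
    "not-wf":          {"nonvalidating-ignore": "reject",
                        "nonvalidating-read":   "reject",
                        "validating":           "reject"},
    "wf-error-in-ext": {"nonvalidating-ignore": "accept (varies)",
                        "nonvalidating-read":   "reject",
                        "validating":           "reject"},
    "optional-error":  {"nonvalidating-ignore": "not specified",
                        "nonvalidating-read":   "not specified",
                        "validating":           "not specified"},
}

def expected(category, parser_type):
    """Expected outcome for a binary test, per the matrix above."""
    return EXPECTED[category][parser_type]
```

A harness would run the parser under test on each document, classify the run as accept or reject, and compare against this table.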

2.2 Output Tests

The XML 1.0 (Second Edition) Recommendation places a number of requirements on XML processors, to ensure that they report information to applications as needed. Such requirements are testable. Validating processors are required to report slightly more information than nonvalidating ones, so some tests will require separate output files. Some of the information that must be reported will not be reportable without reading all the external entities in a particular test. Many of the tests for valid documents are paired with an output file as the canonical representation of the input file, to ensure that the XML processor provides the correct information.
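As an illustration of how an output test might be driven, the sketch below re-serializes a document from parser events using Python's expat, sorting attributes as canonical XML does. This is a deliberately rough approximation: real canonical-form output also covers entity expansion, character escaping, processing instructions, and more:

```python
import xml.parsers.expat

def rough_canonical(xml_bytes):
    """Very rough sketch of output testing: re-serialize the document
    from parser events (elements, lexically sorted attributes, character
    data) so the result can be compared with a reference output file."""
    out = []
    parser = xml.parsers.expat.ParserCreate()

    def start(name, attrs):
        out.append("<" + name)
        for key in sorted(attrs):      # canonical form sorts attributes
            out.append(' %s="%s"' % (key, attrs[key]))
        out.append(">")

    def end(name):
        out.append("</%s>" % name)     # empty elements are expanded too

    parser.StartElementHandler = start
    parser.EndElementHandler = end
    parser.CharacterDataHandler = out.append
    parser.Parse(xml_bytes, True)
    return "".join(out)
```

An output test then reduces to comparing `rough_canonical(input_file)` against the paired reference file, character for character.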

3. Test Case Descriptions

This section of this report contains descriptions of test cases, each of which fits into the categories noted above. Each test case includes a document of one of the types in the binary test matrix above (e.g. valid or invalid documents).

In some cases, an output file, as described in Section 2.2, will also be associated with a valid document; this file is used for output testing. If such a file exists, it is noted at the end of the description of the input document.

The description for each test case is presented as a two-part table. The right part describes what the test does; this description is intended to have enough detail to evaluate diagnostic messages. The left part includes:

  • An entry describing the Sections and/or Rules from the XML 1.0 (Second Edition) Recommendation which this case exercises.
  • The unique Test ID within a given Collection for this test.
  • The Collection from which this test originated. Given the Test ID and the Collection, each test can be uniquely identified.
  • Some tests may have a field identifying the kinds of external Entities a nonvalidating processor must include (parameter, general, or both) to be able to detect any errors in that test case.
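The fields above might be modeled as a simple record type. The Python names below are illustrative, not taken from the suite's actual catalog format:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class TestCase:
    """One test case description, mirroring the left-hand table
    described above.  Field names are hypothetical."""
    collection: str                     # suite the test originated from
    test_id: str                        # unique within its collection
    sections: str                       # sections/rules of XML 1.0 (2e) exercised
    entities: Optional[str] = None      # "parameter", "general", "both", or None
    output_file: Optional[str] = None   # paired canonical output (Section 2.2)

    def key(self):
        # Collection plus Test ID uniquely identifies a test.
        return (self.collection, self.test_id)
```

Because a Test ID is unique only within its Collection, the combined key is what a harness should use to index results.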

3.1 Valid XML Documents

All conforming XML 1.0 Processors are required to accept valid documents, reporting no errors. This section of the test report describes the test cases which fit into this category.

3.2 Invalid XML Documents

All conforming XML 1.0 Validating Processors are required to report recoverable errors in the case of documents which are Invalid. Such errors are violations of some validity constraint (VC).

If a validating processor does not report an error when given one of these test cases, or if the error reported is a fatal error, it is not conformant. If the error reported does not correspond to the problem listed in this test description, that could also be a conformance problem; it might instead be a faulty diagnostic.

All conforming XML 1.0 Nonvalidating Processors should accept these documents, reporting no errors.
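Python's expat parser, a nonvalidating processor, illustrates the point: the document below violates a validity constraint (the root element is declared EMPTY but has content) yet is well formed, so a nonvalidating processor accepts it without error:

```python
import xml.parsers.expat

# Well-formed but invalid: "doc" is declared EMPTY yet contains text.
# A validity constraint is violated, but no well-formedness rule is,
# so a nonvalidating processor must not report an error.
INVALID_DOC = '<!DOCTYPE doc [<!ELEMENT doc EMPTY>]><doc>text</doc>'

parser = xml.parsers.expat.ParserCreate()
parser.Parse(INVALID_DOC, True)   # completes without raising
```

A validating processor given the same document would be required to report a recoverable error instead.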

3.3 Documents that are Not Well Formed

All conforming XML 1.0 Processors are required to report fatal errors in the case of documents which are not Well Formed. Such errors are basically of two types: (a) the document violates the XML grammar; or else (b) it violates a well formedness constraint (WFC). There is a single exception to that requirement: nonvalidating processors which do not read certain types of external entities are not required to detect (and hence report) these errors.

If a processor does not report a fatal error when given one of these test cases, it is not conformant. If the error reported does not correspond to the problem listed in this test description, that could also be a conformance problem; it might instead be a faulty diagnostic.
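For example, using Python's expat (a nonvalidating processor), a fatal error surfaces as an exception, so an accept/reject check for binary tests can be sketched as:

```python
import xml.parsers.expat

def accepts(document):
    """Return True if the parser accepts the document,
    False if it reports a fatal error."""
    parser = xml.parsers.expat.ParserCreate()
    try:
        parser.Parse(document, True)
        return True
    except xml.parsers.expat.ExpatError:
        return False

# Mismatched tags violate the XML grammar, so any conforming
# processor must reject the first document and accept the second.
assert not accepts("<doc><a></doc>")
assert accepts("<doc><a/></doc>")
```

Checking *which* fatal error was reported against the test description is what distinguishes a conformance failure from a merely faulty diagnostic.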

3.4 XML Documents with Optional Errors

Conforming XML 1.0 Processors are permitted to ignore certain errors, or to report them at user option. This section of the test report describes the test cases which fit into this category.

Processor behavior on such test cases does not affect conformance to the XML 1.0 (Second Edition) Recommendation, except as noted.

4. Contributors (Non-normative)

A team of volunteers has participated in the development of this work. Contributions have come from:

  • Murray Altheim, Sun Microsystems
  • Mary Brady, NIST
  • Tim Boland, NIST
  • David Brownell, Sun Microsystems
  • James Clark
  • Karin Donker, IBM
  • Irina Golfman, Inera Incorporated
  • Tony Graham, Mulberry Technologies
  • G. Ken Holman, Crane Softwrights Ltd
  • Alex Milowski, Veo Systems, Inc
  • Makoto Murata, Fuji Xerox
  • Miles O'Reilly, Microstar Software, Ltd
  • Matt Timmermans, Microstar Software, Ltd
  • Richard Rivello, NIST
  • Lynne Rosenthal, NIST
  • Brian Schellar, Chrystal Software
  • Bill Smith, Sun Microsystems
  • Trevor Veary, Software AG
  • Richard Tobin, University of Edinburgh
  • Jonathan Marsh, Microsoft
  • Daniel Veillard, Imaq
  • Paul Grosso, Arbortext

End
