test-suite-brainstorming

This article is a stub. You can help the microformats.org wiki by expanding it.

Microformats can be used to express language-agnostic test suites on the web.

Problems Being Solved

  • Having to write a new test suite for each different implementation of the same code
    • “It’d be ideal to have some test framework which lets us test both our ruby and PHP implementations” — Aaron Parecki in IWC IRC
  • Having to keep documentation and code in sync with each other
    • “I hate reading documentation which has code errors and/or sample output that is incorrect. I [snip] wrote a quick readme parser that can lint sample source code or execute and inject the actual result” — Carbon documentation

Previous Work

The microformats parser test suite is a real-world project which uses the test-fixture poshformat and its draft microformats2 update both to generate a test suite and to test various microformats2 parsers written in different languages.

Open Standards Use Cases

Open standards like the URI specification need to be implemented in almost every language, but typically there is either no canonical test suite or it is difficult to find one.

  • is there a canonical URI test suite, e.g. for relative URI resolution? --bw 18:26, 4 October 2013 (UTC)
    • There are some examples for relative URI resolution in RFC 3986 section 5.4 (http://tools.ietf.org/html/rfc3986#section-5.4)

Typically each function described by these standards doesn’t have complex testing requirements (put a string in, make sure the string which comes out matches the expected value).
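
For instance, a relative URI resolution test is just a (base, relative URI, expected result) triple. Here is a minimal sketch in Python, using the standard library’s urljoin purely as a stand-in for an implementation under test, with a few of the worked examples from RFC 3986 section 5.4.1:

  from urllib.parse import urljoin  # stand-in for the implementation under test

  # Each case is (base, relative, expected); values taken from RFC 3986 section 5.4.1.
  cases = [
      ("http://a/b/c/d;p?q", "g",    "http://a/b/c/g"),
      ("http://a/b/c/d;p?q", "../g", "http://a/b/g"),
      ("http://a/b/c/d;p?q", "?y",   "http://a/b/c/d;p?y"),
  ]

  for base, rel, expected in cases:
      actual = urljoin(base, rel)
      status = "PASS" if actual == expected else "FAIL"
      print(f"{status}: resolve({rel!r}, {base!r}) -> {actual!r}, expected {expected!r}")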

If a canonical, language-agnostic test suite could be published using microformats and HTML, all implementations could easily be compared.
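
No test-suite microformat has been specified yet, so the markup below is purely hypothetical; the test-case, test-base, test-uri and test-expected class names are invented for illustration. A published case might be as small as this fragment (held in a Python string so it can be handed straight to a parser):

  # Hypothetical markup for a single relative-URI-resolution case; the class
  # names are invented and would need to be agreed as an actual microformat.
  SPEC_FRAGMENT = """
  <div class="test-case">
    <span class="test-base">http://a/b/c/d;p?q</span>
    <span class="test-uri">../g</span>
    <span class="test-expected">http://a/b/g</span>
  </div>
  """

  # A parser aware of that vocabulary would reduce each case to plain data that
  # any language's test framework could consume, e.g.:
  #   {"base": "http://a/b/c/d;p?q", "uri": "../g", "expected": "http://a/b/g"}

Because the cases are ordinary HTML, the same page remains readable as documentation, which also speaks to the documentation/code sync problem listed above.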

Example End User Flow

Say I’m writing a bit of code to resolve relative URLs, in a world where the spec is marked up as a test suite (a rough code sketch of the whole flow follows the list below).

  1. I write a draft implementation of my function resolve(uri, base) based on the spec.
  2. I create a test case testResolve(uri, base, expected) which calls my function with uri and base and asserts that the output matches expected.
  3. I add the spec URL as a data provider.
  4. I run the tests:
    1. My test framework fetches the spec and parses it for a test suite.
    2. From the test suite markup it derives a list of tests, each consisting of a list of input values (in this case URIs and their bases) and an output value (in this case a URI).
    3. The test framework runs each scenario through my test case and checks whether or not my code passes.
  5. I see which tests fail, update my code, and repeat from step 2.
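
A rough, self-contained Python sketch of this flow, reusing the hypothetical test-case markup from the fragment above; resolve is again backed by the standard library’s urljoin, and the extraction logic only stands in for whatever a real test framework and parser would provide:

  from html.parser import HTMLParser
  from urllib.parse import urljoin

  def resolve(uri, base):
      """Step 1: draft implementation under test (urljoin as a placeholder)."""
      return urljoin(base, uri)

  def testResolve(uri, base, expected):
      """Step 2: call resolve() and assert the output matches the expected value."""
      actual = resolve(uri, base)
      assert actual == expected, f"resolve({uri!r}, {base!r}) -> {actual!r}, expected {expected!r}"

  class TestCaseExtractor(HTMLParser):
      """Steps 4.1-4.2: derive (base, uri, expected) cases from the hypothetical markup."""
      def __init__(self):
          super().__init__()
          self.cases, self._case, self._field = [], None, None

      def handle_starttag(self, tag, attrs):
          cls = dict(attrs).get("class", "")
          if cls == "test-case":
              self._case = {}
          elif self._case is not None and cls.startswith("test-"):
              self._field = cls[len("test-"):]

      def handle_data(self, data):
          if self._field is not None:
              self._case[self._field] = data.strip()
              self._field = None

      def handle_endtag(self, tag):
          if tag == "div" and self._case is not None:
              self.cases.append(self._case)
              self._case = None

  # Step 3 / 4.1: the spec URL is the data provider; a real run would fetch it, e.g.
  #   spec_html = urllib.request.urlopen(spec_url).read().decode()
  # A local copy of the hypothetical fragment stands in here so the sketch runs offline.
  spec_html = """
  <div class="test-case">
    <span class="test-base">http://a/b/c/d;p?q</span>
    <span class="test-uri">../g</span>
    <span class="test-expected">http://a/b/g</span>
  </div>
  """

  extractor = TestCaseExtractor()
  extractor.feed(spec_html)

  # Steps 4.3 and 5: run every derived scenario through the test case and report.
  for case in extractor.cases:
      testResolve(case["uri"], case["base"], case["expected"])
  print(f"{len(extractor.cases)} case(s) passed")

In practice the framework, not the author, would fetch the spec and do the extraction; the point is only that the derived cases are plain data that any language’s test runner can consume.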

See Also