GIFT Format
Latest revision as of 16:22, 18 July 2020
short URL: http://bit.ly/gift-fmt
The GIFT picoformat allows writing multiple-choice, true-false, fill-in-the-blank (also called short answer or missing word), matching, and numerical quiz questions in a simple text format. The GPLed Moodle Course/Learning Management System can import and export questions in the GIFT format. The extensions described below should enable easily authorable self-study, learner-adaptive, low-stakes quiz extensions for Wikiversity and other projects.
Syntax
GIFT quiz questions must be encoded in UTF-8 and are delimited by blank lines. A question may be preceded by a title inside a pair of double colons, and must include, or be followed by, an answer specification in curly braces. Examples are shown below.
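As a concrete illustration of the delimiting rules above, here is a minimal Python sketch (not part of Moodle; the function names are illustrative) that splits GIFT source into question blocks and pulls out the optional `::title::` and the brace-delimited answer specification. It ignores backslash escapes and feedback, so it is a starting point, not a full parser.

```python
import re

def split_gift(text):
    """Split GIFT source into question blocks; questions are
    delimited by one or more blank lines."""
    return [b.strip() for b in re.split(r"\n\s*\n", text) if b.strip()]

def parse_question(block):
    """Extract the optional ::title::, the question text (with the
    answer braces replaced by a blank), and the {answer spec}."""
    title = None
    m = re.match(r"::(.*?)::\s*", block, re.DOTALL)
    if m:
        title = m.group(1).strip()
        block = block[m.end():]
    m = re.search(r"\{(.*)\}", block, re.DOTALL)
    answers = m.group(1).strip() if m else None
    question = re.sub(r"\{.*\}", "_____", block, flags=re.DOTALL).strip()
    return {"title": title, "text": question, "answers": answers}

gift = """::Q1:: 1+1=2 {T}

What's between orange and green? {=yellow ~red ~blue}"""
for q in split_gift(gift):
    print(parse_question(q))
```

Note that this treats the first `{...}` span greedily and would mis-handle escaped braces (`\{`); a real importer must honor the escape rules in the table below.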
| Symbols | Use |
|---|---|
| // text | Comment until end of line (optional) |
| ::title:: | Question title (optional) |
| text | Question text (becomes title if no title specified) |
| { | Start answer(s) -- without any answers, text is a description of following questions |
| {T} or {F} | True or False answer; also {TRUE} and {FALSE} |
| { ... =right ... } | Correct answer for multiple choice, (multiple answer?) or fill-in-the-blank |
| { ... ~wrong ... } | Incorrect answer for multiple choice or multiple answer |
| { ... =item -> match ... } | Answer for matching questions |
| #feedback text | Answer feedback for preceding multiple, fill-in-the-blank, or numeric answers |
| {# | Start numeric answer(s) |
| answer:tolerance | Numeric answer accepted within ± tolerance range |
| low..high | Lower and upper range values of accepted numeric answer |
| =%n%answer:tolerance | n percent credit for one of multiple numeric ranges within tolerance from answer |
| } | End answer(s); additional text may follow for fill-in-the-blank |
| \character | Backslash escapes the special meaning of ~, =, #, {, }, and : |
| \n | Places a newline in question text -- blank lines delimit questions |
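The three numeric forms in the table (`answer:tolerance`, `low..high`, and `=%n%answer:tolerance`) can be graded with a short sketch like the following. This is an illustrative Python reading of the table, not Moodle's grader; it assumes whitespace-separated answer specs and does not handle `#feedback` text.

```python
def grade_numeric(spec, response):
    """Grade a numeric response against a GIFT numeric answer spec
    (the text between {# and }).  Returns credit in [0, 1].
    Handles: answer:tolerance, low..high, and =%n%answer:tolerance."""
    best = 0.0
    for part in spec.split():
        credit = 1.0
        if part.startswith("="):
            part = part[1:]
        if part.startswith("%"):            # =%n%answer:tolerance
            pct, part = part[1:].split("%", 1)
            credit = float(pct) / 100.0
        if ".." in part:                    # low..high range
            low, high = map(float, part.split(".."))
        else:                               # answer[:tolerance]
            answer, _, tol = part.partition(":")
            answer, tol = float(answer), float(tol or 0)
            low, high = answer - tol, answer + tol
        if low <= response <= high:
            best = max(best, credit)
    return best

print(grade_numeric("3:2", 4))                     # within 3±2
print(grade_numeric("1..5", 0))                    # outside the range
print(grade_numeric("=1822:0 =%50%1822:2", 1821))  # partial credit
```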
Examples
    // true-false
    ::Q1:: 1+1=2 {T} // not sure if comments are okay here

    // multiple choice with specific feedback
    ::Q2:: What's between orange and green in the spectrum?
    {=yellow # correct! ~red # wrong, it's yellow ~blue # wrong, it's yellow}

    // fill-in-the-blank
    ::Q3:: Two plus {=two =2} equals four.

    // matching
    ::Q4:: Which animal eats which food? { =cat -> cat food =dog -> dog food }

    // math range question -- note: {#1..5} is the same range
    ::Q5:: What is a number from 1 to 5? {#3:2}

    // multiple numeric answers with partial credit and feedback
    ::Q7:: When was Ulysses S. Grant born? {#
    =1822:0 # Correct! You get full credit.
    =%50%1822:2 # He was born in 1822. You get half credit for being close.
    }

    // essay
    ::Q8:: How are you? {}

    // alternate layout
    ::Title
    :: Question {
    =Correct answer 1
    =Correct answer 2
    ~Wrong answer 1 #Response to wrong answer 1
    ~Wrong answer 2 #Response to wrong answer 2
    }
- Note: the table and examples above were adapted from and then migrated back to the GPL-licensed Moodle site, not copied from there.
Categories
Question categories may be specified by preceding them with
$CATEGORY: path/name
with a blank line before and after. That sets the following questions' category name or pathname.
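The category mechanism amounts to a simple state change while walking blank-line-delimited blocks: a `$CATEGORY:` block sets the category carried by every question after it. An illustrative Python sketch (not Moodle's importer; names are hypothetical):

```python
import re

def assign_categories(text):
    """Walk blank-line-delimited GIFT blocks; a $CATEGORY: block
    sets the category for all questions that follow it."""
    category = None
    out = []
    for block in re.split(r"\n\s*\n", text):
        block = block.strip()
        if not block:
            continue
        m = re.match(r"\$CATEGORY:\s*(\S+)", block)
        if m:
            category = m.group(1)      # state change, emits nothing
        else:
            out.append((category, block))
    return out

gift = """$CATEGORY: math/arithmetic

::Q1:: 1+1=2 {T}

$CATEGORY: colors

::Q3:: Milk is {=white}."""
print(assign_categories(gift))
```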
Case insensitive
Alphabetic case-sensitive comparison is off by default, but may be enabled for short answer (fill-in-the-blank) questions that need it (rarely necessary, except, e.g., for some chemical formulae such as protein names) with:
$question->usecase = 1;
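The comparison behavior can be modeled as a case-folding match that is bypassed only when the flag is set. This Python sketch is illustrative, not Moodle's implementation; `usecase` mirrors the flag above.

```python
def matches(expected, response, usecase=False):
    """Compare a short-answer response to an expected answer.
    Case-insensitive by default, as in GIFT; pass usecase=True
    for the rare questions that need exact case."""
    if usecase:
        return expected == response
    return expected.casefold() == response.casefold()

print(matches("NaCl", "nacl"))                  # accepted by default
print(matches("NaCl", "nacl", usecase=True))    # rejected when case matters
```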
Extensions
Please help sponsor these extensions at http://talknicer.com
Learner adaptation
What remains to be done, to extend the format for computerized adaptive testing (per, for example, del Soldato, T. & du Boulay, B. (1995) "Implementation of Motivational Tactics in Tutoring Systems," Journal of Artificial Intelligence in Education, 6(4): 337-78), is to add optional prerequisite and relative-difficulty links to other questions (titles can be used) and optional question help text at two levels of specificity:
| Symbols | Use |
|---|---|
| // helps-answer: title[, title...] | Set of questions which help answer this question |
| // helped-by: title[,...] | Set of questions which answering this question helps answer |
| // easier-than: title[%n%][,...] <br /> // harder-than: title[%n%][,...] | Sets of relations to other questions by relative difficulty |
| // general-help: text | Optional general help |
| // specific-help: text | Optional specific help |
| \// | Two forward slashes, not a comment |
| // review-state: text | Summary of accuracy review status (see below) |
Can some of the directed graph of which questions assist in answering other questions be derived from categorization, or must it be stored completely explicitly?
Accuracy review
It would also help to be able to specify the state of the question in a review system (see http://strategy.wikimedia.org/wiki/Proposal:Develop_systems_for_accuracy_review):
- incomplete
  - (ungrammatical, ambiguous, non-sequitur, implies false assumption, circular, dependent on future circumstance or decision, etc.)
- open
  - hypothetical (also open -- but less so?)
- answered ("proposed"?)
- reviewed
- complete (passed review)
- asked
- scored
- challenged
- assessed
- challenged
- scored
- asked
- rejected (failed review or assessment)
- complete (passed review)
- reviewed
Examples
Comments containing special hyphenated keywords followed by colons can be used preceding or following the questions, as long as blank lines don't intervene. For example:
    // helps-answer: latte-color // helps if you know this to answer that
    ::milk-color // title; not sure if comment ok here
    // easier-than: sky-color, // this question is easier than those two
    // shroedinger-eqn%1% // numeric quantity for relative difficulty
    :: What color is milk? { // question
    =white // answer
    }
    // general-help: Think about full milk bottles. // general help
    // specific-help: It's the same color as chalk. // specific help
    // review-state: proposed // summary of review status
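Under this convention, the keyword comments can be collected with a sketch like the following (illustrative Python, not a defined implementation; the keyword list reflects the proposals above, and multi-line continuations, such as an easier-than list spanning two lines, are not handled).

```python
import re

# Proposed metadata keywords from the extension tables above.
KEYWORDS = ("helps-answer", "helped-by", "easier-than", "harder-than",
            "general-help", "specific-help", "review-state", "include")

def parse_metadata(block):
    """Collect the proposed '// keyword: value' comments from a
    question block; ordinary comments are ignored."""
    meta = {}
    for line in block.splitlines():
        m = re.match(r"//\s*([a-z-]+):\s*(.*)", line.strip())
        if m and m.group(1) in KEYWORDS:
            # Drop any trailing ordinary comment after the value.
            meta[m.group(1)] = m.group(2).split("//")[0].strip()
    return meta

block = """// helps-answer: latte-color // helps if you know this to answer that
:: What color is milk? { // question
=white // answer
}
// general-help: Think about full milk bottles. // general help
// review-state: proposed // summary of review status"""
print(parse_metadata(block))
```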
That would apparently cover the stated extension requirements. The reflexive links (helps/helped, easier/harder) can be implicit where they aren't specified. Not sure whether it's a good idea to specify summary statistics from question fields as they might exist in a database. Maybe that should be specified but discouraged in practice, because some of the metadata grows every time a question is answered.
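Making the unstated inverse links implicit amounts to mirroring each stated relation onto its target. A minimal Python sketch; the dict layout is an assumption for illustration, not a defined storage format.

```python
def imply_inverses(links):
    """Given {question: {relation: [titles]}}, add the unstated
    inverse relations (helps-answer <-> helped-by)."""
    inverse = {"helps-answer": "helped-by", "helped-by": "helps-answer"}
    for q, rels in list(links.items()):
        for rel, targets in list(rels.items()):
            if rel not in inverse:
                continue
            for t in targets:
                back = links.setdefault(t, {}).setdefault(inverse[rel], [])
                if q not in back:          # avoid duplicating stated links
                    back.append(q)
    return links

links = {"milk-color": {"helps-answer": ["latte-color"]}}
print(imply_inverses(links))
```

The same mirroring would apply to easier-than/harder-than, with the `%n%` difficulty quantity carried across unchanged.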
Include
Some way to include GIFT files at other locations would be nice. Perhaps: // include: (filename|url|wikipage)
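One possible reading of the proposed directive, sketched in Python with a caller-supplied fetch function standing in for filename, URL, or wiki-page lookup (all names here are illustrative, and the depth guard is an added assumption against include cycles):

```python
import re

def resolve_includes(text, fetch, depth=0):
    """Expand '// include: <name>' lines by splicing in the fetched
    GIFT source; fetch maps a name to its text."""
    if depth > 10:                       # guard against include cycles
        raise RecursionError("include nesting too deep")
    out = []
    for line in text.splitlines():
        m = re.match(r"//\s*include:\s*(\S+)", line.strip())
        if m:
            out.append(resolve_includes(fetch(m.group(1)), fetch, depth + 1))
        else:
            out.append(line)
    return "\n".join(out)

files = {"shared.gift": "::Q1:: 1+1=2 {T}"}
print(resolve_includes("// include: shared.gift\n::Q2:: Sky? {=blue}",
                       files.__getitem__))
```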
Notes for further work
The score-based computer adaptive testing process already implemented in Moodle may be much simpler and easier to re-implement than del Soldato and du Boulay's 1995 rules -- which require assessment of confidence (self-reported), effort (logging), and independence (frequency of help requests) instead of just scoring questions -- and than this schema based on those rules and the Journal of Artificial Intelligence in Education article in which they appeared. But the primary goals listed here are likely satisfied with the detail shown above.
The "answer judging" of the 1970s-era PLATO TUTOR language is appropriate for fill-in-the-blank pattern matching. (See also Tenczar, P.J. and Golden, W.M. (1972) "Spelling, Word, and Concept Recognition", CERL Report X-35 (Urbana, Illinois: Computer-based Education Research Lab, U of IL) Plato Publications.)
There is more we may want to accomplish, noted in the "Spacing effect" Wikipedia article and in "Bloom's cognitive taxonomy," which suggests sub-categories.
See also
- Mediawiki Quiz extension: http://www.mediawiki.org/wiki/Extension:Quiz
  - Mediawiki Quiz extension talk: GIFT format and Quiz tables? (http://www.mediawiki.org/wiki/Extension_talk:Quiz#GIFT_format_and_Quiz_tables.3F)
  - Mediawiki bug (enhancement request) 22475: https://bugzilla.wikimedia.org/show_bug.cgi?id=22475
- Wikimedia assessment content proposal: http://strategy.wikimedia.org/wiki/Proposal:Assessment_content
  - http://en.wikiversity.org/wiki/Help:Quiz
  - Wikiversity compared to Moodle: http://en.wikiversity.org/wiki/Help:Quiz/Wikiversity_compared_to_Moodle
  - http://en.wikiversity.org/wiki/Category:Quizzes
- 5-page reference (PDF): http://buypct.com/gift_reference.pdf
- picoformats