AdaptiveMaxDiff is a LimeSurvey 6 plugin that turns a Long free text (T) question into an adaptive MaxDiff block.
- It runs a per-respondent adaptive design in PHP.
- It shows a sequence of Best/Worst choice sets.
- It stores the full interaction in a single JSON answer field + a plugin state table.
This is an early version with a deliberately simple adaptive algorithm and minimal server-side validation.
MaxDiff (Best–Worst Scaling) is a survey method where respondents repeatedly see small sets of items (issues, policies, brands, etc.) and, in each set, choose:
- which item is Best / Most important
- which item is Worst / Least important
From these repeated choices, you can estimate relative utilities or preference scores for all items.
Key properties:
- More discriminating than simple ratings (Likert scales).
- Less cognitively heavy than full rankings for long lists.
- Often analyzed with multinomial logit, ranked logit or HB (hierarchical Bayes) models.
This plugin focuses on the survey-side presentation and adaptive design, not the final HB estimation.
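For intuition only, the simplest way to turn Best/Worst choices into scores (much cruder than the plugin's adaptive heuristic or an HB model) is a best-minus-worst count score per item:

```php
// Toy best-minus-worst count score, normalized by appearances.
// Not part of the plugin; shown only to illustrate the idea.
function countScore(int $best, int $worst, int $shown): float
{
    return $shown > 0 ? ($best - $worst) / $shown : 0.0;
}
// E.g. an item picked Best 5 times and Worst once over 8 appearances
// scores (5 - 1) / 8 = 0.5.
```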
- Adds an “Adaptive MaxDiff” behaviour on top of a standard `T` question.
- For each respondent, the plugin:
  - Loads the item list (from a label set or inline JSON).
  - Maintains per-item utilities and counts in a plugin-owned table.
  - Shows repeated MaxDiff tasks (sets of K items) via a custom JS UI.
  - Adapts which items appear next based on the respondent’s previous choices.
- At the end, the `T` question’s single answer field contains a JSON snapshot of the full per-respondent state (history, utilities, etc.).
- Works with LimeSurvey 6.
- Behaves like a question behaviour for type `T` (Long free text).
- Multilingual support:
  - Best/Worst labels are i18n attributes (`md_best_label`, `md_worst_label`).
  - Button texts use LimeSurvey’s core translations.
  - Inline JSON items can be language-specific (`text_et`, `text_en`, …).
  - Label-set items respect LS label set languages.
- Supports two item sources:
  - Label set: codes = item IDs, titles = item texts.
  - Inline JSON: language-specific texts per item.
- Fully per-respondent adaptive:
  - Maintains utilities and exposure counts in `adaptive_maxdiff_state`.
  - Uses a simple heuristic to choose the next set of items.
- Stores the full interaction in a single JSON answer per respondent:
  - Easy to export via LS’s normal mechanisms.
  - You parse the JSON in your analysis pipeline.
Place the plugin under `upload/plugins/AdaptiveMaxDiff`:

```
upload/
  plugins/
    AdaptiveMaxDiff/
      AdaptiveMaxDiff.php
      config.xml
      assets/
        adaptive_maxdiff.js
        adaptive_maxdiff.css
```
- In LimeSurvey admin, go to Configuration → Plugins.
- Find AdaptiveMaxDiff.
- Click Activate.
- Optionally configure global options (none required for v0.1).
You can define items via either a label set or inline JSON.

Label set:
- Go to Configuration → Label sets.
- Create a label set with:
  - `code` = stable item ID (e.g. `ISS01`, `ISS02`, …).
  - `title` = item text (per language).
- Note the label set ID; you will reference it in the question attributes.
Inline JSON:
Use the `md_items_json` attribute with a JSON array. For multilingual surveys, the plugin expects per-language keys:

```
[
  { "id": "issue_01", "text_et": "Majanduskasv", "text_en": "Economic growth" },
  { "id": "issue_02", "text_et": "Sotsiaalhoolekanne", "text_en": "Social welfare" }
]
```

- `id`: stable item ID.
- `text_<lang>`: label for a given language (e.g. `text_et`, `text_en`).

The plugin selects the key based on the current survey language (`s_lang` in the session).
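A minimal sketch of that lookup (an assumed helper for illustration, not necessarily the plugin's actual code):

```php
// Pick the per-language text for an inline-JSON item,
// falling back to the first available text_* key, then the ID.
function resolveItemText(array $item, string $lang): string
{
    if (isset($item['text_' . $lang])) {
        return $item['text_' . $lang];
    }
    foreach ($item as $key => $value) {
        if (strncmp($key, 'text_', 5) === 0) {
            return $value; // first available language as fallback
        }
    }
    return (string) $item['id'];
}
```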
- Add a new question of type `T` (Long free text).
- In Advanced settings, configure the Adaptive MaxDiff attributes:
Core settings:

| Attribute | Type | Description |
|---|---|---|
| `md_enable` | switch | Enable Adaptive MaxDiff on this question (Yes/No). |
| `md_block_id` | text | Optional ID for this block. Default: `qid_<qid>`. |
| `md_items_source` | select | `label_set` or `inline_json`. |
| `md_items_labelset_id` | integer | Label set ID (if using label sets). |
| `md_items_json` | textarea | JSON items (if using inline JSON). |
Design parameters:

| Attribute | Type | Default | Description |
|---|---|---|---|
| `md_items_per_task` | integer | 4 | Number of items per MaxDiff set. |
| `md_min_tasks` | integer | 6 | Minimum tasks required to “finish” this block. |
| `md_max_tasks` | integer | 10 | Maximum tasks per respondent. |
| `md_max_appearances` | integer | 0 | Max appearances per item (0 = no cap). |
| `md_reuse_cooldown` | integer | 1 | Tasks to wait before reusing an item (0 = no cooldown). |
Adaptive weights:

| Attribute | Type | Default | Description |
|---|---|---|---|
| `md_explore_weight` | float | 1.0 | Weight for preferring under-shown items. |
| `md_uncertainty_weight` | float | 1.0 | Weight for preferring low-n items. |
| `md_centrality_weight` | float | 0.5 | Weight for preferring mid-utility items. |
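To make the three weights concrete, here is one plausible reading of a linear score rule; every expression below is an assumption for illustration, not the plugin's actual formula:

```php
// Illustrative linear selection score -- NOT the plugin's exact heuristic.
// $eta/$n come from utilities[id], $shown from shown_counts[id];
// $avgShown and $medianEta are per-respondent aggregates (assumed).
function selectionScore(
    float $eta, int $n, int $shown,
    float $avgShown, float $medianEta,
    float $exploreW, float $uncertaintyW, float $centralityW
): float {
    $explore     = $avgShown - $shown;        // under-shown items score higher
    $uncertainty = 1.0 / (1 + $n);            // low-n items score higher
    $centrality  = -abs($eta - $medianEta);   // mid-utility items score higher

    return $exploreW * $explore
         + $uncertaintyW * $uncertainty
         + $centralityW * $centrality;
}
// Items blocked by md_reuse_cooldown or capped by md_max_appearances would be
// filtered out before scoring; the md_items_per_task highest-scoring items
// would then form the next task.
```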
Labels:

| Attribute | Type | Default | Description |
|---|---|---|---|
| `md_best_label` | text | “Most important” | Best label (i18n). |
| `md_worst_label` | text | “Least important” | Worst label (i18n). |
| `md_allow_restart` | switch | Yes | Allow restart if state is missing. |
At runtime, the plugin replaces the normal `T` input with the MaxDiff task UI.
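A minimal sketch of how such a swap can be hooked inside the plugin class, assuming LimeSurvey's standard `beforeQuestionRender` event (`renderMaxDiffUi()` is a hypothetical helper, not necessarily the plugin's actual method):

```php
// Sketch only: swap the T textarea for the MaxDiff UI.
// renderMaxDiffUi() is hypothetical; attribute checks are elided.
public function beforeQuestionRender()
{
    $event = $this->getEvent();
    if ($event->get('type') !== 'T') {
        return; // only long free text questions
    }
    // ... verify the md_enable question attribute here ...
    $event->set('answers', $this->renderMaxDiffUi((int) $event->get('qid')));
}
```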
AdaptiveMaxDiff uses two layers of storage:

- Plugin state table (authoritative)
  - Table: `{{adaptive_maxdiff_state}}` (prefixed per the LS database config).
  - Columns: `survey_id`, `qid`, `block_id`, `respondent_key`, `state_json`, `updated_at`.
  - Holds the full state for that respondent’s block.
- Question answer column
  - The underlying `T` question maps to one long-text column in the survey response table.
  - Contains the same JSON structure as `state_json` from the last AJAX update.
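For reference, a minimal sketch of the table shape in Yii 1 (LimeSurvey's framework); the column names come from the list above, but the types here are assumptions:

```php
// Illustrative only -- column types are guesses; names match the docs above.
Yii::app()->db->createCommand()->createTable('{{adaptive_maxdiff_state}}', [
    'survey_id'      => 'integer NOT NULL',
    'qid'            => 'integer NOT NULL',
    'block_id'       => 'string NOT NULL',
    'respondent_key' => 'string NOT NULL',
    'state_json'     => 'text',
    'updated_at'     => 'datetime',
]);
```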
Example (simplified):
```
{
  "version": 1,
  "block_id": "qid_12",
  "items": [
    { "id": "issue_01", "text": "Economic growth" },
    { "id": "issue_02", "text": "Social welfare" },
    ...
  ],
  "params": {
    "items_per_task": 4,
    "min_tasks": 6,
    "max_tasks": 10,
    "max_appearances": 0,
    "reuse_cooldown": 1,
    "explore_weight": 1.0,
    "uncertainty_weight": 1.0,
    "centrality_weight": 0.5
  },
  "tasks_answered": 8,
  "finished": true,
  "utilities": {
    "issue_01": { "eta": 1.2, "n": 5 },
    "issue_02": { "eta": -0.3, "n": 4 }
  },
  "shown_counts": {
    "issue_01": 5,
    "issue_02": 4
  },
  "history": [
    {
      "task_id": 1,
      "items": [
        { "id": "issue_01", "text": "Economic growth" },
        { "id": "issue_02", "text": "Social welfare" },
        { "id": "issue_03", "text": "Healthcare" },
        { "id": "issue_04", "text": "Education" }
      ],
      "best": "issue_01",
      "worst": "issue_04"
    }
    // ...
  ],
  "pending_task": null
}
```

- `utilities[id].eta` / `utilities[id].n` are the heuristic per-respondent utility and observation count.
- `shown_counts[id]` is how many times each item appeared.
- `history` contains the full task-by-task log:
  - `items` = list of item rows shown in that set.
  - `best` / `worst` = IDs chosen (or `null` if invalid).
You will typically:

- Export survey results as usual.
- Parse the JSON from the relevant `T` question column.
- Flatten it to long format (respondent × task × item) for MaxDiff modeling, as sketched below.
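For example, a minimal PHP sketch of the flattening step, using only the documented JSON fields (`$answerJson` stands for the exported `T` column value):

```php
// Flatten one respondent's JSON answer into respondent x task x item rows.
$state = json_decode($answerJson, true);
$rows  = [];
foreach ($state['history'] as $task) {
    foreach ($task['items'] as $item) {
        $rows[] = [
            'task_id' => $task['task_id'],
            'item_id' => $item['id'],
            'best'    => (int) ($item['id'] === $task['best']),
            'worst'   => (int) ($item['id'] === $task['worst']),
        ];
    }
}
// $rows can now be written to CSV or fed into your MaxDiff estimation tool.
```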
- Security:
  - Uses `newUnsecureRequest` without CSRF/token checks.
  - Assumes respondents are not actively attacking the API.
- Validation:
  - `md_min_tasks` and consistency checks are only enforced client-side.
  - `beforeResponseSave()` does not yet validate/clean the JSON.
- Navigation:
  - Forward-only flow assumed.
- Cleanup:
  - The plugin never automatically cleans old rows from `adaptive_maxdiff_state`; see the manual sketch below.
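If stale state becomes a problem, old rows can be pruned by hand; a hypothetical snippet (not shipped with the plugin), assuming only the documented table and column names:

```php
// Manual cleanup sketch: delete state rows untouched for 90+ days.
Yii::app()->db->createCommand(
    "DELETE FROM {{adaptive_maxdiff_state}} WHERE updated_at < :cutoff"
)->execute([':cutoff' => date('Y-m-d H:i:s', strtotime('-90 days'))]);
```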
- Add an initial static design (e.g. BIBD) for the first tasks to ensure coverage.
- Experiment with information-theoretic or bandit-style selection instead of the current linear score rule.
- Provide a script/admin tool to simulate responses.
