Why did you choose a long-text question and JSON? Why not an array question type?
Because the adaptive MaxDiff question doesn’t have a fixed, simple answer schema.
Each respondent sees a different sequence of sets, and for each task I need to store:
- which items were shown, and
- which item was Best and which was Worst.
In LimeSurvey the response table schema is fixed at activation (as far as I know). An array type would require me to pre-declare a huge fixed grid of subquestions (task1_item1_best, task1_item1_worst, … up to some arbitrary maximum), most of which would stay empty for most respondents. A single long-text field with JSON lets me store the full per-respondent history (all tasks, items, and choices) without fighting the schema.
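To make that concrete, here is a minimal sketch of what such a per-respondent blob could look like. The field names (`tasks`, `shown`, `best`, `worst`) are my own illustration, not the plugin's actual schema:

```python
import json

# Hypothetical per-respondent MaxDiff history: one entry per adaptive task.
# Key names are illustrative only, not the plugin's real schema.
history = {
    "version": 1,
    "tasks": [
        {"shown": ["item_3", "item_7", "item_1", "item_9"],
         "best": "item_7", "worst": "item_1"},
        {"shown": ["item_2", "item_7", "item_5", "item_4"],
         "best": "item_2", "worst": "item_4"},
    ],
}

# The whole variable-length history fits into one long-text answer field,
# no matter how many tasks this particular respondent ended up seeing.
blob = json.dumps(history)
restored = json.loads(blob)
assert restored["tasks"][0]["best"] == "item_7"
```

The point is that the number of tasks can differ per respondent without any schema change: the list just grows.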
Conceptually, yes, you could abuse dual-scale arrays or multiple short texts, but for an adaptive design with a variable number of tasks a single JSON blob is the least fragile option. If there’s a core question type that gives a real advantage over T (long free text) here, I’m open to changing the base type.
Why do you need an extra table (maybe linked to question 1, or to the analysis system)?
The plugin needs per-respondent algorithm state, not just the final answers. That includes utilities, exposure counts, pending task, history, etc.
I could in theory reconstruct everything from the JSON answer field on every request, or try to keep it in the PHP session, but both have issues: sessions expire or get lost, and repeatedly decoding large JSON blobs on every request is wasteful.
A small plugin table keyed by (survey_id, qid, block_id, respondent_key) is the most robust way to keep the adaptive state: it survives session changes, is easy to inspect for diagnostics, and will also be needed if I plug in more complex algorithms (bandit style, global priors, etc.).
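As a sketch of what I mean (using SQLite here just for illustration; the column names beyond the key are my own guess, not the plugin's actual DDL):

```python
import sqlite3
import json

# Sketch of the per-respondent state table. The composite key matches the
# (survey_id, qid, block_id, respondent_key) tuple described above; the
# state_json column name is hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE maxdiff_state (
        survey_id      INTEGER NOT NULL,
        qid            INTEGER NOT NULL,
        block_id       TEXT    NOT NULL,
        respondent_key TEXT    NOT NULL,
        state_json     TEXT    NOT NULL,  -- utilities, exposure counts, pending task, history
        PRIMARY KEY (survey_id, qid, block_id, respondent_key)
    )
""")

state = {"utilities": {"item_1": 0.0}, "exposure": {"item_1": 2}, "pending": None}

# Upsert: overwrite this respondent's row on every adaptive step.
conn.execute(
    "INSERT OR REPLACE INTO maxdiff_state VALUES (?, ?, ?, ?, ?)",
    (123456, 42, "blockA", "tok_abc", json.dumps(state)),
)

row = conn.execute(
    "SELECT state_json FROM maxdiff_state WHERE respondent_key = ?",
    ("tok_abc",),
).fetchone()
print(json.loads(row[0])["exposure"]["item_1"])  # prints 2
```

Because the state lives in its own keyed row, it survives session loss, and you can inspect it directly for diagnostics without parsing response records.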
Can we disable saving to and using the extra table, since the data are already in the response?
Not in this version. In principle you could run a “no extra table” mode where the client sends the full JSON state back to the server on every step and the server just updates and returns it without persisting anything.
I haven’t implemented that because you then either:
- fully trust the client’s JSON (easy to spoof), or
- re-validate and re-compute a lot on every call anyway.
The dedicated table keeps the logic simple and verifiable. That said, a "reduced mode" that only uses the answer JSON and no plugin table is technically possible, just with weaker guarantees and more trust in the browser.
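To illustrate what such a reduced mode could look like, here is a minimal stateless sketch. The HMAC signing is my own suggestion for mitigating the spoofing concern, not something the plugin currently does, and all names (`step`, `SECRET`, `tasks_done`) are hypothetical:

```python
import json
import hmac
import hashlib

SECRET = b"server-side-secret"  # hypothetical server-only signing key

def sign(state_json: str) -> str:
    # Sign the state so a stateless server can detect client-side tampering
    # without persisting anything itself.
    return hmac.new(SECRET, state_json.encode(), hashlib.sha256).hexdigest()

def step(payload: dict) -> dict:
    # "Reduced mode" sketch: the client sends the full state back on every
    # step; the server verifies the signature instead of trusting the browser.
    state_json, mac = payload["state"], payload["mac"]
    if not hmac.compare_digest(sign(state_json), mac):
        raise ValueError("state was modified on the client")
    state = json.loads(state_json)
    state["tasks_done"] = state.get("tasks_done", 0) + 1  # placeholder update
    new_json = json.dumps(state)
    return {"state": new_json, "mac": sign(new_json)}

initial = json.dumps({"tasks_done": 0})
out = step({"state": initial, "mac": sign(initial)})
print(json.loads(out["state"])["tasks_done"])  # prints 1
```

Even with signing, this only guards integrity, not replay (a client could resend an old signed state), which is part of why the dedicated table remains the simpler guarantee.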
About the source: it would be great to be able to use a single-choice question inside the survey as the item source (I see some issues with label sets).
Yeah, label sets are something I haven't actually tested much yet. They should be a good choice, though, since they're the standard mechanism for globally stored answer options in LS. In my current first experiment I'm running with inline JSON, not a label set. What problems could label sets introduce?
I’m running the first larger test (1k+ respondents) with this setup right now. After that I’ll have real data to check both the question type choice and the extra table design, and I can share results if people are interested.