Ideally, we want to flag moments when clicks slowed down – where a participant took longer than they usually do between clicks. To spot a slowdown, we look for tasks where that person took a long time between clicks – and by “long”, we mean longer than that same person took for other tasks.

Suppose a “fast” participant takes 2 seconds per click as they move down the tree. But then they encounter a tough choice, and take 12 seconds to click. We would flag that task (even better, that exact spot in the tree) as a slowdown for that participant, because 12 seconds is substantially longer than their usual 2 seconds per click. A “slow” participant, by contrast, might take 5 seconds per click as they move down the tree – what matters is that each person is judged against their own pace. If a participant falls below their usual “click pace” during a task, that’s an indication that the participant took longer to understand their choices and make a decision.

This is a better measure of speed than the “task time” described above, because the speed is measured relative to a single participant:

- It’s not affected by the fact that some participants naturally click through the tasks faster than others.
- It’s not affected by how many clicks a task requires to find the right answer.

For example, for each participant, we could decide that “slowing down” means at least one click time more than a standard deviation above that person’s average click time across the entire test. The task’s speed score can then be calculated as the percentage of participants who didn’t slow down significantly during that task.
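A minimal sketch of this scoring scheme, assuming click times have already been collected per participant and per task (the function and data-structure names here are mine, not from any particular tree-testing tool):

```python
import statistics

def task_speed_scores(click_times):
    """click_times: {participant: {task: [seconds between clicks, ...]}}
    Returns {task: percentage of participants with no significant slowdown}."""
    slowdowns = {}  # task -> [True if that participant slowed down]
    for person, tasks in click_times.items():
        # Pool every click time this person produced across the entire test,
        # so "slow" is defined relative to their own overall pace.
        all_clicks = [t for clicks in tasks.values() for t in clicks]
        mean = statistics.mean(all_clicks)
        # stdev needs at least two data points
        sd = statistics.stdev(all_clicks) if len(all_clicks) > 1 else 0.0
        threshold = mean + sd  # one standard deviation above their average
        for task, clicks in tasks.items():
            slowed = any(t > threshold for t in clicks)
            slowdowns.setdefault(task, []).append(slowed)
    # Speed score: percentage of participants who never crossed the threshold
    return {task: 100.0 * flags.count(False) / len(flags)
            for task, flags in slowdowns.items()}

scores = task_speed_scores({
    "fast": {"A": [2, 2, 2], "B": [2, 12, 2]},
    "slow": {"A": [5, 5, 5], "B": [5, 6, 5]},
})
# Task A: neither participant slowed down -> 100.0
# Task B: both crossed their personal threshold -> 0.0
```

Note that the fast participant’s 12-second click and the slow participant’s 6-second click are both flagged, even though 6 seconds is faster in absolute terms – each is an outlier only relative to that person’s own average.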