Replies: 7 comments 2 replies
-
Hi,
If I understand your question correctly, this means you had roughly 95% / 5% response proportions, right? This can be problematic and result in spurious estimates, especially of drift rate and non-decision time. Response proportions should ideally be balanced, or contain at least 10-15% of the less common response, as far as I know. Best,
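As a quick way to check this in your own data (a minimal pandas sketch; the data frame and column name are hypothetical, with responses coded 1/-1 as HSSM expects):

```python
import pandas as pd

# Hypothetical trial data in HSSM's long format: one row per trial,
# response coded 1 / -1 for the two bounds.
df = pd.DataFrame({"response": [1] * 95 + [-1] * 5})

# Proportion of each response; the rarer one should ideally be >= 10-15%.
props = df["response"].value_counts(normalize=True)
print(props.min())  # 0.05 here -- likely too imbalanced for stable estimates
```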
-
Yes, that's right. But that shouldn't be a problem when we keep the 2 target letters as the bounds, right? Each letter occurs 50% of the time. 'model_summary2' has the estimates from the model using these bounds, but the estimates don't seem to make sense.
-
When you make the two bounds the target letters, then you need something
like the equivalent of stimcoding in HDDM. Here the overall drift rate
across all trials of each emotion level should be 0, assuming that you have
equal numbers of trials in which the bound should hit each letter. So you would need
to have a stimulus column that tells the model which response should have
been made for that stimulus, and have v modeled as a function of that. E.g., if
you coded stim as simply 1 and -1, then v ~ stim should give a positive
coefficient for stim. This can then interact with the Emotion condition if you
want.
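A minimal sketch of that stimulus coding (assuming a pandas DataFrame; the `target_letter` column and letter names are made up for illustration):

```python
import pandas as pd

# Hypothetical trial data: which letter was the correct response on each trial.
df = pd.DataFrame({"target_letter": ["A", "B", "B", "A"]})

# Code the stimulus as 1 / -1 to match the response bounds, so that
# v ~ stim yields a positive stim coefficient when drift tracks the
# correct letter.
df["stim"] = df["target_letter"].map({"A": 1, "B": -1})
print(df["stim"].tolist())  # [1, -1, -1, 1]
```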
Also, you should not really model Emotion as coming from a hierarchical
distribution, but rather as just another term (e.g. v ~ stim + stim*Emotion).
You can have this modeled hierarchically across subjects, but that is a
separate issue; that would be v ~ 1 + stim + stim*Emotion
+ (1 + stim + stim*Emotion | participant_id).
In principle one could assume that different emotion conditions within a
subject are themselves drawn from an overarching distribution, but if you
only have 3 conditions that is not sufficient to estimate the overarching
sigma (Gelman recommends at least 5 would be needed).
This likely isn't really an "issue" so much as a "discussion", so we should
probably switch it over on GitHub.
M
-
I've converted this into a discussion and we can continue our conversation here.
-
You can use the Cavanagh tutorial as an example, where there is a
categorical "stim" variable (reflecting three task conditions) -- so
if you just use the same format for Emotion, that should work. Then the
drift rates for each emotion condition will be listed separately in the
model output, except for one, which will be the reference condition (the
intercept). (You can also change which one gets coded as the reference
if you want.)
include=[
    {
        "name": "v",
        "formula": "v ~ 1 + stim + (1 + stim | participant_id)",
        "prior": {
            "Intercept": {"name": "Normal", "mu": 1, "sigma": 2, "initval": 1},
            "stim": {"name": "Normal", "mu": 0, "sigma": 1, "initval": [0.0, 0.0]},
        },
    },
]
Michael J Frank, PhD | Edgar L. Marston Professor
Director, Carney Center for Computational Brain Science
<https://www.brown.edu/carney/ccbs>
Laboratory of Neural Computation and Cognition <https://www.lnccbrown.com/>
Brown University
website <http://ski.clps.brown.edu>
…On Fri, Aug 2, 2024 at 2:33 PM Lipika-T ***@***.***> wrote:
Hi, thank you for your suggestions. In that case, how should I code the
Emotion variable as it is a categorical variable with 3 levels? Currently I
have used the labels 1,2 and 3 to represent 3 different emotion types. And
how would I interpret the parameter estimates for the 3 conditions?
-
That's correct, except:
1) Sometimes it is better for statistical assessment to look directly at
the within-subject differences, e.g. v_stim:Emotion[Positive], as the
posterior distribution of that parameter tells you whether emotion
significantly altered the drift rate relative to the reference condition.
You can of course add the two to get the net drift rate in that condition
if you want; just saying that you don't necessarily need to.
2) In your case I'm not sure it makes sense to have separate terms for
Emotion on their own (since this would just ask whether that
emotion condition induces a drift toward one specific letter, rather than
toward the accurate response). If this makes sense to you theoretically then
you can keep it; otherwise you can change the formula to v ~ 1 + stim +
stim:Emotion -- using : instead of * will make it estimate only the
interaction term.
3) My guess is that you would want to use the neutral condition as the
reference point and look at the differences of positive vs. negative relative
to that. It is all the same once you add them, but it may be easier to
interpret the within-subject differences this way.
lvl = ['Neutral', 'Positive', 'Negative']
then in the model:
model = hssm.HSSM(
    data=data,
    model="ddm",
    include=[
        {
            "name": "v",
            "formula": "v ~ 1 + stim + stim:C(Emotion, levels=lvl)",
            "prior": {
                "Intercept": {"name": "Normal", "mu": 1, "sigma": 2, "initval": 1},
                "stim:C(Emotion, levels=lvl)": {"name": "Normal", "mu": 0, "sigma": 1},
            },
        }
    ],
    extra_namespace={"lvl": lvl},
)
This will tell bambi to set the levels (via the C() call, with lvl made
available through extra_namespace) so that Neutral is the reference condition.
4) It doesn't seem odd to me that t = 0.5, since the leading edge of the RTs is
above that and your threshold and drifts are relatively low/high. One
should check a posterior predictive to see whether the model fits capture the
data. This might also change if you take out the main emotion terms as per
above.
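The coefficient arithmetic in points 1 and 3 can be sketched with made-up numbers (the posterior means below are purely illustrative; the parameter names follow the v ~ 1 + stim + stim:Emotion formula with Neutral as the reference):

```python
# Hypothetical posterior means from the model summary.
coefs = {
    "v_stim": 1.0,                     # drift toward the correct letter (Neutral)
    "v_stim:Emotion[Positive]": 0.5,   # within-subject difference vs. Neutral
    "v_stim:Emotion[Negative]": -0.5,
}

# Net drift toward the correct letter per condition: the reference slope
# plus that condition's interaction term.
net = {
    "Neutral": coefs["v_stim"],
    "Positive": coefs["v_stim"] + coefs["v_stim:Emotion[Positive]"],
    "Negative": coefs["v_stim"] + coefs["v_stim:Emotion[Negative]"],
}
print(net)  # {'Neutral': 1.0, 'Positive': 1.5, 'Negative': 0.5}
```

The inferential point stands, though: the interaction coefficients themselves already carry the within-subject tests, so the sums are only needed for reporting condition-level drift rates.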
M
…On Fri, Aug 9, 2024 at 2:55 AM Lipika-T ***@***.***> wrote:
This seems to work. I have included a stim column with values 1 and -1
indicating the target letter on a given trial. Just to clarify, 'v_stim'
here is the drift rate for the reference emotion for the target letter '1'
(and the negative value for letter '-1') and we add the values of
'v_stim:Emotion[Neutral]' and 'v_stim:Emotion[Positive]' to 'v_stim' to get
the estimates for the other 2 emotions?
include=[
    {
        "name": "v",
        "formula": "v ~ 1 + stim + stim*Emotion",
        "prior": {
            "Intercept": {"name": "Normal", "mu": 1, "sigma": 2, "initval": 1},
            "Emotion": {"name": "Normal", "mu": 0, "sigma": 1, "initval": [0.0, 0.0]},
        },
    },
]
The non decision time estimate still seems large though. I have attached
the RT distributions for the 2 target letters. Is there an issue in
estimating this parameter?
Screenshot.from.2024-08-09.12-17-13.png (view on web)
<https://github.com/user-attachments/assets/825da476-4ba5-4b01-8ca6-19e5cea8fbf0>
Screenshot.from.2024-08-09.12-18-04.png (view on web)
<https://github.com/user-attachments/assets/a3d49a03-618f-4720-be7b-73b0d5aebb4f>
Screenshot.from.2024-08-09.12-19-25.png (view on web)
<https://github.com/user-attachments/assets/dc187383-6f26-447d-9409-404ef428200e>
-
Hi, thank you for the information above, it's really useful. Can I double check: does the inclusion of 'stim' work the same way if I'm interested in exploring an interaction (i.e., between prime and target_shape)? Additionally, if I want to keep the other variables (a and t) fixed, would 'stim' still need to be included? Note, I have tried running stim:prime*target_shape rather than splitting it into (stim:prime + stim:target_shape + stim:prime:target_shape), but this results in: stim:prime, target_shape and stim:prime:target_shape, so I am now avoiding using '*'. This is what the data file looks like, in case it's helpful. The upper and lower boundary are dictated by the trial_type. Thank you! |
-
Describe the bug
I am trying to fit a drift diffusion model to a letter search task with 2 target letters (2-choice). There's a conditioning variable, "Emotion", in the dataset with 3 levels and 48 trials per level, and accuracy in the task is around 95%. I first used correct trials with the 2 target letters as the thresholds, and the drift rates are very small (close to 0) and also often negative with large standard deviations. I then tried using all the trials with "correct" and "incorrect" as the 2 thresholds. The z value in this case is not biased towards the correct threshold. The non-decision time values also seem large given that the RT values range from around 0.54 to 1.4 seconds in this participant.
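On the non-decision time concern: t shifts the whole RT distribution, so it lower-bounds every RT. A toy Euler-Maruyama simulation of a two-bound DDM (not HSSM's actual likelihood; all parameter values here are illustrative) makes this visible:

```python
import numpy as np

def simulate_ddm(v, a, z, t, n=200, dt=1e-3, seed=0):
    """Toy Euler-Maruyama simulation of a two-bound DDM (noise sd = 1)."""
    rng = np.random.default_rng(seed)
    rts = np.empty(n)
    choices = np.empty(n, dtype=int)
    for i in range(n):
        x = z * a        # starting point as a fraction of the threshold
        elapsed = 0.0
        while 0.0 < x < a:
            x += v * dt + rng.normal(0.0, np.sqrt(dt))
            elapsed += dt
        rts[i] = t + elapsed              # non-decision time shifts every RT
        choices[i] = 1 if x >= a else -1
    return rts, choices

rts, choices = simulate_ddm(v=1.0, a=1.5, z=0.5, t=0.5)
# The fastest simulated RT can never fall below t = 0.5 s, so a leading
# edge of the empirical RTs near 0.54 s is consistent with t around 0.5.
print(rts.min() >= 0.5)  # True
```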
HSSM version
'0.2.3'
To Reproduce
simple_ddm_model_reg_v = hssm.HSSM(
    data=df,
    model="ddm",
    hierarchical=False,
    include=[
        {
            "name": "v",
            "formula": "v ~ 0 + (1|Emotion)",
        },
        {"name": "a"},
        {"name": "z"},
        {"name": "t"},
    ],
)
Screenshots
Example dataset and model parameters from one participant.