Thursday, July 6, 2017


Chapter 3.3 Selective Influence

These are my notes for Section 3.3 of Rouder's book.

Section 3 introduced a very interesting concept, selective influence. A few background notes:

  1. Parameter d, the probability of detecting a signal, theoretically depends only on the signal intensity, for example the loudness or quality of the sound. Thus, d should be a bottom-up parameter.
  2. By contrast, parameter g should be a top-down parameter, because the observer guesses whether the signal is present. The conceptualisation can become more complicated, for instance the assertion of attentional capture, which can be interpreted as a bottom-up-driven top-down parameter, but that issue is left for later exploration.
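Under the high-threshold model used below, these two parameters combine into the response probabilities: a hit occurs if the signal is detected (probability d) or, failing that, the observer guesses "signal" (probability (1 - d)g), while a false alarm on a noise trial can only arise from guessing. A minimal sketch (the helper `ht_rates` and the parameter values are illustrative, not from the book):

```r
# Predicted response rates of the high-threshold model
# (ht_rates is an illustrative helper; d and g here are arbitrary values)
ht_rates <- function(d, g) {
  c(hit = d + (1 - d) * g,  # detect, or fail to detect and guess "signal"
    false_alarm = g)        # noise trial: only a guess produces "signal"
}
ht_rates(d = 0.5, g = 0.6)  # hit = 0.80, false_alarm = 0.60
```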

A Hypothetical Example

Two different incentive schemes were given to observers in a signal-detection experiment. The first scheme pays 10 for a hit and 1 for a correct rejection; the second scheme reverses the payoffs. By simple reasoning, observers under the first scheme should have a higher tendency to guess that a signal is present, while observers under the second scheme should be more conservative about saying a signal is present. In terms of the data, the hit and false-alarm counts should be higher under scheme one, and the miss and correct-rejection counts should be higher under scheme two. The fictitious data set below illustrates the idea.

Condition      Hit   Miss   False Alarm   Correct Rejection
Condition 1     40     10            30                  20
Condition 2     15     35             2                  48
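From these counts the observed hit and false-alarm rates can be computed directly (each condition has 50 signal and 50 noise trials). The helper `rates` below is mine, added for illustration:

```r
# counts per condition: hit, miss, false alarm, correct rejection
cond1 <- c(40, 10, 30, 20)
cond2 <- c(15, 35, 2, 48)
rates <- function(y) {
  c(hit_rate = y[[1]] / (y[[1]] + y[[2]]),  # hits / signal trials
    fa_rate  = y[[3]] / (y[[3]] + y[[4]]))  # false alarms / noise trials
}
rates(cond1)  # hit rate 0.80, false-alarm rate 0.60
rates(cond2)  # hit rate 0.30, false-alarm rate 0.04
```

As expected, condition 1 shows the liberal pattern (high hit and false-alarm rates) and condition 2 the conservative one.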

Two questions can be answered by the experiment.

  1. Does the incentive manipulation affect the top-down, guessing parameter (i.e., g)?
  2. Does the manipulation exert no effect on the bottom-up, signal detection parameter (i.e., d)?

Testing Models

Model 1 assumes the data observed in conditions 1 and 2 are generated by two separate models. The index \( i \in \{1, 2\} \) denotes the incentive condition.

Model 1 \[ \begin{array}{l l} Y_{h,i} \sim B(d_i + (1-d_i)g_i,\ N_{s,i}), & Y_{f,i} \sim B(g_i,\ N_{n,i}) \end{array} \]

Model 2 assumes that detection parameters are the same across the conditions.

Model 2 \[ \begin{array}{l l} Y_{h,i} \sim B(d + (1-d)g_i,\ N_{s,i}), & Y_{f,i} \sim B(g_i,\ N_{n,i}) \end{array} \]

Model 3 assumes that guessing probabilities are the same across the conditions.

Model 3 \[ \begin{array}{l l} Y_{h,i} \sim B(d_i + (1-d_i)g,\ N_{s,i}), & Y_{f,i} \sim B(g,\ N_{n,i}) \end{array} \]

Numerical Method

Here y is the data vector \( (y_h, y_m, y_f, y_c) \) and par is the parameter vector \( (d, g) \).

# General model: negative log-likelihood for one condition
nll.condition <- function(par, y) {
  d <- par[1]  # the detection probability
  g <- par[2]  # the guessing probability
  p <- numeric(4)
  p[1] <- d + (1 - d) * g  # p_h
  p[2] <- 1 - p[1]         # p_m
  p[3] <- g                # p_f
  p[4] <- 1 - p[3]         # p_c
  return(-sum(y * log(p))) # return negative log-likelihood
}
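As a quick sanity check, the negative log-likelihood for condition 1 should be smallest at the parameter values implied by the observed rates (d = 0.5, g = 0.6, derived later). The snippet below inlines the same arithmetic so it runs on its own; `nll` is just a local stand-in for `nll.condition`:

```r
# condition-1 counts: hit, miss, false alarm, correct rejection
y <- c(40, 10, 30, 20)
# same computation as nll.condition, written out directly
nll <- function(d, g) {
  p <- c(d + (1 - d) * g, (1 - d) * (1 - g), g, 1 - g)
  -sum(y * log(p))
}
nll(0.5, 0.6) < nll(0.4, 0.5)  # TRUE: (0.5, 0.6) fits condition 1 better
```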

# Model 1: par4 = (d1, g1, d2, g2); y8 = (h1, m1, f1, c1, h2, m2, f2, c2)
nll.m1 <- function(par4, y8) {
  nll.condition(par4[1:2], y8[1:4]) + nll.condition(par4[3:4], y8[5:8])
}

# Model 2: par3 = (d, g1, g2); y8 = (h1, m1, f1, c1, h2, m2, f2, c2)
nll.m2 <- function(par3, y8) {
  nll.condition(par3[1:2], y8[1:4]) + nll.condition(par3[c(1, 3)], y8[5:8])
}

# Model 3: par3 = (d1, d2, g); y8 = (h1, m1, f1, c1, h2, m2, f2, c2)
nll.m3 <- function(par3, y8) {
  nll.condition(par3[c(1, 3)], y8[1:4]) + nll.condition(par3[2:3], y8[5:8])
}

With these functions, fitting the models simplifies to:

# condition1 (h, m, f, c), condition2 (h, m, f, c)
dat <- c(40, 10, 30, 20, 15, 35, 2, 48)

# Model 1
par <- c(0.5, 0.5, 0.5, 0.5) # initial values
mod1 <- optim(par, nll.m1, y8 = dat, hessian = TRUE)
## Warning: NaNs produced (repeated; optim steps outside [0, 1], making log(p) NaN)

# Model 2
par <- c(0.5, 0.5, 0.5) # initial values
mod2 <- optim(par, nll.m2, y8 = dat, hessian = TRUE)
## Warning: NaNs produced (repeated)

# Model 3
par <- c(0.5, 0.5, 0.5) # initial values
mod3 <- optim(par, nll.m3, y8 = dat, hessian = TRUE)
# invert the Hessian; standard errors are the square roots of the diagonal
mod1SE <- sqrt(diag(solve(mod1$hessian)))
mod2SE <- sqrt(diag(solve(mod2$hessian)))
mod3SE <- sqrt(diag(solve(mod3$hessian)))
errbar <- function(x, y, height, width, lty = 1) {
  arrows(x, y, x, y + height, angle = 90, length = width, lty = lty)
  arrows(x, y, x, y - height, angle = 90, length = width, lty = lty)
}

parameters <- mod1$par[c(1, 3, 2, 4)]  # reorder Model 1 estimates to d1, d2, g1, g2
commonD <- mod2$par[1]                 # common d from Model 2
commonG <- mod3$par[3]                 # common g from Model 3
xpos <- barplot(parameters, col = c("#56B4E9", "#009E73"),
                legend.text = c("Condition 1", "Condition 2"),
                args.legend = list(x = "topright"), ylim = c(0, 1),
                ylab = "Parameter Estimates")
cdX <- (xpos[1] + xpos[2]) / 2
cgX <- (xpos[3] + xpos[4]) / 2

axis(1, at = c(cdX, cgX), labels = c("Detection (d)", "Guessing (g)"), las = 1,
     tick = FALSE)
errbar(xpos, parameters, mod1SE[c(1, 3, 2, 4)], 0.3)  # reorder SEs to match the bars

points(cdX, commonD)
points(cgX, commonG)
errbar(cdX, commonD, mod2SE[1], 0.3)
errbar(cgX, commonG, mod3SE[3], 0.3)

[Figure: bar plot of the Model 1 estimates of d and g per condition, with the common d (Model 2) and common g (Model 3) overlaid as points, all with standard-error bars]

# G^2: Model 3 assumed only one g, so g1 = g2. The value of G^2 compares with
# chi^2(1) = 3.84. Reject Model 3, so an identical guessing parameter is
# not supported.
2 * (mod3$value - mod1$value)
## [1] 41.28

# Model 2 assumed only one d, so d1 = d2. The value of G^2 compares with
# chi^2(1) = 3.84. Retain Model 2, so an identical detection parameter is
# tenable.
2 * (mod2$value - mod1$value)
## [1] 1.257
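The 3.84 cutoff is the 95th percentile of the chi-square distribution with one degree of freedom, since each restricted model constrains exactly one parameter relative to Model 1. A quick check with R's built-in chi-square functions:

```r
# critical value for a likelihood-ratio (G^2) test with df = 1
qchisq(0.95, df = 1)                       # approximately 3.84
# p-values for the two observed G^2 statistics
pchisq(41.28, df = 1, lower.tail = FALSE)  # far below 0.05: reject Model 3
pchisq(1.257, df = 1, lower.tail = FALSE)  # above 0.05: retain Model 2
```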
# Model 1: par4 = (d1, g1, d2, g2)
parameters1 <- mod1$par[c(1, 3, 2, 4)]  # reorder to d1, d2, g1, g2
dhat1 <- parameters1[1:2]
ghat1 <- parameters1[3:4]
hitM1 <- dhat1 + (1 - dhat1) * ghat1
FAM1 <- ghat1

# Model 2: par3 = (d, g1, g2)
parameters2 <- mod2$par
dhat2 <- parameters2[1]
ghat2 <- parameters2[2:3]
hitM2 <- dhat2 + (1 - dhat2) * ghat2
FAM2 <- ghat2

tab3.3 <- data.frame(hit.rate = c(hitM1, hitM2),
                     false.alarm.rate = c(FAM1, FAM2))
rownames(tab3.3) <- c("Model 1 Predict C1", "Model 1 Predict C2",
                      "Model 2 Predict C1", "Model 2 Predict C2")
tab3.3
## hit.rate false.alarm.rate
## Model 1 Predict C1 0.8000 0.59993
## Model 1 Predict C2 0.3001 0.04000
## Model 2 Predict C1 0.7490 0.64374
## Model 2 Predict C2 0.3219 0.03748
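Model 1 is saturated (four parameters for four free data rates), so its predictions should reproduce the observed rates exactly; the small discrepancies above are just optim's numerical tolerance. The maximum-likelihood estimates can in fact be written in closed form: g is the observed false-alarm rate, and solving HR = d + (1 - d)g gives d = (HR - g) / (1 - g). A sketch of this check (the helper `ht_mle` is mine):

```r
# Closed-form MLEs of the saturated high-threshold model (Model 1),
# computed directly from the observed rates of one condition
ht_mle <- function(y) {  # y = (hit, miss, false alarm, correct rejection)
  hr <- y[[1]] / (y[[1]] + y[[2]])  # observed hit rate
  g  <- y[[3]] / (y[[3]] + y[[4]])  # MLE of g is the false-alarm rate
  d  <- (hr - g) / (1 - g)          # solve hr = d + (1 - d) * g for d
  c(d = d, g = g)
}
ht_mle(c(40, 10, 30, 20))  # condition 1: d = 0.5, g = 0.6
ht_mle(c(15, 35, 2, 48))   # condition 2: d is about 0.271, g = 0.04
```

These agree with the optim estimates for Model 1 up to numerical tolerance, which is a useful check that the optimisation converged.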