Packages and Helper functions

Packages

rm(list=ls())
suppressPackageStartupMessages(library(ggplot2))
suppressPackageStartupMessages(library(lmerTest))
suppressPackageStartupMessages(library(lme4))
suppressPackageStartupMessages(library(Matrix))
suppressPackageStartupMessages(library(gridExtra))
suppressPackageStartupMessages(library(plyr))
suppressPackageStartupMessages(library(knitr))
suppressPackageStartupMessages(library(irr))
suppressPackageStartupMessages(library(cowplot))
suppressPackageStartupMessages(theme_set(theme_bw()))
opts_chunk$set(fig.width=8, fig.height=5, echo=TRUE, warning=FALSE, message=FALSE, cache=TRUE)
suppressPackageStartupMessages(library("kableExtra"))

Helper Functions

logodds

This function takes a proportion and returns the log odds.

logodds <- function(p){log(p/(1-p))}  

inverse_logodds

This function takes a log odds and returns the equivalent proportion.

inverse_logodds <- function(lo) {exp(lo)/(1+exp(lo)) }  
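
A quick sanity check with illustrative values only (not part of the analysis):

logodds(0.75)                    # log(3) = 1.0986
inverse_logodds(logodds(0.75))   # returns 0.75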

myCenter

This function outputs the centered values of a variable, which can be a numeric variable, a factor, or a data frame. It was taken from Florian Jaeger’s blog - https://hlplab.wordpress.com/2009/04/27/centering-several-variables/.

From his blog:

  • If the input is a numeric variable, the output is the centered variable

  • If the input is a factor, the output is a numeric variable with centered factor level values. That is, the factor’s levels are converted into numerical values in their inherent order (if not specified otherwise, R defaults to alphanumerical order). More specifically, this centers any binary factor so that the value below 0 will be the 1st level of the original factor, and the value above 0 will be the 2nd level.

  • If the input is a data frame or matrix, the output is a new matrix of the same dimension and with the centered values and column names that correspond to the colnames() of the input preceded by “c” (e.g. “Variable1” will be “cVariable1”).

myCenter = function(x) {
  if (is.numeric(x)) { return(x - mean(x, na.rm = T)) }
    if (is.factor(x)) {
        x= as.numeric(x)
        return(x - mean(x, na.rm = T))
    }
    if (is.data.frame(x) || is.matrix(x)) {
        m= matrix(nrow = nrow(x), ncol = ncol(x))
        colnames(m)= paste("c", colnames(x), sep = "")
    
        for (i in 1:ncol(x)) {
        
            m[,i] = myCenter(x[,i])
        }
        return(as.data.frame(m))
    }
}
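
A minimal usage sketch with made-up inputs (not part of the analysis):

myCenter(c(1, 2, 3))                # -1  0  1
myCenter(factor(c("a", "b", "a")))  # factor levels converted to 1, 2, 1 and then centered: approx -0.33, 0.67, -0.33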

lizCenter

This function provides a wrapper around myCenter allowing you to center a specific list of variables from a data frame.

  • x: data frame
  • listfname: a list of the names (as strings) of the variables to be centered (e.g. list("variable1", "variable2"))

The output is a copy of the data frame with a column (always a numeric variable) added for each of the centered variables. These columns are labelled with each column's previous name, but with ".ct" appended (e.g., "variable1" will become "variable1.ct").

lizCenter = function(x, listfname) 
{
    for (i in 1:length(listfname)) 
    {
        fname = as.character(listfname[i])
        x[paste(fname,".ct", sep="")] = myCenter(x[fname])
    }
        
    return(x)
}
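
A minimal usage sketch with a hypothetical data frame (the centered copies are added as new columns):

d = data.frame(aptitude = c(4, 6, 8), condition = factor(c("LV", "HV", "HV")))
d = lizCenter(d, list("aptitude", "condition"))
d$aptitude.ct    # -2  0  2
d$condition.ct   # centered numeric coding of the two factor levels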

lizContrasts

This function can be used to create two centered dummy variables which stand in place of a three-level factor (condition). This allows us to inspect each contrast separately, as well as their interactions with other factors. Other fixed effects in the model can be evaluated as the average effects across all levels of the factor.

The function takes a data frame (d), a factor from that data frame (condition), which must have three levels, and the name of the level of the factor which is to be used as the baseline for the contrasts (baselevel).

For example, if d is a data frame with a factor "condition" with three levels ("lex_skew", "lex_noskew", and "mixed"), then lizContrasts(d, d$condition, "lex_noskew") returns a copy of the data frame with two (numeric) columns added, labelled "lex_noskew_VERSUS_lex_skew" and "lex_noskew_VERSUS_mixed".

Wherever you would normally use "condition" in a formula in an LME, it can be replaced by (lex_noskew_VERSUS_lex_skew + lex_noskew_VERSUS_mixed), e.g. ~ (a * condition) becomes ~ (a * (lex_noskew_VERSUS_lex_skew + lex_noskew_VERSUS_mixed)).

lizContrasts = function(d, condition, baselevel) 
{
  
    condition = factor(condition)
  condition = relevel(condition, baselevel)
    a = contrasts(condition)-apply(contrasts(condition),2,mean)
    d$dummy1[condition == rownames(a)[1]] <- a[1] 
    d$dummy1[condition == rownames(a)[2]] <- a[2] 
    d$dummy1[condition == rownames(a)[3]] <- a[3] 
    d$dummy2[condition == rownames(a)[1]] <- a[4] 
    d$dummy2[condition == rownames(a)[2]] <- a[5] 
    d$dummy2[condition == rownames(a)[3]] <- a[6] 
    name1 = paste(baselevel, rownames(a)[2],sep ="_VERSUS_")
    name2 = paste(baselevel, rownames(a)[3],sep ="_VERSUS_")
    d[name1] = d$dummy1 
    d[name2] = d$dummy2 
    d$dummy1 <-NULL 
    d$dummy2 <-NULL 
    
    return(d)
}
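
A minimal usage sketch with a hypothetical three-level factor (matching the example above):

d = data.frame(condition = factor(rep(c("lex_noskew", "lex_skew", "mixed"), each = 2)))
d = lizContrasts(d, d$condition, "lex_noskew")
head(d)   # two centered columns added: lex_noskew_VERSUS_lex_skew and lex_noskew_VERSUS_mixed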

get_coeffs

This function allows us to inspect particular coefficients from the output of an LME model by putting them in a table.

  • x: the output returned when running lmer or glmer (i.e. an object of type lmerMod or glmerMod)
  • list: a list of the names of the coefficients to be extracted (e.g. c(“variable1”, “variable1:variable2”))
get_coeffs <- function(x,list){(as.data.frame(summary(x)$coefficients)[list,])}
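
A hypothetical usage sketch (here m stands for any fitted lmer/glmer model, so the call is shown commented out):

# get_coeffs(m, c("condition2.ct", "condition2.ct:aptitude.ct"))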

Bf

This function is equivalent to the Dienes (2008) calculator which can be found here: http://www.lifesci.sussex.ac.uk/home/Zoltan_Dienes/inference/Bayes.htm.

The code was provided by Baguley and Kaye (2010) and can be found here: http://www.academia.edu/427288/Review_of_Understanding_psychology_as_a_science_An_introduction_to_scientific_and_statistical_inference

Bf <- function(sd, obtained, uniform, lower = 0, upper = 1, meanoftheory = 0, sdtheory = 1, tail = 1){
  area <- 0
  if(identical(uniform, 1)){
    theta <- lower
    range <- upper - lower
    incr <- range / 2000
    for (A in -1000:1000){
      theta <- theta + incr
      dist_theta <- 1 / range
      height <- dist_theta * dnorm(obtained, theta, sd)
      area <- area + height * incr
    }
  }else
  {theta <- meanoftheory - 5 * sdtheory
  incr <- sdtheory / 200
  for (A in -1000:1000){
    theta <- theta + incr
    dist_theta <- dnorm(theta, meanoftheory, sdtheory)
    if(identical(tail, 1)){
      if (theta <= 0){
        dist_theta <- 0
      } else {
        dist_theta <- dist_theta * 2
      }
    }
    height <- dist_theta * dnorm(obtained, theta, sd)
    area <- area + height * incr
  }
  }
  LikelihoodTheory <- area
  Likelihoodnull <- dnorm(obtained, 0, sd)
  BayesFactor <- LikelihoodTheory / Likelihoodnull
  ret <- list("LikelihoodTheory" = LikelihoodTheory,"Likelihoodnull" = Likelihoodnull, "BayesFactor" = BayesFactor)
  ret
} 
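
As a usage sketch, the values from the Picture Identification analysis below (estimate 0.13, std. error 0.228, H1 a one-tailed half-normal with sd 1.71) can be passed directly; this is how Bf is called by Bf_table:

Bf(sd = 0.228, obtained = 0.13, uniform = 0, meanoftheory = 0, sdtheory = 1.71, tail = 1)$BayesFactor   # approx 0.22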

Bf_table

This works by calling the Bf function above on a dataframe which has the following columns: contrast, std.Error, estimate, h1_sd, tails, and h1_motivation (free text explaining where the h1_sd value comes from).

It computes the BF for each row using std.Error and estimate as the model of the data, and H1 as a half normal with mean = 0, sd = h1_sd and the number of tails given by tails.

The function allows for positive and negative h1_sd values. If h1_sd is negative, since the Bf function requires a positive value for sdtheory, both sdtheory and obtained are multiplied by -1.

#df
Bf_table<-function(df)
{
  Bfs = vector('double')
  estimates = vector('double')
  sterrors = vector('double')
  sdtheorys = vector('double')
  contrasts  = as.character(df$contrast)
  motivation = as.character(df$h1_motivation)
  
  df$tails = as.numeric(df$tails)
  for (i in 1:dim(df)[1]){
    sd_error = df$std.Error[i]
    obtained = df$estimate[i]
    stdtheory = df$h1_sd[i]
    tail = df$tails[i]
    
    # Bf requires a positive sdtheory, so flip the sign of both sdtheory and
    # the obtained estimate when h1_sd is negative
    if(df$h1_sd[i] < 0) {
      stdtheory = df$h1_sd[i] * -1
      obtained = df$estimate[i] * -1
    }
    
    
    Bfs[i] = Bf( sd_error, obtained, uniform = 0, meanoftheory = 0, sdtheory=stdtheory , tail = tail)$BayesFactor
    estimates[i] = obtained
    sdtheorys[i] = stdtheory
    sterrors[i] = sd_error
  }
  
  df2 = data.frame(cbind(contrasts,  round(sterrors, 3), round(estimates,3), round(sdtheorys,3),    motivation, df$tails, round(Bfs,3)    )   )
  colnames(df2) = c("contrast", "std.Error", "estimate", "sdtheory",   "h1 motivation", "tail", "Bf" )
  
  return(df2)
}
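
A minimal sketch of the required input format, using the Picture Identification values from below as a hypothetical one-row table (the h1_motivation text is a placeholder):

toy = data.frame(contrast = "PI, NovelVoice: LV versus HV", std.Error = 0.228, estimate = 0.13,
                 h1_sd = 1.71, h1_motivation = "half of plausible maximum", tails = 1)
Bf_table(toy)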

Bf_range

This works with the Bf function above. It requires the obtained mean and SE for the current sample and works out what the BF would be for a range of predicted means, which are given by sdtheoryrange (with meanoftheory = 0).

Bf_range<-function(sd, obtained, meanoftheory=0, sdtheoryrange, tail=1)
{
  
  x = c(0)
  y = c(0)
  
  for(sdi in sdtheoryrange)
  {
    B = as.numeric(Bf(sd, obtained, meanoftheory = 0, uniform = 0, sdtheory = sdi, tail = tail)[3])
    
    # the calculator does not correctly compute the BF when sdtheory == 0;
    # this ensures that if sdtheory == 0, BF = 1
    if (sdi == 0) {B = 1}
    
    x= append(x,sdi)  
    y= append(y,B)
    output = cbind(x,y)
    
  } 
  output = output[-1,] 
  colnames(output) = c("sdtheory", "BF")
  return(output) 
}
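
A usage sketch with the same illustrative values as above and a hypothetical range of candidate sds for H1:

Bf_range(sd = 0.228, obtained = 0.13, sdtheoryrange = seq(0, 3, length.out = 10), tail = 1)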

addBf_powercalc

This function takes as its input a table of contrasts with the columns std.Error, estimate, h1_sd and tails (i.e. the table that is passed to Bf_table above), the number of participants in the current sample (N) and the maximum number of participants we would consider testing (max). It then works out what the minimal N would be to get a substantial BF, using the principle that the standard error decreases with the square root of the sample size, so the projected standard error for a new sample is the current one multiplied by sqrt(N/new N) (see Dienes video: https://www.youtube.com/watch?v=10Lsm_o_GRg "how many participants might I need"). It returns the table with an additional column saying how many participants would be needed to get substantial evidence for the null or H1, or that we still wouldn't have substantial evidence either way even with the maximum number of participants.

addBf_powercalc <-function(Bf_df, N, max)
{
  Number = vector() 
  for (b in 1:dim(Bf_df)[1]){
    for(newN in N : max){
      
      BF=  as.numeric(Bf(sd=as.numeric(as.character(Bf_df$std.Error[b]))*sqrt(N/newN), obtained=as.numeric(as.character(Bf_df$estimate[b])), meanoftheory=0, uniform = 0, sdtheory=as.numeric(as.character(Bf_df$h1_sd[b])), tail=as.numeric(as.character(Bf_df$tail[b]))))[3]
      
      if(BF>=3) {
        Number[b] = paste ("evidence for H1 with", newN, "participants")
        break
      } else if (BF< (1/3)){
        Number[b] = paste ("evidence for H0 with", newN, "participants")
        break
      } else if(max== newN) { 
        Number[b] = paste ("evidence still ambiguous with", max, "participants")
      }
    }
        
  }
  df2 = cbind(Bf_df, Number)
  colnames(df2)[ncol(df2)]= "N needed"
  return (df2 )
}
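
A usage sketch with a hypothetical one-row input (same columns as the table passed to Bf_table), a current N of 60 and a maximum of 200:

toy = data.frame(contrast = "example", std.Error = 0.228, estimate = 0.13, h1_sd = 1.71, tails = 1)
addBf_powercalc(toy, 60, 200)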
      

addBf_ranges

This function takes as its input a dataframe which is the output of the Bf_table function. Each row provides the values which give the model of the data. It also requires a range of values to test as the sd of H1. The function adds an additional column to the table which writes out the sub-ranges of these values meeting each criterion: strong evidence for the null (BF <= 1/10); substantial evidence for the null (1/10 < BF <= 1/3); ambiguous (1/3 < BF < 3); substantial evidence for H1 (3 <= BF < 10); strong evidence for H1 (BF >= 10).

addBf_ranges <- function(Bf_df, sdtheoryrange)
{
  BFranges = vector()
  for (b in 1:dim(Bf_df)[1]){
    range = Bf_range(sd = as.numeric(as.character(Bf_df$std.Error[b])), obtained = as.numeric(as.character(Bf_df$estimate[b])), meanoftheory = 0, sdtheoryrange = sdtheoryrange, tail = as.numeric(as.character(Bf_df$tail[b])))
    
    from_table = vector()
    to_table = vector()
    cat = vector()
    category_table = vector()
    
    for (i in 1:dim(range)[1]) {       # go through each value in the range and categorise it
      
      # categorise the current BF
      if (range[i,2] <= (1/10)) {
        cat[i] = "strong_null"         # BF <= 1/10
      } else if (range[i,2] <= (1/3)) {
        cat[i] = "subst_null"          # 1/10 < BF <= 1/3
      } else if (range[i,2] < 3) {
        cat[i] = "ambiguous"           # 1/3 < BF < 3
      } else if (range[i,2] >= 10) {
        cat[i] = "strong_h1"           # BF >= 10
      } else {
        cat[i] = "subst_h1"            # 3 <= BF < 10
      }
      
      # adjust the running table of ranges
      j = length(category_table)
      
      if (i == 1){                       # the first value opens the first range
        category_table[j+1] = cat[i]
        from_table[j+1] = range[i,1]
        
      } else if (cat[i] != cat[i-1]) {   # not the first value, but the category changes: close the old range, open a new one
        to_table[j] = range[i-1,1]
        category_table[j+1] = cat[i]
        from_table[j+1] = range[i,1]
      }
      
      if (i == dim(range)[1]){           # the last value closes the final range
        to_table[length(category_table)] = range[i,1]
      }
    }
    
    # turn the little table into a string of ranges
    string = ""
    for (i in 1:length(category_table)){
      string = paste(string, category_table[i], ": From", round(from_table[i],4), "To", round(to_table[i],4), ";")
    }
    
    BFranges[b] = string
  }
  out = cbind(Bf_df, BFranges)
  return(out)
}
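
A usage sketch chaining Bf_table and addBf_ranges on a hypothetical one-row table:

toy = data.frame(contrast = "example", std.Error = 0.228, estimate = 0.13,
                 h1_sd = 1.71, h1_motivation = "", tails = 1)
addBf_ranges(Bf_table(toy), sdtheoryrange = seq(0, 5, length.out = 50))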

Load and set up data sets

#setwd("datafiles")
#getwd()
pcpt <- read.csv("PCPT.csv")
dis <- read.csv("Discrimination.csv")
train <- read.csv("Training.csv")
PI <- read.csv("Picture Identification.csv")
PRO <- read.csv("Production_all.csv")
setwd("..")
#Set up aptitude measure- i.e. pcpt pre-test scores 
pcpt$condition = factor(pcpt$condition, levels = c("0", "1", "2"), labels = c("LV", "HV", "HVB"))
pcpt$session = factor(pcpt$session, levels = c("1", "2"), labels = c("Pre-test", "Post-test"))
ID_pre = with(subset(pcpt, session == "Pre-test"), aggregate(accuracy ~ subject + session + condition, FUN = mean))
ID_pre$session = NULL
ID_pre$aptitude = ID_pre$accuracy *10
# Set up data for PI 
PI$condition = factor(PI$condition, levels = c("0", "1", "2"), labels = c("LV", "HV", "HVB"))
PI$voicetype = factor(PI$voicetype, levels = c("nv1", "tv1"), labels = c("New voice", "Trained voice"))
PI2 = merge(ID_pre, PI)
# setting up production data
PRO$session = factor(PRO$session, levels = c("pretest", "posttest", "nativespeaker", "picturenaming"), labels = c("Pre-test", "Post-test","Native-Speaker", "Picture-Naming"))
# Get rid of participant 48's production data and other unidentifiable trials with quality issues. 
PRO = PRO[!(PRO$subject=="48"),]
PRO = PRO[PRO$tone != 0, ]
PC = subset(PRO, session == "Native-Speaker")
CWR = subset(PRO, session %in% c("Pre-test", "Post-test"))
CPN = subset(PRO, session == "Picture-Naming")
## Set up data for Word Repetition
NWR = CWR
NWR$condition = factor(NWR$condition, levels = c("0", "1","2"), labels = c("LV", "HV", "HVB"))
NWR$wordtype = factor(NWR$wordtype, levels = c("trained", "untrained"), labels = c("Trained", "Untrained"))
NWR2 = merge(NWR, ID_pre)
## Set up data for Picture Naming 
NPN = CPN
NPN$condition = factor(NPN$condition, levels = c("0", "1", "2"), labels = c("LV", "HV", "HVB"))
NPN2 = merge(ID_pre, NPN)
# Set up data for 3IO
dis$condition = factor(dis$condition, levels = c("0", "1", "2"), labels = c("LV", "HV", "HVB"))
dis$session = factor(dis$session, levels = c("1", "2"), labels = c("Pre-test", "Post-test"))
dis$trialtype = dis$voicetype = factor(dis$voicetype, levels = c("fff", "ffm", "fmf"), labels = c("Neutral", "Easy", "Hard"))
dis$wordtype = factor(dis$wordtype, levels = c("newword", "oldword"), labels = c("Untrained Item", "Trained Item"))
dis2 = merge(ID_pre, dis)
# Set up data for Training
train$condition = factor(train$condition, levels = c("0","2","1"), labels = c("LV", "HVB", "HV"))
train2 = merge(ID_pre, train)

Evidence for/against the hypothesis of greater generalization to novel voices/in production after multiple-voice training

Notes

The key goal is to look for evidence for the hypothesis that participants show better generalization to novel voices, or in production, after HV or HVB training than after LV training.

Points:

  1. To have maximum data (maximize the evidence), we look for evidence for greater learning in both HV/HVB than in LV. To get this, we code a binary contrast LV versus HV+HVB (centered). For the post-tests (Picture Identification/Picture Naming) we are interested in the evidence for a main effect of this contrast. For the pre-post tests, we are interested in the interaction between this contrast and session.

  2. In the PI test only untrained-voice test items are relevant, since we don’t predict better performance with HV for trained items.

  3. In the 3IO test and Word Repetition we look at trained and untrained items combined (to maximize power, and because it is all generalization away from the trained voice).

  4. In the production measures - Word Repetition and Picture Naming - we are interested in both tone and pinyin. The latter is relevant given the findings of Barcroft and Sommers that there is more generalization in production after exposure to multiple voices. However, for Word Repetition, since we didn’t see any overall improvement from pre- to post-test, we don’t look at whether there is evidence that participants improve more in one condition than the other.

  5. Following Dienes (2008), we model H1 using a half normal with a mean of 0 and an sd set to be an estimate of H1; this weights smaller effect sizes as more likely. We take our estimates of H1 from elsewhere within the data, wherever they inform the likely size of the effect; the method for each contrast is explained below. We take BF < 1/3 to be substantial evidence for the null and BF > 3 to be substantial evidence for H1; values in between are ambiguous. In addition to computing the BF for a particular value of H1 for each contrast, we also compute a “robustness region” for each Bayes factor, which shows the ranges of values of H1 over which the evidence is ambiguous / substantial for H1 / substantial for H0.

Picture Identification

Run lmer model with just novel voices data and factor condition2 levels LV versus HV+HVB

PI2$condition2 = PI2$condition
PI2$condition2[PI2$condition=="HVB"]= "HV"
PI2.NewVoice = subset (PI2, voicetype == "New voice")
PI2.NewVoice= lizCenter(PI2.NewVoice, list("voicetype", "aptitude", "condition2"))
p_glmer = glmer(score ~  condition2.ct
                + (1|subject)
                , family = "binomial", 
                control = glmerControl(optimizer = "bobyqa"), 
                data = PI2.NewVoice)
kable(round(summary(p_glmer)$coeff,3))
Estimate Std. Error z value Pr(>|z|)
(Intercept) 1.71 0.111 15.364 0.000
condition2.ct 0.13 0.228 0.570 0.569

Model of the data is the estimate and st.error for condition2.ct

Value to inform H1:

Work out a plausible maximum - and use half this as the estimate - using the intercept (grand mean).

  • if “LV” is the logodds of the proportion correct in LV condition
  • “HV” is the logodds of the proportion correct in the HV condition

Assumptions for plausible maximum:
  • minimal performance in LV - i.e. at chance (note chance here is the logodds of 50%, i.e. 0)
  • above-chance performance in HV, meaning that the HV condition drives all of the effect

thus: condition = (HV - chance) - (LV - chance) = HV - 0 = HV

intercept = (HV + LV)/2 = (HV + 0)/2 = HV/2

2 * intercept = HV, so condition = 2 * intercept

since this is an estimate of the maximum value of condition, we halve it to get our estimate to use as the sd of the half normal:

sdtheory = intercept

values for Bayes factor calculation (computed below)

contrast = "PI, NovelVoice: LV versus HV"
std.Error = round(summary(p_glmer)$coeff["condition2.ct", "Std. Error" ],3)
estimate = round(summary(p_glmer)$coeff["condition2.ct", "Estimate" ],3)
h1_sd = round(summary(p_glmer)$coeff["(Intercept)", "Estimate" ],3)
tails = 1
PI_table = data.frame(cbind(contrast, std.Error, estimate, h1_sd,tails))  
kable(PI_table)
contrast std.Error estimate h1_sd tails
PI, NovelVoice: LV versus HV 0.228 0.13 1.71 1

Picture Naming: tone accuracy

Run lmer model with factor condition2 levels LV versus HV+HVB

NPN2$condition2 = NPN2$condition
NPN2$condition2[NPN2$condition=="HVB"]= "HV"
NPN2 = lizCenter(NPN2, list("condition2"))
NPN_glmer = glmer(tone_score ~ condition2.ct
                  + (1|subject)
                  , family = "binomial", 
                  control = glmerControl(optimizer = "bobyqa"), 
                  data = NPN2)
kable(round(summary(NPN_glmer)$coeff,3))
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.023 0.080 -0.284 0.776
condition2.ct -0.225 0.168 -1.341 0.180

Model of the data is the estimate and st.error for condition2.ct.

Value to inform H1:

Work out a plausible maximum - and use half this as the estimate - using the intercept (grand mean).

if “LV” is the logodds of the proportion correct in LV condition and “HV” is the logodds of the proportion correct in HV condition

Assumptions for plausible maximum:
  • chance production of tones in the LV condition - i.e. 1/4 (at chance across the four tones) in logodds space
  • performance above this in HV, meaning that HV drives all of the difference between the intercept and chance

thus: condition = (HV - chance) - (LV - chance) = HV - logodds(1/4)

intercept = (HV + LV)/2

2 * intercept = HV + LV = HV + logodds(1/4)

HV = 2 * intercept - logodds(1/4)

subbing in for HV:

condition = (2 * intercept - logodds(1/4)) - logodds(1/4) = 2*(intercept - logodds(1/4))

since this is an estimate of the maximum value of condition, we halve it to get our estimate to use as the sd of the half normal:

sdtheory = intercept - logodds(1/4)

contrast = "PN: LV versus HV"
std.Error = round(summary(NPN_glmer)$coeff["condition2.ct", "Std. Error" ],3)
estimate = round(summary(NPN_glmer)$coeff["condition2.ct", "Estimate" ],3)
h1_sd = round(summary(NPN_glmer)$coeff["(Intercept)", "Estimate" ]- logodds(1/4),3)
tails = 1
PN_table = data.frame(cbind(contrast, std.Error, estimate, h1_sd,tails))  
kable(PN_table)
contrast std.Error estimate h1_sd tails
PN: LV versus HV 0.168 -0.225 1.076 1

Picture Naming: pinyin accuracy

Run lmer model with factor condition2 levels LV versus HV+HVB

NPN2$condition2 = NPN2$condition
NPN2$condition2[NPN2$condition=="HVB"]= "HV"
#table(NPN2$condition2, NPN2$condition)
NPN2 = lizCenter(NPN2, list("condition2"))
NPN_pinyin_glmer = glmer(pinyin_score ~ condition2.ct
                  + (1|subject)
                  , family = "binomial", 
                  control = glmerControl(optimizer = "bobyqa"), 
                  data = NPN2)
kable(round(summary(NPN_pinyin_glmer)$coeff,3))
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.212 0.093 -2.289 0.022
condition2.ct 0.104 0.196 0.531 0.595

Model of the data is the estimate and st.error for condition2.ct.

Value to inform H1:

Work out a plausible maximum - and use half this as the estimate - using the intercept (grand mean).

if “LV” is the logodds of the proportion correct in LV condition and “HV” is the logodds of the proportion correct in HV condition

Assumptions for plausible maximum:
  • minimal performance in LV - this is actually 0% but we can't compute this in logodds space, so we set it to be equivalent to one correct response (i.e. 1/72) in logodds space
  • performance above this minimum in HV, meaning that HV drives all of the difference between the intercept and the minimum

thus: condition = (HV - minimum) - (LV - minimum) = HV - logodds(1/72)

intercept = (HV + LV)/2

2 * intercept = HV + LV = HV + logodds(1/72)

HV = 2 * intercept - logodds(1/72)

subbing in for HV:

condition = (2 * intercept - logodds(1/72)) - logodds(1/72) = 2*(intercept - logodds(1/72))

since this is an estimate of the maximum value of condition, we halve it to get our estimate to use as the sd of the half normal:

sdtheory = intercept - logodds(1/72)

contrast = "PN, pinyin: LV versus HV"
std.Error = round(summary(NPN_pinyin_glmer)$coeff["condition2.ct", "Std. Error" ],3)
estimate = round(summary(NPN_pinyin_glmer)$coeff["condition2.ct", "Estimate" ],3)
h1_sd = round(summary(NPN_pinyin_glmer)$coeff["(Intercept)", "Estimate" ]- logodds(1/72),3)
tails = 1
PN_pinyin_table = data.frame(cbind(contrast, std.Error, estimate, h1_sd,tails))  
kable(PN_pinyin_table)
contrast std.Error estimate h1_sd tails
PN, pinyin: LV versus HV 0.196 0.104 4.05 1

Word Repetition: tone accuracy

Run lmer model with factor condition2 levels LV versus HV+HVB

NWR2$condition2 = NWR2$condition
NWR2$condition2[NWR2$condition == "HVB"]= "HV"
NWR2 = lizCenter(NWR2, list("wordtype","condition2"))
nw_glmer = glmer(tone_score ~ wordtype.ct * condition2.ct * session
              + (wordtype.ct * session||subject)
              , family="binomial", 
              control = glmerControl(optimizer = "bobyqa"), 
              data = NWR2)
kable(round(summary(nw_glmer)$coeff,3))
Estimate Std. Error z value Pr(>|z|)
(Intercept) 0.958 0.073 13.207 0.000
wordtype.ct -0.020 0.070 -0.288 0.773
condition2.ct 0.060 0.153 0.394 0.693
sessionPost-test 0.395 0.075 5.291 0.000
wordtype.ct:condition2.ct 0.081 0.148 0.546 0.585
wordtype.ct:sessionPost-test 0.132 0.104 1.263 0.207
condition2.ct:sessionPost-test -0.108 0.157 -0.689 0.491
wordtype.ct:condition2.ct:sessionPost-test -0.073 0.220 -0.331 0.741

Model of the data is the estimate and st.error for condition2.ct:sessionPost-test (condition by session)

Value to inform H1:

Work out a plausible maximum - and use half this as the estimate - using the main effect of session (overall difference between pre and post) .

if:
  • “LV.POST” is the logodds of the proportion correct in the LV condition at post-test
  • “LV.PRE” is the logodds of the proportion correct in the LV condition at pre-test
  • “HV.POST” is the logodds of the proportion correct in the HV condition at post-test
  • “HV.PRE” is the logodds of the proportion correct in the HV condition at pre-test

Assumptions for plausible maximum:
  • improvement from pre- to post-test in HV
  • NO improvement from pre- to post-test in LV, i.e. LV.POST = LV.PRE, meaning all of the effect of session comes from the HV condition

thus: condition by session = (HV.POST - HV.PRE) - (LV.POST - LV.PRE) = (HV.POST - HV.PRE)

session = ((HV.POST - HV.PRE) + (LV.POST - LV.PRE))/2 = (HV.POST - HV.PRE)/2

2 * session = (HV.POST - HV.PRE)

condition by session = 2 * session

since this is an estimate of the maximum value of condition, we halve it to get our estimate to use as the sd of the half normal:

sdtheory = session
(which is sessionPost-test in the above model)

Tone_accuracy

contrast =  "Word Repetition, Tone score: LV versus HV by session"
std.Error = round(summary(nw_glmer)$coeff["condition2.ct:sessionPost-test", "Std. Error" ],3)
estimate = round(summary(nw_glmer)$coeff["condition2.ct:sessionPost-test", "Estimate" ],3)
h1_sd = round(summary(nw_glmer)$coeff["sessionPost-test", "Estimate" ],3)
tails = 1
wordrep_table = data.frame(cbind(contrast, std.Error, estimate, h1_sd,tails))  
kable(wordrep_table)
contrast std.Error estimate h1_sd tails
Word Repetition, Tone score: LV versus HV by session 0.157 -0.108 0.395 1

Word Repetition: pinyin accuracy

Run lmer model with factor condition2 levels LV versus HV+HVB

nw_glmer_pinyin = glmer(pinyin_score ~ wordtype.ct * condition2.ct * session
              + (wordtype.ct * session||subject)
              , family="binomial", 
              control = glmerControl(optimizer = "bobyqa"), 
              data = NWR2)
Warning: unable to evaluate scaled gradient; Model failed to converge: degenerate Hessian with 1 negative eigenvalues
kable(round(summary(nw_glmer_pinyin)$coeff,3))
Estimate Std. Error z value Pr(>|z|)
(Intercept) 0.181 0.047 3.871 0.000
wordtype.ct 0.204 0.066 3.067 0.002
condition2.ct -0.019 0.099 -0.190 0.850
sessionPost-test 0.152 0.045 3.366 0.001
wordtype.ct:condition2.ct 0.069 0.140 0.494 0.621
wordtype.ct:sessionPost-test 0.055 0.090 0.614 0.539
condition2.ct:sessionPost-test -0.034 0.095 -0.356 0.722
wordtype.ct:condition2.ct:sessionPost-test -0.013 0.189 -0.070 0.944

Model of the data is the estimate and st.error for condition2.ct:sessionPost-test (condition by session)

Value to inform H1:

Work out a plausible maximum - and use half this as the estimate - using the main effect of session (overall difference between pre and post) .

if:
  • “LV.POST” is the logodds of the proportion correct in the LV condition at post-test
  • “LV.PRE” is the logodds of the proportion correct in the LV condition at pre-test
  • “HV.POST” is the logodds of the proportion correct in the HV condition at post-test
  • “HV.PRE” is the logodds of the proportion correct in the HV condition at pre-test

Assumptions for plausible maximum:
  • improvement from pre- to post-test in HV
  • NO improvement from pre- to post-test in LV, i.e. LV.POST = LV.PRE, meaning all of the effect of session comes from the HV condition

thus: condition by session = (HV.POST - HV.PRE) - (LV.POST - LV.PRE) = (HV.POST - HV.PRE)

session = ((HV.POST - HV.PRE) + (LV.POST - LV.PRE))/2 = (HV.POST - HV.PRE)/2

2 * session = (HV.POST - HV.PRE)

condition by session = 2 * session

since this is an estimate of the maximum value of condition, we halve it to get our estimate to use as the sd of the half normal:

sdtheory = session
(which is sessionPost-test in the above model)

contrast =  "Word Repetition, Pinyin score: LV versus HV by session"
std.Error = round(summary(nw_glmer_pinyin)$coeff["condition2.ct:sessionPost-test", "Std. Error" ],3)
estimate = round(summary(nw_glmer_pinyin)$coeff["condition2.ct:sessionPost-test", "Estimate" ],3)
h1_sd = round(summary(nw_glmer_pinyin)$coeff["sessionPost-test", "Estimate" ],3)
tails = 1
wordrep_table_pinyin = data.frame(cbind(contrast, std.Error, estimate, h1_sd,tails))  
kable(wordrep_table_pinyin)
contrast std.Error estimate h1_sd tails
Word Repetition, Pinyin score: LV versus HV by session 0.095 -0.034 0.152 1

Three-interval Oddity Discrimination Task

Run lmer model with factor condition2 levels LV versus HV+HVB

dis2$condition2 = dis2$condition
dis2$condition2[dis2$condition == "HVB"]= "HV"
dis2 = lizCenter(dis2, list( "wordtype", "aptitude","condition2"))
dis2 = lizContrasts(dis2, dis2$condition, "LV")
dis2 = lizContrasts(dis2, dis2$trialtype, "Neutral")
d_glmer = glmer(score ~ session * wordtype.ct *  condition2.ct
                + (Neutral_VERSUS_Easy + Neutral_VERSUS_Hard): session
                + (Neutral_VERSUS_Easy + Neutral_VERSUS_Hard) 
                + (session * wordtype.ct||subject)
                , family = "binomial", 
                control = glmerControl(optimizer = "bobyqa"), 
                data = dis2)
kable(round(summary(d_glmer)$coeff,3))
Estimate Std. Error z value Pr(>|z|)
(Intercept) 0.396 0.059 6.711 0.000
sessionPost-test 0.310 0.048 6.523 0.000
wordtype.ct -0.314 0.064 -4.951 0.000
condition2.ct 0.061 0.125 0.486 0.627
Neutral_VERSUS_Easy 0.400 0.079 5.091 0.000
Neutral_VERSUS_Hard -0.138 0.077 -1.805 0.071
sessionPost-test:wordtype.ct 0.138 0.091 1.511 0.131
sessionPost-test:condition2.ct -0.001 0.100 -0.014 0.989
wordtype.ct:condition2.ct -0.020 0.134 -0.150 0.881
sessionPost-test:Neutral_VERSUS_Easy -0.270 0.113 -2.395 0.017
sessionPost-test:Neutral_VERSUS_Hard 0.117 0.111 1.056 0.291
sessionPost-test:wordtype.ct:condition2.ct 0.010 0.193 0.050 0.960

Model of the data is the estimate and st.error for sessionPost-test:condition2.ct (condition by session)

Value to inform H1 (note: this is identical to wordrep above):

Work out a plausible maximum - and use half this as the estimate - using the main effect of session (overall difference between pre and post) .

if:
  • “LV.POST” is the logodds of the proportion correct in the LV condition at post-test
  • “LV.PRE” is the logodds of the proportion correct in the LV condition at pre-test
  • “HV.POST” is the logodds of the proportion correct in the HV condition at post-test
  • “HV.PRE” is the logodds of the proportion correct in the HV condition at pre-test

Assumptions for plausible maximum:
  • improvement from pre- to post-test in HV
  • NO improvement from pre- to post-test in LV, i.e. LV.POST = LV.PRE, meaning all of the effect of session comes from the HV condition

thus: condition by session = (HV.POST - HV.PRE) - (LV.POST - LV.PRE) = (HV.POST - HV.PRE)

session = ((HV.POST - HV.PRE) + (LV.POST - LV.PRE))/2 = (HV.POST - HV.PRE)/2

2 * session = (HV.POST - HV.PRE)

condition by session = 2 * session

since this is an estimate of the maximum value of condition, we halve it to get our estimate to use as the sd of the half normal:

sdtheory = session
(which is sessionPost-test in the above model)

contrast = "3 Int. Odd, Tone score: LV by HV by session"
std.Error = round(summary(d_glmer)$coeff["sessionPost-test:condition2.ct", "Std. Error" ],3)
estimate = round(summary(d_glmer)$coeff["sessionPost-test:condition2.ct", "Estimate" ],3)
h1_sd = round(summary(d_glmer)$coeff["sessionPost-test", "Estimate" ],3)
tails = 1
discrim_table = data.frame(cbind(contrast, std.Error, estimate, h1_sd,tails))  
kable(discrim_table)
contrast std.Error estimate h1_sd tails
3 Int. Odd, Tone score: LV by HV by session 0.1 -0.001 0.31 1

compute BFs and ranges

#df = rbind(wordrep_table,discrim_table, wordrep_ap_table, discrim_ap_table)
df= rbind(PI_table, PN_table, PN_pinyin_table, wordrep_table, wordrep_table_pinyin, discrim_table)
df$std.Error = as.numeric(as.character(df$std.Error))
df$estimate = as.numeric(as.character(df$estimate))
df$h1_sd = as.numeric(as.character(df$h1_sd))
df$tails = as.numeric(as.character(df$tails))
df$h1_motivation = ""
df2 = Bf_table(df)
#kable(df2)
#kable(addBf_ranges2(df2))
sdtheoryrange = seq( from = 0, to = 10 , length.out = 100)
addBf_ranges(df2,sdtheoryrange ) %>%
  kable() %>%
  kable_styling()
contrast std.Error estimate sdtheory h1 motivation tail Bf BFranges
PI, NovelVoice: LV versus HV 0.228 0.13 1.71 1 0.219 ambiguous : From 0 To 1.0101 ; subst_null : From 1.1111 To 3.7374 ; strong_null : From 3.8384 To 10 ;
PN: LV versus HV 0.168 -0.225 1.076 1 0.067 ambiguous : From 0 To 0.101 ; subst_null : From 0.202 To 0.7071 ; strong_null : From 0.8081 To 10 ;
PN, pinyin: LV versus HV 0.196 0.104 4.05 1 0.08 ambiguous : From 0 To 0.9091 ; subst_null : From 1.0101 To 3.0303 ; strong_null : From 3.1313 To 10 ;
Word Repetition, Tone score: LV versus HV by session 0.157 -0.108 0.395 1 0.239 ambiguous : From 0 To 0.202 ; subst_null : From 0.303 To 0.9091 ; strong_null : From 1.0101 To 10 ;
Word Repetition, Pinyin score: LV versus HV by session 0.095 -0.034 0.152 1 0.421 ambiguous : From 0 To 0.202 ; subst_null : From 0.303 To 0.7071 ; strong_null : From 0.8081 To 10 ;
3 Int. Odd, Tone score: LV by HV by session 0.1 -0.001 0.31 1 0.303 ambiguous : From 0 To 0.202 ; subst_null : From 0.303 To 0.9091 ; strong_null : From 1.0101 To 10 ;

We have substantial evidence for the null in each test apart from Word Rep pinyin. Note that the robustness regions indicate that we would also have obtained substantial BFs using smaller values of H1 than those we used. Note that for evidence for H0, the maximum scale factor is always infinity.

Evidence for/against hypothesis that participants with higher aptitude benefit more from HV training

Notes:

Points:

  1. Unlike above, we also look at the training data here. This is because in Sadakata & McQueen the interaction was found with trained items, so it isn’t just generalization trials that are relevant.

  2. To have maximum data, for all of the tasks except training we do what we did above and look for evidence for greater learning in both HV/HVB than in LV, i.e. we code a binary contrast LV versus HV+HVB (centered). For training, we saw in the previous analysis that HV and HVB are quite different, so here we look at LV v HV and LV v HVB separately (also note we have many more data points per participant in training).

  3. For the post-tests (PI/PN) we are interested in the evidence for an interaction between the condition contrast and aptitude. For the pre-post tests, we are interested in the interaction between the contrast, aptitude and session. For training we look at the evidence for an interaction between each condition contrast and aptitude. (NOTE: the interaction with session would also be relevant, but the model won’t converge with it included - even if we remove all of the slopes.)

  4. For the PI test, to have maximum power, we look at the combination of trained and untrained voices (note: this is unlike what we did above, where we only looked at untrained voices; however, for the interaction with aptitude, one previous study found evidence with trained voices and one with untrained, so we look across both).

  5. In the 3IO and Word Repetition tests we look at trained and untrained items combined.

  6. In the production measures - WR and PN - we are only interested in tone, since our aptitude measure is relevant for tone.

  7. As above, following Dienes (2008), we model H1 using a half normal with a mean of 0 and an sd set to be an estimate of H1; this weights smaller effect sizes as more likely. We take our estimates of H1 from elsewhere within the data, wherever they inform the likely size of the effect; the method for each contrast is explained below. We take BF < 1/3 to be substantial evidence for the null and BF > 3 to be substantial evidence for H1; values in between are ambiguous. In addition to computing the BF for a particular value of H1 for each contrast, we also compute a “robustness region” for each Bayes factor, which shows the ranges of values of H1 over which the evidence is ambiguous / substantial for H1 / substantial for H0.

Picture Identification

Run lmer model with factor condition2 levels LV versus HV+HVB

PI2$condition2 = PI2$condition
PI2$condition2[PI2$condition=="HVB"]= "HV"
PI2 = lizCenter(PI2, list("voicetype", "aptitude", "condition2"))
p_ap_glmer = glmer(score ~  condition2.ct *aptitude.ct
                + (1|subject)
                , family = "binomial", 
                control = glmerControl(optimizer = "bobyqa"), 
                data = PI2)
kable(round(summary(p_ap_glmer)$coeff,3))
Estimate Std. Error z value Pr(>|z|)
(Intercept) 2.094 0.116 18.087 0.000
condition2.ct -0.166 0.241 -0.690 0.490
aptitude.ct 0.171 0.064 2.699 0.007
condition2.ct:aptitude.ct 0.006 0.127 0.044 0.965

Model of the data is the estimate and st.error for condition2.ct:aptitude.ct (condition by aptitude)

Value to inform H1:

Work out a plausible maximum - and use half this as the estimate - using the main effect of aptitude (aptitude.ct: the overall difference in test performance for a one-step change in aptitude, where one step is 10 percentage points on the PCPT pre-test).

if:
  • “LV.aptitude” is the effect of aptitude in the LV condition
  • “HV.aptitude” is the effect of aptitude in the HV condition

Assumptions for plausible maximum:
  • a positive effect of aptitude in HV
  • NO effect of aptitude in LV (= 0) [note: no reason to think we would get a NEGATIVE effect of aptitude in either condition, based on previous data]

thus: condition by aptitude = HV.aptitude - LV.aptitude = HV.aptitude - 0 = HV.aptitude

aptitude = (HV.aptitude + LV.aptitude)/2 = HV.aptitude/2

2 * aptitude = HV.aptitude

condition by aptitude = 2 * aptitude

since this is an estimate of the maximum value of condition, we halve it to get our estimate to use as the sd of the half normal:

sdtheory = aptitude
(which is aptitude.ct in the above model)

contrast = "PI_ap: LV versus HV by aptitude"
std.Error = round(summary(p_ap_glmer)$coeff["condition2.ct:aptitude.ct", "Std. Error" ],3)
estimate = round(summary(p_ap_glmer)$coeff["condition2.ct:aptitude.ct", "Estimate" ],3)
h1_sd = round(summary(p_ap_glmer)$coeff["aptitude.ct", "Estimate" ],3)
tails = 1
PI_ap_table = data.frame(cbind(contrast, std.Error, estimate, h1_sd,tails))  
kable(PI_ap_table)
contrast std.Error estimate h1_sd tails
PI_ap: LV versus HV by aptitude 0.127 0.006 0.171 1

Picture Naming

Run lmer model with factor condition2 levels LV versus HV+HVB

NPN2$condition2 = NPN2$condition
NPN2$condition2[NPN2$condition=="HVB"]= "HV"
NPN2 = lizCenter(NPN2, list("condition2","aptitude"))
NPN_ap_glmer = glmer(tone_score ~ condition2.ct*aptitude.ct
                  + (1|subject)
                  , family = "binomial", 
                  control = glmerControl(optimizer = "bobyqa"), 
                  data = NPN2)
kable(round(summary(NPN_ap_glmer)$coeff,3))
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.020 0.076 -0.259 0.795
condition2.ct -0.201 0.161 -1.252 0.211
aptitude.ct 0.099 0.042 2.350 0.019
condition2.ct:aptitude.ct 0.042 0.083 0.502 0.616

Model of the data is the estimate and st.error for condition2.ct:aptitude.ct (condition by aptitude)

Value to inform H1:

Work out a plausible maximum - and use half this as the estimate - using the main effect of aptitude (aptitude.ct: the overall difference in test performance for a one-step change in aptitude, where one step is 10 percentage points on the PCPT pre-test).

if:
  • “LV.aptitude” is the effect of aptitude in the LV condition
  • “HV.aptitude” is the effect of aptitude in the HV condition

Assumptions for plausible maximum:
  • a positive effect of aptitude in HV
  • NO effect of aptitude in LV (= 0) [note: no reason to think we would get a NEGATIVE effect of aptitude in either condition, based on previous data]

thus: condition by aptitude = HV.aptitude - LV.aptitude = HV.aptitude - 0 = HV.aptitude

aptitude = (HV.aptitude + LV.aptitude)/2 = HV.aptitude/2

2 * aptitude = HV.aptitude

condition by aptitude = 2 * aptitude

since this is an estimate of the maximum value of condition, we halve it to get our estimate to use as the sd of the half normal:

sdtheory = aptitude
(which is aptitude.ct in the above model)

contrast = "PN_ap: LV versus HV by aptitude"
std.Error = round(summary(NPN_ap_glmer)$coeff["condition2.ct:aptitude.ct", "Std. Error" ],3)
estimate = round(summary(NPN_ap_glmer)$coeff["condition2.ct:aptitude.ct", "Estimate" ],3)
h1_sd = round(summary(NPN_ap_glmer)$coeff["aptitude.ct", "Estimate" ],3)
tails = 1
PN_ap_table = data.frame(cbind(contrast, std.Error, estimate, h1_sd,tails))  
kable(PN_ap_table)
contrast std.Error estimate h1_sd tails
PN_ap: LV versus HV by aptitude 0.083 0.042 0.099 1

Word Repetition

Run lmer model with factor condition2 levels LV versus HV+HVB. Also run a second model which is used just to get the overall effect of aptitude at pre-test.

NWR2$condition2 = NWR2$condition
NWR2$condition2[NWR2$condition == "HVB"]= "HV"
NWR2 = lizCenter(NWR2, list("session", "aptitude",  "wordtype","condition2"))
NWR2 = lizContrasts(NWR2, NWR2$condition, "LV")
nw_glmer_pcpt = glmer(tone_score ~ wordtype.ct * condition2.ct * session.ct 
              + aptitude.ct
              + aptitude.ct : session.ct
              + aptitude.ct : condition2.ct: session.ct 
              + aptitude.ct : condition2.ct: session.ct : wordtype.ct
              + (wordtype.ct * session.ct||subject)
              , family="binomial" , 
              control = glmerControl(optimizer = "bobyqa"), 
              data = NWR2)
kable(round(summary(nw_glmer_pcpt)$coeff,3))
Estimate Std. Error z value Pr(>|z|)
(Intercept) 1.157 0.054 21.389 0.000
wordtype.ct 0.046 0.052 0.880 0.379
condition2.ct 0.027 0.114 0.238 0.812
session.ct 0.415 0.075 5.562 0.000
aptitude.ct 0.073 0.029 2.469 0.014
wordtype.ct:condition2.ct 0.044 0.110 0.402 0.688
wordtype.ct:session.ct 0.129 0.104 1.234 0.217
condition2.ct:session.ct -0.102 0.157 -0.652 0.514
session.ct:aptitude.ct 0.031 0.042 0.754 0.451
wordtype.ct:condition2.ct:session.ct -0.072 0.219 -0.330 0.741
condition2.ct:session.ct:aptitude.ct 0.091 0.082 1.113 0.266
wordtype.ct:condition2.ct:session.ct:aptitude.ct -0.052 0.113 -0.461 0.645
NWR3pre = subset(NWR2, session== "Pre-test")
NWR3pre = lizCenter(NWR3pre, list("session", "aptitude",  "wordtype","condition"))
NWR3pre = lizContrasts(NWR3pre, NWR3pre$condition, "LV")
nw_glmer_pcpt_PRE = glmer(tone_score ~ wordtype.ct * condition2.ct
              + aptitude.ct
                          + aptitude.ct :condition2.ct
              + aptitude.ct : condition2.ct : wordtype.ct
              + (wordtype.ct ||subject)
              , family="binomial" , 
              control = glmerControl(optimizer = "bobyqa"), 
              data = NWR3pre)
kable(round(summary(nw_glmer_pcpt_PRE)$coeff,3))
Estimate Std. Error z value Pr(>|z|)
(Intercept) 0.954 0.071 13.463 0.000
wordtype.ct -0.020 0.070 -0.282 0.778
condition2.ct 0.074 0.150 0.493 0.622
aptitude.ct 0.059 0.039 1.498 0.134
wordtype.ct:condition2.ct 0.081 0.148 0.547 0.584
condition2.ct:aptitude.ct -0.031 0.078 -0.395 0.693
wordtype.ct:condition2.ct:aptitude.ct 0.008 0.077 0.104 0.918

Model of the data is the estimate and st.error for condition2.ct:session.ct:aptitude.ct (session by aptitude by condition)

For our estimate of H1, we need to work out a plausible maximum value of the aptitude effect that we could expect in any cell; we get this from the scale.

We do this as follows, working out the maximum possible value of the aptitude effect (max_aptitude) from the scale: assume that a participant with the minimal value of aptitude is at chance in producing one of the four tones (1/4, in log odds) and that a participant with the maximum value of aptitude is at ceiling in producing the correct tone - i.e. 72/72 (since there are 72 test items), but since we compute this in logodds we use 71/72. Then, to work out the change in tone score for a 1-step change in aptitude, we divide this difference by the range of the aptitude predictor (the maximum value of aptitude for any participant minus the minimal value).

length = max(NWR2$aptitude.ct) - min(NWR2$aptitude.ct)
length
[1] 6.5625
max_aptitude_wr = (logodds(71/72)- logodds(1/4))/length
max_aptitude_wr
[1] 0.8169588

We use this maximum value and the ACTUAL value of aptitude at pre-test to work out a maximum value for the interaction of interest, as follows:

  • “LV.PRE.aptitude” is the effect of aptitude at pre-test in LV
  • “LV.POST.aptitude” is the effect of aptitude at post-test in LV
  • “HV.PRE.aptitude” is the effect of aptitude at pre-test in HV
  • “HV.POST.aptitude” is the effect of aptitude at post-test in HV

  • PRE.aptitude is the overall effect of aptitude at pre-test

  • max_aptitude is the maximum value of the aptitude effect (as computed above)

Assumptions for plausible maximum:
  • both conditions have the same effect of aptitude at pre-test (estimated as PRE.aptitude)
  • in the LV condition, the effect of aptitude is the same at pre- and post-test
  • in the HV condition, the effect of aptitude increases to the maximum amount at post-test (= max_aptitude)

thus: session by aptitude by condition = (HV.POST.aptitude - HV.PRE.aptitude) - (LV.POST.aptitude - LV.PRE.aptitude) = (max_aptitude - PRE.aptitude) - (PRE.aptitude - PRE.aptitude) = max_aptitude - PRE.aptitude

since this is an estimate of the maximum value of the interaction, we halve it to get our estimate to use as the sd of the half normal:
sdtheory = (max_aptitude - PRE.aptitude)/2

contrast =   "Word Repetition, Tone score: aptitude by session by LV versus HV"
std.Error = round(summary(nw_glmer_pcpt)$coeff["condition2.ct:session.ct:aptitude.ct", "Std. Error"],3)
estimate = round(summary(nw_glmer_pcpt)$coeff["condition2.ct:session.ct:aptitude.ct","Estimate" ],3)
h1_sd =  round((max_aptitude_wr - summary(nw_glmer_pcpt_PRE)$coeff["aptitude.ct", "Estimate" ])/2,3)
              
tails = 1
WR_ap_table = data.frame(cbind(contrast, std.Error, estimate, h1_sd, tails))  
kable(WR_ap_table)
contrast std.Error estimate h1_sd tails
Word Repetition, Tone score: aptitude by session by LV versus HV 0.082 0.091 0.379 1

3 Interval oddity

Run lmer model with factor condition2 levels LV versus HV+HVB

dis2$condition2= dis2$condition
dis2$condition2[dis2$condition=="HVB"]= "HV"
dis2 = lizCenter(dis2, list("session", "condition2"))
d_glmer_pcpt = glmer(score ~ session.ct * wordtype.ct * condition2.ct
                     + (Neutral_VERSUS_Easy + Neutral_VERSUS_Hard): session.ct 
                     + (Neutral_VERSUS_Easy + Neutral_VERSUS_Hard) 
                     + aptitude.ct
                     + aptitude.ct: session.ct
                     + aptitude.ct :condition2.ct : session.ct
                     + aptitude.ct : condition2.ct : session.ct : wordtype.ct
                     + (session.ct * wordtype.ct||subject)
                     , family = "binomial", 
                     control = glmerControl(optimizer = "bobyqa"), 
                     data = dis2)
kable(round(summary(d_glmer_pcpt)$coeff,3))
Estimate Std. Error z value Pr(>|z|)
(Intercept) 0.549 0.057 9.685 0.000
session.ct 0.302 0.046 6.590 0.000
wordtype.ct -0.246 0.046 -5.375 0.000
condition2.ct 0.071 0.120 0.595 0.552
Neutral_VERSUS_Easy 0.266 0.056 4.715 0.000
Neutral_VERSUS_Hard -0.080 0.055 -1.449 0.147
aptitude.ct 0.071 0.030 2.354 0.019
session.ct:wordtype.ct 0.141 0.091 1.546 0.122
session.ct:condition2.ct -0.005 0.097 -0.050 0.960
wordtype.ct:condition2.ct -0.015 0.097 -0.154 0.878
session.ct:Neutral_VERSUS_Easy -0.273 0.113 -2.426 0.015
session.ct:Neutral_VERSUS_Hard 0.118 0.111 1.067 0.286
session.ct:aptitude.ct 0.007 0.025 0.292 0.770
session.ct:wordtype.ct:condition2.ct 0.009 0.193 0.045 0.964
session.ct:condition2.ct:aptitude.ct 0.048 0.050 0.969 0.333
session.ct:wordtype.ct:condition2.ct:aptitude.ct -0.003 0.099 -0.034 0.973
dis3pre = subset(dis2, session == c("Pre-test"))
dis3pre = lizCenter(dis3pre, list("session", "condition2"))
d_glmer_pcpt_PRE = glmer(score ~wordtype.ct * condition2.ct
                     + (Neutral_VERSUS_Easy + Neutral_VERSUS_Hard) 
                     + aptitude.ct
                     + aptitude.ct : condition2.ct
                     + aptitude.ct : condition2.ct : wordtype.ct
                     + (wordtype.ct||subject)
                     , family = "binomial", 
                     control = glmerControl(optimizer = "bobyqa"), 
                     data = dis3pre)
kable(round(summary(d_glmer_pcpt_PRE)$coeff,3))
Estimate Std. Error z value Pr(>|z|)
(Intercept) 0.392 0.055 7.072 0.000
wordtype.ct -0.314 0.064 -4.945 0.000
condition2.ct 0.069 0.117 0.591 0.554
Neutral_VERSUS_Easy 0.400 0.079 5.086 0.000
Neutral_VERSUS_Hard -0.138 0.077 -1.807 0.071
aptitude.ct 0.066 0.030 2.213 0.027
wordtype.ct:condition2.ct -0.015 0.135 -0.112 0.911
condition2.ct:aptitude.ct -0.054 0.060 -0.898 0.369
wordtype.ct:condition2.ct:aptitude.ct 0.006 0.068 0.088 0.930

Model of the data is the estimate and st.error for session.ct:condition2.ct:aptitude.ct (session by aptitude by condition)

For our estimate of H1, we need to work out a plausible maximum value of the aptitude effect that we could expect in any cell; we get this from the scale.

We do this as follows, working out the maximum possible value of the aptitude effect (max_aptitude) from the scale: assume that a participant with the minimal value of aptitude is at chance in picking the odd one out (1/3, in log odds) and that a participant with the maximum value of aptitude is at ceiling - i.e. 72/72 (since there are 72 test items), but since we compute this in logodds we use 71/72. Then, to work out the change in score for a 1-step change in aptitude, we divide this difference by the range of the aptitude predictor (the maximum value of aptitude for any participant minus the minimal value).

length = max(dis2$aptitude.ct) - min(dis2$aptitude.ct)
length
[1] 6.5625
max_aptitude_d = (logodds(71/72)- logodds(1/3))/length
max_aptitude_d
[1] 0.7551736

We use this maximum value and the ACTUAL value of aptitude at pre-test to work out a maximum value for the interaction of interest, as follows:

  • “LV.PRE.aptitude” is the effect of aptitude at pre-test in LV
  • “LV.POST.aptitude” is the effect of aptitude at post-test in LV
  • “HV.PRE.aptitude” is the effect of aptitude at pre-test in HV
  • “HV.POST.aptitude” is the effect of aptitude at post-test in HV

  • PRE.aptitude is the overall effect of aptitude at pre-test

  • max_aptitude is the maximum value of the aptitude effect (as computed above)

Assumptions for plausible maximum:
  • both conditions have the same effect of aptitude at pre-test (estimated as PRE.aptitude)
  • in the LV condition, the effect of aptitude is the same at pre- and post-test
  • in the HV condition, the effect of aptitude increases to the maximum amount at post-test (= max_aptitude)

thus: session by aptitude by condition = (HV.POST.aptitude - HV.PRE.aptitude) - (LV.POST.aptitude - LV.PRE.aptitude) = (max_aptitude - PRE.aptitude) - (PRE.aptitude - PRE.aptitude) = max_aptitude - PRE.aptitude

since this is an estimate of the maximum value of the interaction, we halve it to get our estimate to use as the sd of the half normal:
sdtheory = (max_aptitude - PRE.aptitude)/2

contrast =       "3 Int. Odd., Tone score: aptitude by session by LV versus HV"
std.Error = round(summary(d_glmer_pcpt)$coeff["session.ct:condition2.ct:aptitude.ct", "Std. Error" ],3)
estimate =  round(summary(d_glmer_pcpt)$coeff["session.ct:condition2.ct:aptitude.ct","Estimate" ],3)
h1_sd =  round((max_aptitude_d - summary(d_glmer_pcpt_PRE)$coeff["aptitude.ct", "Estimate" ])/2,3)
tails = 1
discrim_ap_table = data.frame(cbind(contrast, std.Error, estimate, h1_sd, tails))  
kable(discrim_ap_table)
contrast std.Error estimate h1_sd tails
3 Int. Odd., Tone score: aptitude by session by LV versus HV 0.05 0.048 0.345 1

Training data

train2 = lizContrasts(train2, train2$condition, "LV")
train2 = lizCenter(train2, list("session","aptitude"))
train2.noHVB = lizCenter(subset(train2, condition != "HVB"), list("session", "condition", "aptitude"))
train2.noHV = lizCenter(subset(train2, condition != "HV"), list("session", "condition", "aptitude"))
t_glmer_pcpt = glmer(score ~ (LV_VERSUS_HV + LV_VERSUS_HVB) * aptitude.ct 
                + (1|subject)
              , family = "binomial", 
              control = glmerControl(optimizer = "bobyqa"), 
              data = train2)
kable(round(summary(t_glmer_pcpt )$coeff,3))
Estimate Std. Error z value Pr(>|z|)
(Intercept) 1.823 0.084 21.615 0.000
LV_VERSUS_HV -1.033 0.187 -5.526 0.000
LV_VERSUS_HVB -0.506 0.196 -2.582 0.010
aptitude.ct 0.129 0.048 2.686 0.007
LV_VERSUS_HV:aptitude.ct -0.037 0.119 -0.309 0.757
LV_VERSUS_HVB:aptitude.ct 0.026 0.101 0.256 0.798

Model of the data is the estimates and st.errors for LV_VERSUS_HV:aptitude.ct and LV_VERSUS_HVB:aptitude.ct (condition by aptitude)

Value to inform H1 (this is the same as for PI above):

Work out a plausible maximum - and use half this as the estimate - using the main effect of aptitude (aptitude.ct: the overall difference in test performance for a one-step change in aptitude, where one step is 10 percentage points on the PCPT pre-test).

if:
  • “LV.aptitude” is the effect of aptitude in the LV condition
  • “HV.aptitude” is the effect of aptitude in the HV condition

Assumptions for plausible maximum:
  • a positive effect of aptitude in HV
  • NO effect of aptitude in LV (= 0) [note: no reason to think we would get a NEGATIVE effect of aptitude in either condition, based on previous data]

thus: condition by aptitude = HV.aptitude - LV.aptitude = HV.aptitude - 0 = HV.aptitude

aptitude = (HV.aptitude + LV.aptitude)/2 = HV.aptitude/2

2 * aptitude = HV.aptitude

condition by aptitude = 2 * aptitude

since this is an estimate of the maximum value of condition, we halve it to get our estimate to use as the sd of the half normal:

sdtheory = aptitude
(which is aptitude.ct in the above model)

contrast =   c("Training: aptitude by LV versus HV-unblocked",
               "Training: aptitude by LV versus HV-blocked")
std.Error = c(round(summary(t_glmer_pcpt)$coeff["LV_VERSUS_HV:aptitude.ct", "Std. Error"],3),
       round(summary(t_glmer_pcpt)$coeff["LV_VERSUS_HVB:aptitude.ct", "Std. Error"],3))      
              
estimate = c(round(summary(t_glmer_pcpt)$coeff["LV_VERSUS_HV:aptitude.ct", "Estimate"],3),
       round(summary(t_glmer_pcpt)$coeff["LV_VERSUS_HVB:aptitude.ct", "Estimate"],3))  
h1_sd =  round(rep(summary(t_glmer_pcpt)$coeff["aptitude.ct","Estimate" ],2),3)
              
tails = c(1,1)
train_ap_table = data.frame(cbind(contrast, std.Error, estimate, h1_sd, tails)) 
kable(train_ap_table)
contrast std.Error estimate h1_sd tails
Training: aptitude by LV versus HV-unblocked 0.119 -0.037 0.129 1
Training: aptitude by LV versus HV-blocked 0.101 0.026 0.129 1

compute BFs and ranges

| contrast | std.Error | estimate | sdtheory | h1 motivation | tail | Bf | BFranges |
|---|---|---|---|---|---|---|---|
| PI_ap: LV versus HV by aptitude | 0.127 | 0.006 | 0.171 | | 1 | 0.617 | ambiguous: From 0 To 0.3535; subst_null: From 0.404 To 1.2626; strong_null: From 1.3131 To 5 |
| PN_ap: LV versus HV by aptitude | 0.083 | 0.042 | 0.099 | | 1 | 0.904 | ambiguous: From 0 To 0.3535; subst_null: From 0.404 To 1.2626; strong_null: From 1.3131 To 5 |
| 3 Int. Odd., Tone score: aptitude by session by LV versus HV | 0.05 | 0.048 | 0.345 | | 1 | 0.371 | ambiguous: From 0 To 0.3535; subst_null: From 0.404 To 1.2626; strong_null: From 1.3131 To 5 |
| Word Repetition, Tone score: aptitude by session by LV versus HV | 0.082 | 0.091 | 0.379 | | 1 | 0.654 | ambiguous: From 0 To 0.7576; subst_null: From 0.8081 To 2.5758; strong_null: From 2.6263 To 5 |
| Training: aptitude by LV versus HV-unblocked | 0.119 | -0.037 | 0.129 | | 1 | 0.572 | ambiguous: From 0 To 0.2525; subst_null: From 0.303 To 0.9091; strong_null: From 0.9596 To 5 |
| Training: aptitude by LV versus HV-blocked | 0.101 | 0.026 | 0.129 | | 1 | 0.732 | ambiguous: From 0 To 0.3535; subst_null: From 0.404 To 1.2626; strong_null: From 1.3131 To 5 |

estimate sample increase

Above we have seen that we do not have substantial evidence for the null (or H1) with our current sample. We use a function to see how many participants we might need to see substantial evidence for the null, assuming that the standard error scales with 1/sqrt(N).


# note rows 5-6 of df are the training data, where the contrasts are not based on the full sample, so the N is smaller (40 rather than 60)
addBf_powercalc(df[1:4,], 60, 500)%>%
  kable() %>%
  kable_styling()

addBf_powercalc(df[5:6,], 40, 500)%>%
  kable() %>%
  kable_styling()
---
title: "Bayesian analysis supplement script"
output:
  html_notebook:
    toc: yes
    toc_depth: 4
  html_document:
    df_print: paged
    toc: yes
    toc_depth: '4'
---

# Packages and Helper functions

## Packages

```{r}
rm(list=ls())


suppressPackageStartupMessages(library(ggplot2))
suppressPackageStartupMessages(library(lmerTest))
suppressPackageStartupMessages(library(lme4))
suppressPackageStartupMessages(library(Matrix))
suppressPackageStartupMessages(library(gridExtra))
suppressPackageStartupMessages(library(plyr))
suppressPackageStartupMessages(library(knitr))
suppressPackageStartupMessages(library(irr))
suppressPackageStartupMessages(library(cowplot))
suppressPackageStartupMessages(theme_set(theme_bw()))
opts_chunk$set(fig.width=8, fig.height=5, echo=TRUE, warning=FALSE, message=FALSE, cache=TRUE)

suppressPackageStartupMessages(library("kableExtra"))

```

## Helper Functions
### logodds

This function takes a proportion and returns the log odds. 

```{r}
logodds <- function(p){log(p/(1-p))}  
```

### inverse_logodds

This function takes a log odds and returns the equivalent proportion.
```{r}
inverse_logodds <- function(lo) {exp(lo)/(1+exp(lo)) }  
```
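
As a quick illustration (not part of the analyses below), the two functions are inverses of one another:

```{r}
# illustration only: a proportion of .75 corresponds to a log odds of ~1.099
logodds(0.75)
inverse_logodds(logodds(0.75))
```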

### myCenter

This function outputs the centered values of a variable, which can be a numeric variable, a factor, or a data frame. It was taken from Florian Jaeger's blog - https://hlplab.wordpress.com/2009/04/27/centering-several-variables/.

From his blog:

- If the input is a numeric variable, the output is the centered variable

- If the input is a factor, the output is a numeric variable with centered factor level values. That is, the factor's levels are converted into numerical values in their inherent order (if not specified otherwise,  R defaults to alphanumerical order). More specifically, this centers any binary factor so that the value below 0 will be the 1st level of the original factor, and the value above 0 will be the 2nd level.

- If the input is a data frame or matrix, the output is a new matrix of the same dimension and with the centered values and column names that correspond to the colnames() of the input preceded by "c" (e.g. "Variable1" will be "cVariable1").

```{r}
myCenter = function(x) {
  if (is.numeric(x)) { return(x - mean(x, na.rm = T)) }
	if (is.factor(x)) {
		x= as.numeric(x)
		return(x - mean(x, na.rm = T))
	}
	if (is.data.frame(x) || is.matrix(x)) {
		m= matrix(nrow = nrow(x), ncol = ncol(x))
		colnames(m)= paste("c", colnames(x), sep = "")
	
		for (i in 1:ncol(x)) {
		
			m[,i] = myCenter(x[,i])
		}
		return(as.data.frame(m))
	}
}
```

### lizCenter

This function provides a wrapper around myCenter allowing you to center a specific list of variables from a data frame. 

- x: data frame
- listfname: a list of the variables to be centered (e.g. list(variable1, variable2))

The output is a copy of the data frame with a column (always a numeric variable) added for each of the centered variables. These columns are labelled with the each column's previous name, but with ".ct" appended (e.g., "variable1" will become "variable1.ct").


```{r}
lizCenter = function(x, listfname) 
{
	for (i in 1:length(listfname)) 
	{
		fname = as.character(listfname[i])
		x[paste(fname,".ct", sep="")] = myCenter(x[fname])
	}
		
	return(x)
}
```	
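
For illustration, here is a minimal made-up example (the data frame and variable names are invented and are not used in the analyses below):

```{r}
# made-up example: centre a binary factor with lizCenter
toy = data.frame(score = c(1, 0, 1, 1),
                 group = factor(c("a", "a", "b", "b")))
toy = lizCenter(toy, list("group"))
toy$group.ct  # centered numeric version of the factor: -0.5 -0.5 0.5 0.5
```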

### lizContrasts

This function can be used to create two centered dummy variables which stand in place of a three-way factor (condition). This allows us to inspect each contrast separately, as well as their interactions with other factors. Other fixed effects in the model can be evaluated as the average effects across all levels of the factor.

The function takes a data frame (d), a factor from that database (condition), which must have three levels, and the name of the level of the factor which is to be used as the baseline for the contrasts (base level).

For example, if d is a data frame with a factor "condition" with three levels ("lex_skew", "lex_noskew", and "mixed"), then lizContrasts(d, d$condition, "lex_noskew") returns a data frame with two (numeric) columns added, labelled "lex_noskew_VERSUS_lex_skew" and "lex_noskew_VERSUS_mixed". 

Wherever you would normally use "condition" in a formula in an LME, it can be replaced by (lex_noskew_VERSUS_lex_skew + lex_noskew_VERSUS_mixed), e.g. ~ (a * condition) becomes ~ (a * (lex_noskew_VERSUS_lex_skew + lex_noskew_VERSUS_mixed)).

```{r}
lizContrasts = function(d, condition, baselevel) 
{
  
	condition = factor(condition)
  condition = relevel(condition, baselevel)

	a = contrasts(condition)-apply(contrasts(condition),2,mean)
	d$dummy1[condition == rownames(a)[1]] <- a[1] 
	d$dummy1[condition == rownames(a)[2]] <- a[2] 
	d$dummy1[condition == rownames(a)[3]] <- a[3] 
	d$dummy2[condition == rownames(a)[1]] <- a[4] 
	d$dummy2[condition == rownames(a)[2]] <- a[5] 
	d$dummy2[condition == rownames(a)[3]] <- a[6] 

	name1 = paste(baselevel, rownames(a)[2],sep ="_VERSUS_")
	name2 = paste(baselevel, rownames(a)[3],sep ="_VERSUS_")

	d[name1] = d$dummy1	
	d[name2] = d$dummy2	

	d$dummy1 <-NULL	
	d$dummy2 <-NULL	
	
	return(d)
}
```  
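
For illustration, here is a minimal made-up example with a three-level condition factor (the data frame is invented and not used in the analyses below):

```{r}
# made-up example: three conditions, two rows each
toy = data.frame(subject = 1:6,
                 condition = factor(rep(c("LV", "HV", "HVB"), each = 2)))
toy = lizContrasts(toy, toy$condition, "LV")
# this adds the centered columns LV_VERSUS_HV and LV_VERSUS_HVB, which can stand
# in place of "condition" in a model formula, e.g. ~ (LV_VERSUS_HV + LV_VERSUS_HVB) * a
head(toy)
```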


### get_coeffs

This function allows us to inspect particular coefficients from the output of an LME model by putting them in a table.

- x: the output returned when running lmer or glmer (i.e. an object of type lmerMod or glmerMod)
- list: a list of the names of the coefficients to be extracted (e.g. c("variable1", "variable1:variable2")) 

```{r}
get_coeffs <- function(x,list){(as.data.frame(summary(x)$coefficients)[list,])}
``` 

### Bf

This function is equivalent to the Dienes (2008) calculator which can be found here: http://www.lifesci.sussex.ac.uk/home/Zoltan_Dienes/inference/Bayes.htm.

The code was provided by Baguley and Kaye (2010) and can be found here: http://www.academia.edu/427288/Review_of_Understanding_psychology_as_a_science_An_introduction_to_scientific_and_statistical_inference

```{r}
Bf <- function(sd, obtained, uniform, lower = 0, upper = 1, meanoftheory = 0, sdtheory = 1, tail = 1){
  area <- 0
  if(identical(uniform, 1)){
    theta <- lower
    range <- upper - lower
    incr <- range / 2000
    for (A in -1000:1000){
      theta <- theta + incr
      dist_theta <- 1 / range
      height <- dist_theta * dnorm(obtained, theta, sd)
      area <- area + height * incr
    }
  }else
  {theta <- meanoftheory - 5 * sdtheory
  incr <- sdtheory / 200
  for (A in -1000:1000){
    theta <- theta + incr
    dist_theta <- dnorm(theta, meanoftheory, sdtheory)
    if(identical(tail, 1)){
      if (theta <= 0){
        dist_theta <- 0
      } else {
        dist_theta <- dist_theta * 2
      }
    }
    height <- dist_theta * dnorm(obtained, theta, sd)
    area <- area + height * incr
  }
  }
  LikelihoodTheory <- area
  Likelihoodnull <- dnorm(obtained, 0, sd)
  BayesFactor <- LikelihoodTheory / Likelihoodnull
  ret <- list("LikelihoodTheory" = LikelihoodTheory,"Likelihoodnull" = Likelihoodnull, "BayesFactor" = BayesFactor)
  ret
} 
```
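
As an illustration of the calling convention (the numbers here are invented, not taken from the analyses below): suppose the model of the data is an estimate of 0.2 with a standard error of 0.1, and H1 is modelled as a half normal with mean 0 and sd 0.3:

```{r}
# invented values, purely to illustrate the arguments to Bf()
Bf(sd = 0.1, obtained = 0.2, uniform = 0, meanoftheory = 0, sdtheory = 0.3, tail = 1)$BayesFactor
```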



### Bf_table

This works by calling the Bf function above on a dataframe which has the following columns:

- contrast
- std.Error
- estimate
- h1_sd
- tails
- h1_motivation (some text)

It computes the BF for each row using std.Error and estimate as the model of the data, and h1 as a half normal with mean = 0 and sd = h1_sd and tails = tails.

The function allows for positive and negative h1_sd values. Since the Bf function requires a positive value for sdtheory, if h1_sd is negative both h1_sd and the obtained estimate are multiplied by -1.


```{r}
#df
Bf_table<-function(df)
{
  Bfs = vector('double')
  estimates = vector('double')
  sterrors = vector('double')
  sdtheorys = vector('double')
  contrasts  = as.character(df$contrast)
  motivation = as.character(df$h1_motivation)
  
  df$tails = as.numeric(df$tails)
  for (i in 1:dim(df)[1]){
    #i=5
    sd_error = df$std.Error[i]
    obtained = df$estimate[i]
    stdtheory= df$h1_sd[i]
    tail = df$tails[i]
    
    if(df$h1_sd[i]<0) {
      stdtheory = df$h1_sd[i]*-1
      obtained = (df$estimate[i]*-1)
    }
    
    
    Bfs[i] = Bf( sd_error, obtained, uniform = 0, meanoftheory = 0, sdtheory=stdtheory , tail = tail)$BayesFactor
    estimates[i] = obtained
    sdtheorys[i] = stdtheory
    sterrors[i] = sd_error
  }
  
  df2 = data.frame(cbind(contrasts,  round(sterrors, 3), round(estimates,3), round(sdtheorys,3),    motivation, df$tails, round(Bfs,3)    )   )
  colnames(df2) = c("contrast", "std.Error", "estimate", "sdtheory",   "h1 motivation", "tail", "Bf" )
  
  return(df2)
  
  
}
```



### Bf_range

This works with the Bf function above. It requires the obtained mean and SE for the current sample and works out what the BF would be for a range of values of the sd of H1 (given by sdtheoryrange, with meanoftheory = 0).
```{r}
Bf_range<-function(sd, obtained, meanoftheory=0, sdtheoryrange, tail=1)
{
  
  x = c(0)
  y = c(0)
  
  for(sdi in sdtheoryrange)
  {
    #sdi = sdtheoryrange[1]
    B = as.numeric(Bf(sd, obtained, meanoftheory = 0, uniform = 0, sdtheory = sdi, tail = tail)[3])  # pass tail by name so it is not positionally matched to "lower"
    
    #following line corrects for the fact that the calculator does not correctly compute BF when sdtheory==0; this code ensures that if sdtheory ==0, BF=1
    
    if (sdi ==0 ) {B=1}
    
    x= append(x,sdi)  
    y= append(y,B)
    output = cbind(x,y)
    
  } 
  output = output[-1,] 
  colnames(output) = c("sdtheory", "BF")
  return(output) 
}
```
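
For illustration (again with invented numbers), this scans a small set of candidate sds for H1 and returns the BF for each:

```{r}
# invented values, purely to illustrate the output format (columns sdtheory and BF)
Bf_range(sd = 0.1, obtained = 0.2, sdtheoryrange = seq(0, 1, length.out = 5), tail = 1)
```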



### addBf_powercalc

This function takes as its input the output of the Bf_table function above (Bf_df), the number of participants in the current sample (N) and the maximum number of participants we would consider testing (max). It then works out what the minimal N would be to get a substantial BF, using the principle that the standard error scales with the square root of the sample size - i.e. for a candidate sample size newN, the standard error is taken to be the current standard error multiplied by sqrt(N/newN) (see Dienes video: https://www.youtube.com/watch?v=10Lsm_o_GRg "how many participants might I need"). It returns the table with an additional column saying how many participants would be needed to get substantial evidence for the null or H1, or noting that we still wouldn't have substantial evidence either way even with the maximum number of participants.

```{r}
addBf_powercalc <-function(Bf_df, N, max)
{
  Number = vector() 
  for (b in 1:dim(Bf_df)[1]){
    for(newN in N : max){
      
      BF=  as.numeric(Bf(sd=as.numeric(as.character(Bf_df$std.Error[b]))*sqrt(N/newN), obtained=as.numeric(as.character(Bf_df$estimate[b])), meanoftheory=0, uniform = 0, sdtheory=as.numeric(as.character(Bf_df$h1_sd[b])), tail=as.numeric(as.character(Bf_df$tail[b]))))[3]
      
      if(BF>=3) {
        Number[b] = paste ("evidence for H1 with", newN, "participants")
        break
      } else if (BF< (1/3)){
        Number[b] = paste ("evidence for H0 with", newN, "participants")
        break
      } else if(max== newN) { 
        Number[b] = paste ("evidence still ambiguous with", max, "participants")
      }
    }
        
  }
  df2 = cbind(Bf_df, Number)
  colnames(df2)[ncol(df2)]= "N needed"
  return (df2 )
}
      


```
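
The scaling assumption used inside this function can be illustrated directly (with made-up numbers): if the current standard error was obtained from N participants, the standard error expected for a larger sample of newN participants is the current standard error multiplied by sqrt(N/newN).

```{r}
# made-up illustration of the scaling assumption: an SE of 0.12 observed with
# N = 40 is projected to halve if the sample is quadrupled to newN = 160
current_se = 0.12
N = 40
newN = 160
current_se * sqrt(N / newN)
```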




### addBf_ranges

This function takes as its input a dataframe which is the output of the Bf_table function. Each row provides the values which give the model of the data. It also requires a range of values to test as the sd of H1 (sdtheoryrange). The function adds an additional column to the table which writes out the ranges of sd values (within sdtheoryrange) for which the BF meets each of the following criteria: (i) strong evidence for null (BF <= 1/10); (ii) substantial evidence for null (1/10 < BF <= 1/3); (iii) ambiguous (1/3 < BF < 3); (iv) substantial evidence for H1 (3 <= BF < 10); (v) strong evidence for H1 (BF >= 10).


```{r}
addBf_ranges <-function(Bf_df, sdtheoryrange)
{
  
  BFranges = vector()  
  for (b in 1:dim(Bf_df)[1]){
    #b=1
  range = Bf_range(sd=as.numeric(as.character(Bf_df$std.Error[b])), obtained=as.numeric(as.character(Bf_df$estimate[b])), meanoftheory=0, sdtheoryrange=sdtheoryrange, tail=as.numeric(as.character(Bf_df$tail[b])))
  
  from_table = vector()
  to_table = vector()
  cat = vector()
  category_table = vector()
  
  for(i in 1:dim(range)[1]) {       # go through each value in the range and categorize it
  
  #range = read.csv("range.csv")
  #i=1
  
  #categorize current BF 
  if (range[i,2] <= (1/10)) { 
  cat[i] = "strong_null"      ## NOT below or equal to 1/10
  } else if (range[i,2] <= (1/3)) {
  cat[i] = "subst_null" ## NOT below or equal to 1/10, IS below or equal to 1/3 
  } else if (range[i,2] < 3) { ## NOT below or equal to 1/10, NOT below or equal to 1/3, IS below 3 
  cat[i] = "ambiguous"
  } else if (range[i,2] >= 10) {## NOT below or equal to 1/10, NOT below or equal to 1/3, NOT below 3, IS above or equal to 10 
  cat[i] = "strong_h1"
  } else {                ## NOT below or equal to 1/10, NOT below or equal to 1/3, NOT below 3, NOT above or equal to 10 
  cat[i] = "subst_h1"
  }
  
  # adjust the table
  j = length(category_table) 
  
  if (i==1){                      # first one
  category_table[j+1] = cat[i]   
  from_table[j+1] = range[i,1]
  
  } else if (cat[i] != cat[i-1]) { # NOT the first one, IS one where need to start new range 
  to_table[j] = range[i-1,1]
  category_table[j+1] = cat[i]
  from_table[j+1] = range[i,1]
  } 
  
  if (i==dim(range)[1]){        # if its the last one, finish off the table
  to_table[j] = range[i,1]
  }
  }
  
  
  # go through the little table and turn it into a string of ranges  
  string = ""
  for(i in 1: length(category_table)){
  string = paste(string, category_table[i], ": From" , round(from_table[i],4) , "To" , round(to_table[i],4) , ";")
  }
  
  BFranges[b] = string
  }
  out = cbind(Bf_df, BFranges)
  return(out)
}


```








# Load and set up data sets

```{r}
#setwd("datafiles")
#getwd()
pcpt <- read.csv("PCPT.csv")
dis <- read.csv("Discrimination.csv")
train <- read.csv("Training.csv")
PI <- read.csv("Picture Identification.csv")
PRO <- read.csv("Production_all.csv")
setwd("..")

#Set up aptitude measure- i.e. pcpt pre-test scores 
pcpt$condition = factor(pcpt$condition, levels = c("0", "1", "2"), labels = c("LV", "HV", "HVB"))
pcpt$session = factor(pcpt$session, levels = c("1", "2"), labels = c("Pre-test", "Post-test"))
ID_pre = with(subset(pcpt, session == "Pre-test"), aggregate(accuracy ~ subject + session + condition, FUN = mean))
ID_pre$session = NULL
ID_pre$aptitude = ID_pre$accuracy *10



# Set up data for PI 
PI$condition = factor(PI$condition, levels = c("0", "1", "2"), labels = c("LV", "HV", "HVB"))
PI$voicetype = factor(PI$voicetype, levels = c("nv1", "tv1"), labels = c("New voice", "Trained voice"))
PI2 = merge(ID_pre, PI)

# setting up production data
PRO$session = factor(PRO$session, levels = c("pretest", "posttest", "nativespeaker", "picturenaming"), labels = c("Pre-test", "Post-test","Native-Speaker", "Picture-Naming"))

#Get rid of participant 48's production data and other unidentifiable trials with quality issues. 
PRO = PRO[!(PRO$subject=="48"),]
PRO = PRO[PRO$tone != 0, ]
PC = subset(PRO, session == "Native-Speaker")
CWR = subset(PRO, session %in% c("Pre-test", "Post-test"))
CPN = subset(PRO, session == "Picture-Naming")

## Set up data for Word Repetition
NWR = CWR
NWR$condition = factor(NWR$condition, levels = c("0", "1","2"), labels = c("LV", "HV", "HVB"))
NWR$wordtype = factor(NWR$wordtype, levels = c("trained", "untrained"), labels = c("Trained", "Untrained"))
NWR2 = merge(NWR, ID_pre)

## Set up data for Picture Naming 
NPN = CPN
NPN$condition = factor(NPN$condition, levels = c("0", "1", "2"), labels = c("LV", "HV", "HVB"))
NPN2 = merge(ID_pre, NPN)


# Set up data for 3IO
dis$condition = factor(dis$condition, levels = c("0", "1", "2"), labels = c("LV", "HV", "HVB"))
dis$session = factor(dis$session, levels = c("1", "2"), labels = c("Pre-test", "Post-test"))
dis$trialtype = dis$voicetype = factor(dis$voicetype, levels = c("fff", "ffm", "fmf"), labels = c("Neutral", "Easy", "Hard"))
dis$wordtype = factor(dis$wordtype, levels = c("newword", "oldword"), labels = c("Untrained Item", "Trained Item"))
dis2 = merge(ID_pre, dis)

#Set up data for Training
train$condition = factor(train$condition, levels = c("0","2","1"), labels = c("LV", "HVB", "HV"))
train2 = merge(ID_pre, train)
```







# Evidence for/against hypothesis of greater generalization to novel voices/ in production after multiple voice training
## Notes

Key goal is to look for evidence for the hypothesis that participants show better generalization to novel voices, or in production, after HV or HVB training than after LV training.

Points:
1) To have maximum data (maximize the evidence), we look for evidence for greater learning in both HV/HVB than in LV. To get this, we code a binary contrast LV versus HV+HVB (centered). For the post-tests (Picture ID/Picture Naming) we are interested in the evidence for a main effect of this contrast. For the pre-post tests, we are interested in the interaction between this contrast and session.
2) In PI test only untrained voice test items are relevant since we don't predict better performance with HV for trained items.
3) In 3IO test and Word Repetition: we look at trained and untrained items combined (to maximize power and because it's all generalization away from trained *voice*)
4) In the production measures - Word Rep and Picture Naming - we are interested in both tone and pinyin. The latter is relevant given the findings of Barcroft and Sommers that there is more generalization in production after exposure to multiple voices. However for Word Rep, since we didn't see any overall improvement from pre- to post-, we don't look at whether there is evidence that they improve more in one condition than the other.
5) Following Dienes 2008, we model H1 using a half normal with a mean of 0 and an sd set to be an estimate of H1. This biases towards smaller values. We take our estimates of H1 using estimates from elsewhere within the data which inform the likely size of the effect. The method for informing each one is explained below. We take BF < 1/3 to be substantial evidence for the null and BF > 3 to be substantial evidence for H1; values in between are ambiguous. In addition to computing the BF for a particular value of H1 for each contrast, we also compute a "robustness region" for each Bayes factor which shows the range of values of H1 over which we have ambiguous / substantial evidence for H1 / substantial evidence for H0.

## Picture Identification 

Run glmer model with just the novel-voices data and factor condition2 with levels LV versus HV+HVB

```{r}
PI2$condition2 = PI2$condition
PI2$condition2[PI2$condition=="HVB"]= "HV"
PI2.NewVoice = subset (PI2, voicetype == "New voice")
PI2.NewVoice= lizCenter(PI2.NewVoice, list("voicetype", "aptitude", "condition2"))

p_glmer = glmer(score ~  condition2.ct
                + (1|subject)
                , family = "binomial", 
                control = glmerControl(optimizer = "bobyqa"), 
                data = PI2.NewVoice)
kable(round(summary(p_glmer)$coeff,3))
```

Model of the data is the estimate and st.error for condition2.ct

Value to inform H1:

Work out a plausible maximum - and use half this as the estimate - using the intercept (grand mean).

- if "LV" is the logodds of the proportion correct in LV condition   
- "HV" is the logodds of the proportion correct in HV condtion

Assumptions for plausible maximum:   
- minimal performance in LV - i.e. at chance (note chance here is logodds of 50%, i.e. 0)
- above chance in HV, meaning that post-test drives all of the effect

thus:
condition
= (HV - chance) - (LV - chance)
= HV - chance - 0
= HV - 0
= HV

intercept
= (HV+LV)/2
= (HV+0)/2
= HV/2

2 * intercept = HV
condition =  2 *  intercept

since this is an estimate of the maximum value of condition, we halve it to get our estimate to use as the sd of the half normal:  

sdtheory = intercept  

values for Bayes factor calculation (computed below)

```{r}
contrast = "PI, NovelVoice: LV versus HV"
std.Error = round(summary(p_glmer)$coeff["condition2.ct", "Std. Error" ],3)
estimate = round(summary(p_glmer)$coeff["condition2.ct", "Estimate" ],3)
h1_sd = round(summary(p_glmer)$coeff["(Intercept)", "Estimate" ],3)
tails = 1
PI_table = data.frame(cbind(contrast, std.Error, estimate, h1_sd,tails))  
kable(PI_table)
```

## Picture Naming: tone accuracy

Run glmer model with factor condition2 with levels LV versus HV+HVB

```{r}
NPN2$condition2 = NPN2$condition
NPN2$condition2[NPN2$condition=="HVB"]= "HV"

NPN2 = lizCenter(NPN2, list("condition2"))

NPN_glmer = glmer(tone_score ~ condition2.ct
                  + (1|subject)
                  , family = "binomial", 
                  control = glmerControl(optimizer = "bobyqa"), 
                  data = NPN2)
kable(round(summary(NPN_glmer)$coeff,3))
```

Model of the data is the estimate and st.error for condition2.ct
Value to inform H1:

Work out a plausible maximum - and use half this as the estimate - using the intercept (grand mean).

if "LV" is the logodds of the proportion correct in LV condition and "HV" is the logodds of the proportion correct in HV condition

Assumptions for plausible maximum: 
- chance production of tones in LV condition -  1/4 (i.e. if they were at chance in producing each of four tones) in logodds space
- above minimal performance in HV, meaning that HV drives all of the difference between the intercept and chance

thus:
condition
= (HV - chance) - (LV - chance)
= HV - logodds(1/4) - 0
= HV - logodds(1/4)

intercept
= (HV+LV)/2
2 *intercept = HV + LV
= HV + logodds(1/4)
HV = 2 * intercept - logodds(1/4)

subbing in for HV:

condition = (2 * intercept - logodds(1/4)) - logodds(1/4)
= 2*(intercept - logodds(1/4))  

since this is an estimate of the maximum value of condition, we halve it to get our estimate to use as the sd of the half normal:  

sdtheory = intercept - logodds(1/4)  

```{r}
contrast = "PN: LV versus HV"
std.Error = round(summary(NPN_glmer)$coeff["condition2.ct", "Std. Error" ],3)
estimate = round(summary(NPN_glmer)$coeff["condition2.ct", "Estimate" ],3)
h1_sd = round(summary(NPN_glmer)$coeff["(Intercept)", "Estimate" ]- logodds(1/4),3)
tails = 1
PN_table = data.frame(cbind(contrast, std.Error, estimate, h1_sd,tails))  
kable(PN_table)
```


## Picture Naming: pinyin accuracy
Run glmer model with factor condition2 with levels LV versus HV+HVB

```{r}
NPN2$condition2 = NPN2$condition
NPN2$condition2[NPN2$condition=="HVB"]= "HV"
#table(NPN2$condition2, NPN2$condition)
NPN2 = lizCenter(NPN2, list("condition2"))

NPN_pinyin_glmer = glmer(pinyin_score ~ condition2.ct
                  + (1|subject)
                  , family = "binomial", 
                  control = glmerControl(optimizer = "bobyqa"), 
                  data = NPN2)
kable(round(summary(NPN_pinyin_glmer)$coeff,3))
```


Model of the data is the estimate and st.error for condition2.ct
Value to inform H1:

Work out a plausible maximum - and use half this as the estimate - using the intercept (grand mean).

if "LV" is the logodds of the proportion correct in LV condition and "HV" is the logodds of the proportion correct in HV condition

Assumptions for plausible maximum: 
- minimal performance in LV - this is actually 0% but we can't compute this in logodds space. We therefore set this to be equivalent to one correct (i.e. 1/72) in logodds space
- above minimal performance in HV, meaning that HV drives all of the difference between the intercept and the minimum

thus:
condition
= (HV - minimum) - (LV - minimum)
= HV - logodds(1/72) - 0
= HV - logodds(1/72)

intercept
= (HV+LV)/2 
2 * intercept = HV+LV
= HV + logodds(1/72)
HV = 2 * intercept - logodds(1/72)

subbing in for HV:

condition = (2 * intercept - logodds(1/72)) - logodds(1/72)
= 2*(intercept - logodds(1/72))  

since this is an estimate of the maximum value of condition, we halve it to get our estimate to use as the sd of the half normal:  

sdtheory = intercept - logodds(1/72)  

```{r}
contrast = "PN, pinyin: LV versus HV"
std.Error = round(summary(NPN_pinyin_glmer)$coeff["condition2.ct", "Std. Error" ],3)
estimate = round(summary(NPN_pinyin_glmer)$coeff["condition2.ct", "Estimate" ],3)
h1_sd = round(summary(NPN_pinyin_glmer)$coeff["(Intercept)", "Estimate" ]- logodds(1/72),3)
tails = 1
PN_pinyin_table = data.frame(cbind(contrast, std.Error, estimate, h1_sd,tails))  
kable(PN_pinyin_table)
```



## Word Repetition: tone accuracy

Run glmer model with factor condition2 with levels LV versus HV+HVB


```{r}
NWR2$condition2 = NWR2$condition
NWR2$condition2[NWR2$condition == "HVB"]= "HV"
NWR2 = lizCenter(NWR2, list("wordtype","condition2"))

nw_glmer = glmer(tone_score ~ wordtype.ct * condition2.ct * session
              + (wordtype.ct * session||subject)
              , family="binomial", 
              control = glmerControl(optimizer = "bobyqa"), 
              data = NWR2)

kable(round(summary(nw_glmer)$coeff,3))



```


Model of the data is the estimate and st.error for condition2.ct:sessionPost-test (condition by session)

Value to inform H1:

Work out a plausible maximum - and use half this as the estimate - using the main effect of session (overall difference between pre and post) .

if 
"LV.POST" is the logodds of the proportion correct in LV condition at posttest
"LV.PRE" is the logodds of the proportion correct in LV condition at pretest
"HV.POST" is the logodds of the proportion correct in HV condition at posttest
"HV.PRE" is the logodds of the proportion correct in HV condition at pretest

Assumptions for plausible maximum: 
- improvement from pre to post in HV 
- NO improvement from pre to post in LV i.e. LV.POST=LV.PRE, meaning all of the effect of session comes from the HV condition 

thus:
condition by session
= (HV.POST - HV.PRE) - (LV.POST- LV.PRE)
= (HV.POST - HV.PRE)

session  
= ((HV.POST - HV.PRE) + (LV.POST- LV.PRE))/2
= (HV.POST - HV.PRE)/2
  
2 * session
= (HV.POST - HV.PRE)  

condition by session =  2 * session  

since this is an estimate of the maximum value of condition, we halve it to get our estimate to use as the sd of the half normal:  

sdtheory = session    
(which is sessionPost-test in the above model)

Tone_accuracy
```{r}
contrast =  "Word Repetition, Tone score: LV versus HV by session"
std.Error = round(summary(nw_glmer)$coeff["condition2.ct:sessionPost-test", "Std. Error" ],3)
estimate = round(summary(nw_glmer)$coeff["condition2.ct:sessionPost-test", "Estimate" ],3)

h1_sd = round(summary(nw_glmer)$coeff["sessionPost-test", "Estimate" ],3)
tails = 1
wordrep_table = data.frame(cbind(contrast, std.Error, estimate, h1_sd,tails))  
kable(wordrep_table)
```

## Word Repetition: pinyin accuracy

Run glmer model with factor condition2 with levels LV versus HV+HVB


```{r}

nw_glmer_pinyin = glmer(pinyin_score ~ wordtype.ct * condition2.ct * session
              + (wordtype.ct * session||subject)
              , family="binomial", 
              control = glmerControl(optimizer = "bobyqa"), 
              data = NWR2)

kable(round(summary(nw_glmer_pinyin)$coeff,3))



```


Model of the data is the estimate and st.error for condition2.ct:sessionPost-test (condition by session)

Value to inform H1:

Work out a plausible maximum - and use half this as the estimate - using the main effect of session (overall difference between pre and post) .

if 
"LV.POST" is the logodds of the proportion correct in LV condition at posttest
"LV.PRE" is the logodds of the proportion correct in LV condition at pretest
"HV.POST" is the logodds of the proportion correct in HV condition at posttest
"HV.PRE" is the logodds of the proportion correct in HV condition at pretest

Assumptions for plausible maximum: 
- improvement from pre to post in HV 
- NO improvement from pre to post in LV i.e. LV.POST=LV.PRE, meaning all of the effect of session comes from the HV condition 

thus:
condition by session
= (HV.POST - HV.PRE) - (LV.POST- LV.PRE)
= (HV.POST - HV.PRE)

session  
= ((HV.POST - HV.PRE) + (LV.POST- LV.PRE))/2
= (HV.POST - HV.PRE)/2
  
2 * session
= (HV.POST - HV.PRE)  

condition by session =  2 * session  

since this is an estimate of the maximum value of condition, we halve it to get our estimate to use as the sd of the half normal:  

sdtheory = session    
(which is sessionPost-test in the above model)


```{r}
contrast =  "Word Repetition, Pinyin score: LV versus HV by session"
std.Error = round(summary(nw_glmer_pinyin)$coeff["condition2.ct:sessionPost-test", "Std. Error" ],3)
estimate = round(summary(nw_glmer_pinyin)$coeff["condition2.ct:sessionPost-test", "Estimate" ],3)

h1_sd = round(summary(nw_glmer_pinyin)$coeff["sessionPost-test", "Estimate" ],3)
tails = 1
wordrep_table_pinyin = data.frame(cbind(contrast, std.Error, estimate, h1_sd,tails))  
kable(wordrep_table_pinyin)
```
## Three-interval Oddity Discrimination Task 
Run glmer model with factor condition2 with levels LV versus HV+HVB

```{r}
dis2$condition2 = dis2$condition
dis2$condition2[dis2$condition == "HVB"]= "HV"
dis2 = lizCenter(dis2, list( "wordtype", "aptitude","condition2"))
dis2 = lizContrasts(dis2, dis2$condition, "LV")
dis2 = lizContrasts(dis2, dis2$trialtype, "Neutral")

d_glmer = glmer(score ~ session * wordtype.ct *  condition2.ct
                + (Neutral_VERSUS_Easy + Neutral_VERSUS_Hard): session
                + (Neutral_VERSUS_Easy + Neutral_VERSUS_Hard) 
                + (session * wordtype.ct||subject)
                , family = "binomial", 
                control = glmerControl(optimizer = "bobyqa"), 
                data = dis2)


kable(round(summary(d_glmer)$coeff,3))

```

Model of the data is the estimate and st.error for sessionPost-test:condition2.ct (condition by session)

Value to inform H1 (note: this is identical to wordrep above):

Work out a plausible maximum - and use half this as the estimate - using the main effect of session (overall difference between pre and post) .

if 
"LV.POST" is the logodds of the proportion correct in LV condition at posttest
"LV.PRE" is the logodds of the proportion correct in LV condition at pretest
"HV.POST" is the logodds of the proportion correct in HV condition at posttest
"HV.PRE" is the logodds of the proportion correct in HV condition at pretest

Assumptions for plausible maximum: 
- improvement from pre to post in HV 
- NO improvement from pre to post in LV i.e. LV.POST=LV.PRE, meaning all of the effect of session comes from the HV condition 

thus:
condition by session
= (HV.POST - HV.PRE) - (LV.POST- LV.PRE)
= (HV.POST - HV.PRE)

session  
= ((HV.POST - HV.PRE) + (LV.POST- LV.PRE))/2
= (HV.POST - HV.PRE)/2
  
2 * session
= (HV.POST - HV.PRE)  

condition by session =  2 * session  

since this is an estimate of the maximum value of condition, we halve it to get our estimate to use as the sd of the half normal:  

sdtheory = session    
(which is sessionPost-test in the above model)


```{r}
contrast = "3 Int. Odd, Tone score: LV by HV by session"
std.Error = round(summary(d_glmer)$coeff["sessionPost-test:condition2.ct", "Std. Error" ],3)
estimate = round(summary(d_glmer)$coeff["sessionPost-test:condition2.ct", "Estimate" ],3)
h1_sd = round(summary(d_glmer)$coeff["sessionPost-test", "Estimate" ],3)
tails = 1
discrim_table = data.frame(cbind(contrast, std.Error, estimate, h1_sd,tails))  
kable(discrim_table)
```



## compute bFs and ranges

```{r}
#df = rbind(wordrep_table,discrim_table, wordrep_ap_table, discrim_ap_table)
df= rbind(PI_table, PN_table, PN_pinyin_table, wordrep_table, wordrep_table_pinyin, discrim_table)

df$std.Error = as.numeric(as.character(df$std.Error))
df$estimate = as.numeric(as.character(df$estimate))
df$h1_sd = as.numeric(as.character(df$h1_sd))
df$tails = as.numeric(as.character(df$tails))
df$h1_motivation = ""

df2 = Bf_table(df)

#kable(df2)

#kable(addBf_ranges2(df2))

sdtheoryrange = seq( from = 0, to = 10 , length.out = 100)

addBf_ranges(df2,sdtheoryrange ) %>%
  kable() %>%
  kable_styling()

```

We have substantial evidence for the null in each test apart from Word Rep pinyin. Note that the robustness regions indicate that we would also have obtained substantial BFs using smaller values than those we used. Note that for evidence for H0, the maximum scale factor is always infinity.


# Evidence for/against hypothesis that participants with higher aptitude benefit more from HV training 

## Notes:

Points:
1) Unlike above, we also look at training data here. This is because in Sadakata & McQueen the interaction was found with trained items, so it isn't just generalization trials that are relevant.
2) To have maximum data, for all of the tasks except training, we do what we did above and look for evidence for greater learning in both HV/HVB than in LV, i.e. we code a binary contrast LV versus HV+HVB (centered). For training, we saw in the previous analysis that HV and HVB are quite different, so here we look at LV v HV and LV v HVB separately (also note we have a lot more data points per participant for training). 
3) For the post-tests (PI/PN) we are interested in the evidence for an interaction between the condition contrast and aptitude. For the pre-post tests, we are interested in the interaction between the contrast, aptitude and session. For training we look at the evidence for an interaction between each condition contrast and aptitude. (NOTE: interaction with session would also be relevant but the model won't converge with that in - even if we remove all of the slopes.)
4) For PI test, to have maximum power, we look at combination of trained and untrained voices (note: this is unlike what we did above, where we only looked at untrained voices. However, for the interaction with aptitude, one previous study found evidence with trained voices, and one with untrained, so we look across both).
5) In 3IO and Word Repetition test: we look at trained and untrained items combined. 
6) In the production measures - WR and PN - we are only interested in tone since our aptitude measure is relevant for tone. 

7) As above, following Dienes 2008, we model H1 using a half normal with a mean of 0 and an sd set to be an estimate of H1. This biases towards smaller values. We take our estimates of H1 using estimates from elsewhere within the data which inform the likely size of the effect. The method for informing each one is explained below. We take BF < 1/3 to be substantial evidence for the null and BF > 3 to be substantial evidence for H1; values in between are ambiguous. In addition to computing the BF for a particular value of H1 for each contrast, we also compute a "robustness region" for each Bayes factor which shows the range of values of H1 over which we have ambiguous / substantial evidence for H1 / substantial evidence for H0.

## Picture Identification
Run glmer model with factor condition2 with levels LV versus HV+HVB


```{r}
PI2$condition2 = PI2$condition
PI2$condition2[PI2$condition=="HVB"]= "HV"
PI2 = lizCenter(PI2, list("voicetype", "aptitude", "condition2"))

p_ap_glmer = glmer(score ~  condition2.ct *aptitude.ct
                + (1|subject)
                , family = "binomial", 
                control = glmerControl(optimizer = "bobyqa"), 
                data = PI2)
kable(round(summary(p_ap_glmer)$coeff,3))
```

Model of the data is the estimate and st.error for condition2.ct:aptitude.ct  (condition by aptitude)

Value to inform H1:

Work out a plausible maximum - and use half this as the estimate - using the main effect of aptitude  (= aptitude.ct - overall difference in performance on test for one step in aptitude (which is set to be 10% points on the pcpt test)).

if 
"LV.aptitude" is the effect of aptitude in the LV condition 
"HV.aptitude" is the effect of aptitude in the HV condition 

Assumptions for plausible maximum: 
- positive value of aptitude in HV 
- NO effect of aptitude in LV (=0) [note: no reason to think we would get a NEGATIVE effect of aptitude in either condition, based on previous data]

thus:
condition by aptitude
= HV.aptitude-LV.aptitude
= HV.aptitude-0

aptitude
= (HV.aptitude+LV.aptitude)/2
= (HV.aptitude)/2

2 * aptitude
= HV.aptitude-0

condition by aptitude =  2 * aptitude  

since this is an estimate of the maximum value of condition, we halve it to get our estimate to use as the sd of the half normal:  

sdtheory = aptitude  
(which is aptitude.ct in the above model)

```{r}
contrast = "PI_ap: LV versus HV by aptitude"
std.Error = round(summary(p_ap_glmer)$coeff["condition2.ct:aptitude.ct", "Std. Error" ],3)
estimate = round(summary(p_ap_glmer)$coeff["condition2.ct:aptitude.ct", "Estimate" ],3)
h1_sd = round(summary(p_ap_glmer)$coeff["aptitude.ct", "Estimate" ],3)
tails = 1
PI_ap_table = data.frame(cbind(contrast, std.Error, estimate, h1_sd,tails))  
kable(PI_ap_table)
```

## Picture Naming

Run glmer model with factor condition2 with levels LV versus HV+HVB

```{r}
NPN2$condition2 = NPN2$condition
NPN2$condition2[NPN2$condition=="HVB"]= "HV"
NPN2 = lizCenter(NPN2, list("condition2","aptitude"))

NPN_ap_glmer = glmer(tone_score ~ condition2.ct*aptitude.ct
                  + (1|subject)
                  , family = "binomial", 
                  control = glmerControl(optimizer = "bobyqa"), 
                  data = NPN2)
kable(round(summary(NPN_ap_glmer)$coeff,3))
```


Model of the data is the estimate and st.error for condition2.ct:aptitude.ct  (condition by aptitude)

Value to inform H1:

Work out a plausible maximum - and use half this as the estimate - using the main effect of aptitude  (= aptitude.ct - overall difference in performance on test for one step in aptitude (which is set to be 10% points on the pcpt test)).

if 
"LV.aptitude" is the effect of aptitude in the LV condition 
"HV.aptitude" is the effect of aptitude in the HV condition 

Assumptions for plausible maximum: 
- positive value of aptitude in HV 
- NO effect of aptitude in LV (=0) [note: no reason to think we would get a NEGATIVE effect of aptitude in either condition, based on previous data]

thus:
condition by aptitude
= HV.aptitude-LV.aptitude
= HV.aptitude-0

aptitude
= (HV.aptitude+LV.aptitude)/2
= (HV.aptitude)/2

2 * aptitude
= HV.aptitude-0

condition by aptitude =  2 * aptitude  

since this is an estimate of the maximum value of condition, we halve it to get our estimate to use as the sd of the half normal:  

sdtheory = aptitude  
(which is aptitude.ct in the above model)


```{r}
contrast = "PN_ap: LV versus HV by aptitude"
std.Error = round(summary(NPN_ap_glmer)$coeff["condition2.ct:aptitude.ct", "Std. Error" ],3)
estimate = round(summary(NPN_ap_glmer)$coeff["condition2.ct:aptitude.ct", "Estimate" ],3)
h1_sd = round(summary(NPN_ap_glmer)$coeff["aptitude.ct", "Estimate" ],3)
tails = 1
PN_ap_table = data.frame(cbind(contrast, std.Error, estimate, h1_sd,tails))  
kable(PN_ap_table)
```




## Word Repetition 

Run glmer model with factor condition2 with levels LV versus HV+HVB.
Also run a second model which is used just to get the overall value of aptitude at pre-test

```{r}

NWR2$condition2 = NWR2$condition
NWR2$condition2[NWR2$condition == "HVB"]= "HV"

NWR2 = lizCenter(NWR2, list("session", "aptitude",  "wordtype","condition2"))
NWR2 = lizContrasts(NWR2, NWR2$condition, "LV")

nw_glmer_pcpt = glmer(tone_score ~ wordtype.ct * condition2.ct * session.ct 
              + aptitude.ct
              + aptitude.ct : session.ct
              + aptitude.ct : condition2.ct: session.ct 
              + aptitude.ct : condition2.ct: session.ct : wordtype.ct
              + (wordtype.ct * session.ct||subject)
              , family="binomial" , 
              control = glmerControl(optimizer = "bobyqa"), 
              data = NWR2)
kable(round(summary(nw_glmer_pcpt)$coeff,3))
```

```{r}
NWR3pre = subset(NWR2, session== "Pre-test")
NWR3pre = lizCenter(NWR3pre, list("session", "aptitude",  "wordtype","condition"))
NWR3pre = lizContrasts(NWR3pre, NWR3pre$condition, "LV")

nw_glmer_pcpt_PRE = glmer(tone_score ~ wordtype.ct * condition2.ct
              + aptitude.ct
                          + aptitude.ct :condition2.ct
              + aptitude.ct : condition2.ct : wordtype.ct
              + (wordtype.ct ||subject)
              , family="binomial" , 
              control = glmerControl(optimizer = "bobyqa"), 
              data = NWR3pre)
kable(round(summary(nw_glmer_pcpt_PRE)$coeff,3))

```


Model of the data is the estimate and st.error for session.ct:condition2.ct:aptitude.ct (session by aptitude by condition)

For our estimate of H1, we need to work out a plausible maximum value of aptitude that we expect in any cell - we get this from the scale. 

We do this as follows:
Maximum possible value of the effect of aptitude (max_aptitude), working from the scale: assume a participant with the minimal value of aptitude is at chance in producing one of the four tones (1/4, in log odds) and a participant with the maximum value of aptitude is at ceiling in producing the correct tone - i.e. 72/72 (since there are 72 test items), but since we compute this in log odds we use 71/72. Then, to work out the change in tone score for a 1-step change in aptitude, we divide this difference by the length of the aptitude predictor (the maximum value of aptitude for any participant minus the minimal value of aptitude for any participant).


```{r}
length = max(NWR2$aptitude.ct) - min(NWR2$aptitude.ct)
length
max_aptitude_wr = (logodds(71/72)- logodds(1/4))/length
max_aptitude_wr
```



We use this maximum value and the ACTUAL value of aptitude at pre-test to work out a maximum value for the interaction of interest, as follows:  

"LV.PRE.aptitude" is the effect of aptitude at pre-test in LV
"LV.POST.aptitude" is the effect of aptitude at post-test in LV
"HV,PRE.aptitude" is the effect of aptitude at pre-test in HV
"HV.POST.aptitude" is the effect of aptitude at post-test in HV

PRE.aptitude is the overall effect of aptitude at pre-test (averaged across conditions, estimated from the pre-test-only model above)

max_aptitude is maximum value of aptitude (as computed above)

Assumptions for plausible maximum: 
- both conditions have the same value of aptitude at pre-test (estimated as PRE.aptitude)
- in LV condition, effect of aptitude is the same at pre and post test
- in HV condition, effect of aptitude is increased to the maximum amount at post test (=max_aptitude )

thus:
session by aptitude by condition
= (HV.POST.aptitude- HV.PRE.aptitude)-(LV.POST.aptitude- LV.PRE.aptitude)
= (max_aptitude- PRE.aptitude)-(PRE.aptitude- PRE.aptitude)
= max_aptitude- PRE.aptitude

since this is an estimate of the maximum value of condition, we halve it to get our estimate to use as the sd of the half normal:  
sdtheory = (max_aptitude - PRE.aptitude)/2


```{r}
contrast =   "Word Repetition, Tone score: aptitude by session by LV versus HV"
std.Error = round(summary(nw_glmer_pcpt)$coeff["condition2.ct:session.ct:aptitude.ct", "Std. Error"],3)

estimate = round(summary(nw_glmer_pcpt)$coeff["condition2.ct:session.ct:aptitude.ct","Estimate" ],3)

h1_sd =  round((max_aptitude_wr - summary(nw_glmer_pcpt_PRE)$coeff["aptitude.ct", "Estimate" ])/2,3)
              
tails = 1
WR_ap_table = data.frame(cbind(contrast, std.Error, estimate, h1_sd, tails))  
kable(WR_ap_table)
```












## 3 Interval oddity
Run glmer model with factor condition2 with levels LV versus HV+HVB
```{r}
dis2$condition2= dis2$condition
dis2$condition2[dis2$condition=="HVB"]= "HV"
dis2 = lizCenter(dis2, list("session", "condition2"))

d_glmer_pcpt = glmer(score ~ session.ct * wordtype.ct * condition2.ct
                     + (Neutral_VERSUS_Easy + Neutral_VERSUS_Hard): session.ct 
                     + (Neutral_VERSUS_Easy + Neutral_VERSUS_Hard) 
                     + aptitude.ct
                     + aptitude.ct: session.ct
                     + aptitude.ct :condition2.ct : session.ct
                     + aptitude.ct : condition2.ct : session.ct : wordtype.ct
                     + (session.ct * wordtype.ct||subject)
                     , family = "binomial", 
                     control = glmerControl(optimizer = "bobyqa"), 
                     data = dis2)
kable(round(summary(d_glmer_pcpt)$coeff,3))
```

```{r}
dis3pre = subset(dis2, session == "Pre-test")
dis3pre = lizCenter(dis3pre, list("session", "condition2"))

d_glmer_pcpt_PRE = glmer(score ~wordtype.ct * condition2.ct
                     + (Neutral_VERSUS_Easy + Neutral_VERSUS_Hard) 
                     + aptitude.ct
                     + aptitude.ct : condition2.ct
                     + aptitude.ct : condition2.ct : wordtype.ct
                     + (wordtype.ct||subject)
                     , family = "binomial", 
                     control = glmerControl(optimizer = "bobyqa"), 
                     data = dis3pre)


kable(round(summary(d_glmer_pcpt_PRE)$coeff,3))


```

Model of the data is the estimate and st.error for session.ct:condition2.ct:aptitude.ct (session by aptitude by condition)

For our estimate of H1, we need to work out a plausible maximum value of aptitude that we expect in any cell - we get this from the scale. 

We do this as follows:
Maximum possible value of the effect of aptitude (max_aptitude), working from the scale: assume a participant with the minimal value of aptitude is at chance in picking the odd one out (1/3, in log odds) and a participant with the maximum value of aptitude is at ceiling in picking the odd one out - i.e. 72/72 (since there are 72 test items), but since we compute this in log odds we use 71/72. Then, to work out the change in score for a 1-step change in aptitude, we divide this difference by the length of the aptitude predictor (the maximum value of aptitude for any participant minus the minimal value of aptitude for any participant).



```{r}
length = max(dis2$aptitude.ct) - min(dis2$aptitude.ct)
length
max_aptitude_d = (logodds(71/72)- logodds(1/3))/length
max_aptitude_d
```


We use this maximum value and the ACTUAL value of aptitude at pre-test to work out a maximum value for the interaction of interest, as follows:  

"LV.PRE.aptitude" is the effect of aptitude at pre-test in LV
"LV.POST.aptitude" is the effect of aptitude at post-test in LV
"HV,PRE.aptitude" is the effect of aptitude at pre-test in HV
"HV.POST.aptitude" is the effect of aptitude at post-test in HV

PRE.aptitude is the overall effect of aptitude at pre-test (averaged across conditions, estimated from the pre-test-only model above)

max_aptitude is maximum value of aptitude (as computed above)

Assumptions for plausible maximum: 
- both conditions have the same value of aptitude at pre-test (estimated as PRE.aptitude)
- in LV condition, effect of aptitude is the same at pre and post test
- in HV condition, effect of aptitude is increased to the maximum amount at post test (=max_aptitude )

thus:
session by aptitude by condition
= (HV.POST.aptitude- HV.PRE.aptitude)-(LV.POST.aptitude- LV.PRE.aptitude)
= (max_aptitude- PRE.aptitude)-(PRE.aptitude- PRE.aptitude)
= max_aptitude- PRE.aptitude

since this is an estimate of the maximum value of condition, we halve it to get our estimate to use as the sd of the half normal:  
sdtheory = (max_aptitude - PRE.aptitude)/2



```{r}
contrast =       "3 Int. Odd., Tone score: aptitude by session by LV versus HV"

std.Error = round(summary(d_glmer_pcpt)$coeff["session.ct:condition2.ct:aptitude.ct", "Std. Error" ],3)

estimate =  round(summary(d_glmer_pcpt)$coeff["session.ct:condition2.ct:aptitude.ct","Estimate" ],3)
h1_sd =  round((max_aptitude_d - summary(d_glmer_pcpt_PRE)$coeff["aptitude.ct", "Estimate" ])/2,3)
tails = 1
discrim_ap_table = data.frame(cbind(contrast, std.Error, estimate, h1_sd, tails))  
kable(discrim_ap_table)
```

## training data

```{r}

train2 = lizContrasts(train2, train2$condition, "LV")
train2 = lizCenter(train2, list("session","aptitude"))
train2.noHVB = lizCenter(subset(train2, condition != "HVB"), list("session", "condition", "aptitude"))
train2.noHV = lizCenter(subset(train2, condition != "HV"), list("session", "condition", "aptitude"))

t_glmer_pcpt = glmer(score ~ (LV_VERSUS_HV + LV_VERSUS_HVB) * aptitude.ct 
                + (1|subject)
              , family = "binomial", 
              control = glmerControl(optimizer = "bobyqa"), 
              data = train2)

kable(round(summary(t_glmer_pcpt )$coeff,3))

```


Model of the data is the estimate and st.error for LV_VERSUS_HV:aptitude.ct and LV_VERSUS_HVB:aptitude.ct (condition by aptitude, for each contrast)

Value to inform H1 (this is the same as for PI above):

Work out a plausible maximum - and use half this as the estimate - using the main effect of aptitude  (= aptitude.ct - overall difference in performance on test for one step in aptitude (which is set to be 10% points on the pcpt test)).

if 
"LV.aptitude" is the effect of aptitude in the LV condition 
"HV.aptitude" is the effect of aptitude in the HV condition 

Assumptions for plausible maximum: 
- positive value of aptitude in HV 
- NO effect of aptitude in LV (=0) [note: no reason to think we would get a NEGATIVE effect of aptitude in either condition, based on previous data]

thus:
condition by aptitude
= HV.aptitude-LV.aptitude
= HV.aptitude-0

aptitude
= (HV.aptitude+LV.aptitude)/2
= (HV.aptitude)/2

2 * aptitude
= HV.aptitude

condition by aptitude =  2 * aptitude  

since this is an estimate of the maximum value of condition, we halve it to get our estimate to use as the sd of the half normal:  

sdtheory = aptitude  
(which is aptitude.ct in the above model)


```{r}
contrast =   c("Training: aptitude by LV versus HV-unblocked",
               "Training: aptitude by LV versus HV-blocked")

std.Error = c(round(summary(t_glmer_pcpt)$coeff["LV_VERSUS_HV:aptitude.ct", "Std. Error"],3),
       round(summary(t_glmer_pcpt)$coeff["LV_VERSUS_HVB:aptitude.ct", "Std. Error"],3))      
              
estimate = c(round(summary(t_glmer_pcpt)$coeff["LV_VERSUS_HV:aptitude.ct", "Estimate"],3),
       round(summary(t_glmer_pcpt)$coeff["LV_VERSUS_HVB:aptitude.ct", "Estimate"],3))  

h1_sd =  round(rep(summary(t_glmer_pcpt)$coeff["aptitude.ct","Estimate" ],2),3)
              
tails = c(1,1)
train_ap_table = data.frame(cbind(contrast, std.Error, estimate, h1_sd, tails)) 
kable(train_ap_table)
```


## compute bFs and ranges


```{r}
df= rbind(PI_ap_table, PN_ap_table, discrim_ap_table, WR_ap_table, train_ap_table)
df$std.Error = as.numeric(as.character(df$std.Error))
df$estimate = as.numeric(as.character(df$estimate))
df$h1_sd = as.numeric(as.character(df$h1_sd))
df$tails = as.numeric(as.character(df$tails))
df$h1_motivation = ""

df2 = Bf_table(df)

#kable(df2)

#kable(addBf_ranges2(df2))

# note our range of values for Bf 
sdtheoryrange = seq( from = 0, to = 5 , length.out = 100)

addBf_ranges(df2,sdtheoryrange) %>%
  kable() %>%
  kable_styling()

```


## estimate sample increase


Above we have seen that we do not have substantial evidence for the null (or H1) with our current sample. We use a function to see how many participants we might need to see substantial evidence for the null, assuming that the standard error scales with 1/sqrt(N).

```{r}

# note rows 5-6 of df are the training data, where the contrasts are not based on the full sample, so the N is smaller (40 rather than 60)
addBf_powercalc(df[1:4,], 60, 500)%>%
  kable() %>%
  kable_styling()

addBf_powercalc(df[5:6,], 40, 500)%>%
  kable() %>%
  kable_styling()


```