Title: Multidimensional Top Scoring for Creativity Research
Description: Implementation of the Multidimensional Top Scoring method for creativity assessment proposed in Boris Forthmann, Maciej Karwowski, Roger E. Beaty (2023) <doi:10.1037/aca0000571>.
Authors: Jakub Jędrusiak [aut, cre, cph] (<https://orcid.org/0000-0002-6481-8210>, University of Wrocław), Boris Forthmann [aut, rev] (<https://orcid.org/0000-0001-9755-7304>, University of Münster), Roger E. Beaty [aut] (<https://orcid.org/0000-0001-6114-5973>, Pennsylvania State University), Maciej Karwowski [aut] (<https://orcid.org/0000-0001-6974-1673>, University of Wrocław)
Maintainer: Jakub Jędrusiak <[email protected]>
License: MIT + file LICENSE
Version: 1.0.2
Built: 2024-10-31 06:07:04 UTC
Source: https://github.com/jakub-jedrusiak/mtscr
A Shiny app used as a graphical interface for mtscr. Simply call mtscr_app() to run it.
mtscr_app()
To use the GUI you need to have the following packages installed: DT, broom.mixed, datamods, and writexl.
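If any of these are missing, they can be installed beforehand, for example:

# install the packages required by the GUI (names as listed above)
install.packages(c("DT", "broom.mixed", "datamods", "writexl"))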
The first thing you see after running the app is a datamods window for importing your data. You can use data already loaded in your environment or any of the other import options.
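For example, you might load a file into the global environment first and then pick it in the import window (a sketch only; the file name is hypothetical):

# load your data into the global environment, then launch the app
library(mtscr)
my_data <- read.csv("my_data.csv") # hypothetical file
mtscr_app()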
Then you'll see four dropdown lists used to choose arguments for the mtscr_model() and mtscr_score() functions. Consult these functions' documentation for more details (execute ?mtscr_score in the console). When the parameters are chosen, click the "Generate model" button. After a while (up to a dozen or so seconds), the models' parameters are shown along with a scored dataframe.
You can download your data as a .csv or an .xlsx file using the buttons in the sidebar. You can either download the scores only (i.e. the dataframe you see displayed) or your whole data with the .all_max and .all_top2 columns added.
For testing purposes, you may use the mtscr_creativity dataframe. In the importing window, change "Global Environment" to "mtscr" and our dataframe should appear in the upper dropdown list. Use id for the ID column, item for the item column, and SemDis_MEAN for the score column.
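The same scoring can also be done from the console without the GUI, for example:

# score the example dataset directly with mtscr_score()
library(mtscr)
data("mtscr_creativity", package = "mtscr")
mtscr_score(mtscr_creativity, id, item, SemDis_MEAN)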
Runs the app; no explicit return value. See mtscr_score() for more information on the arguments and mtscr_creativity for more information about the example dataset.
Forthmann, B., Karwowski, M., & Beaty, R. E. (2023). Don’t throw the “bad” ideas away! Multidimensional top scoring increases reliability of divergent thinking tasks. Psychology of Aesthetics, Creativity, and the Arts. doi:10.1037/aca0000571
if(interactive()){ mtscr_app() }
A dataset from the Forthmann, Karwowski, & Beaty (2023) paper. It contains a set of responses from the Alternative Uses Task (AUT) for different items, along with their semantic distance assessment.
mtscr_creativity
A tibble with 4585 rows and 10 columns, including:

participant's unique identification number (the id column)
the response given in the AUT
the item for which alternative uses were searched for (the item column)
mean semantic distance (the SemDis_MEAN column)
a tibble
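For a quick look at the data (a small sketch using dplyr):

# preview the example dataset
data("mtscr_creativity", package = "mtscr")
dplyr::glimpse(mtscr_creativity)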
Create an MTS model for creativity analysis.
mtscr_model(
  df,
  id_column,
  item_column = NULL,
  score_column,
  top = 1,
  prepared = FALSE,
  ties_method = c("random", "average"),
  normalise = TRUE,
  self_ranking = NULL
)
df: Data frame in long format.

id_column: Name of the column containing participants' id.

item_column: Optional, name of the column containing distinct trials (e.g. names of items in AUT).

score_column: Name of the column containing divergent thinking scores (e.g. semantic distance).

top: Integer or vector of integers (see examples), number of top answers to include in the model. Default is 1, i.e. only the top answer.

prepared: Logical, is the data already prepared with mtscr_prepare()? Default is FALSE.

ties_method: Character string specifying how ties are treated when ordering. Can be "random" (the default) or "average".

normalise: Logical, should the creativity score be normalised? Default is TRUE.

self_ranking: Name of the column containing answers' self-ranking. Provide if the model should be based on top answers self-chosen by the participant. Every item should have its own ranks. The top answers should have a value of 1, and the other answers should have a value of 0.
The return value depends on the length of the top argument. If top is a single integer, a glmmTMB model is returned. If top is a vector of integers, a list of glmmTMB models is returned, with names corresponding to the top values, e.g. top1, top2, etc.
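For instance, when several models are requested, a single model can be pulled out of the returned list by name and its participant-level effects extracted much like in the examples below (a short sketch; the "topN" names follow the convention described above):

# fit models for the top 1 and top 2 answers, then pick one by name
data("mtscr_creativity", package = "mtscr")
models <- mtscr_model(mtscr_creativity, id, item, SemDis_MEAN, top = 1:2)
top2_model <- models[["top2"]]
creativity_top2 <- glmmTMB::ranef(top2_model)$cond$id[, 1]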
data("mtscr_creativity", package = "mtscr")
mtscr_creativity <- mtscr_creativity |>
  dplyr::slice_sample(n = 300) # for performance, ignore

mtscr_model(mtscr_creativity, id, item, SemDis_MEAN) |>
  summary()

# three models for top 1, 2, and 3 answers
mtscr_model(mtscr_creativity, id, item, SemDis_MEAN, top = 1:3) |>
  mtscr_model_summary()

# you can prepare data first
data <- mtscr_prepare(mtscr_creativity, id, item, SemDis_MEAN)
mtscr_model(data, id, item, SemDis_MEAN, prepared = TRUE)

# extract effects for creativity score by hand
model <- mtscr_model(mtscr_creativity, id, item, SemDis_MEAN, top = 1)
creativity_score <- glmmTMB::ranef(model)$cond$id[, 1]
Summarise a model generated with mtscr_model() with some basic statistics; calculate the empirical reliability and the first difference of the empirical reliability.
mtscr_model_summary(model)
model: A model generated with mtscr_model().
A data frame with the following columns:
The model number
Number of observations
The square root of the estimated residual variance
The log-likelihood of the model
The Akaike information criterion
The Bayesian information criterion
The residual degrees of freedom
The empirical reliability
The first difference of the empirical reliability
data("mtscr_creativity", package = "mtscr")
mtscr_model(mtscr_creativity, id, item, SemDis_MEAN, top = 1:3) |>
  mtscr_model_summary()
Prepare data for MTS analysis.
mtscr_prepare(
  df,
  id_column,
  item_column = NULL,
  score_column,
  top = 1,
  minimal = FALSE,
  ties_method = c("random", "average"),
  normalise = TRUE,
  self_ranking = NULL
)
df: Data frame in long format.

id_column: Name of the column containing participants' id.

item_column: Optional, name of the column containing distinct trials (e.g. names of items in AUT).

score_column: Name of the column containing divergent thinking scores (e.g. semantic distance).

top: Integer or vector of integers (see examples), number of top answers to prepare indicators for. Default is 1, i.e. only the top answer.

minimal: Logical, should the new columns be appended to df (FALSE, the default) or should only the new columns be returned along with the id and item columns (TRUE)?

ties_method: Character string specifying how ties are treated when ordering. Can be "random" (the default) or "average".

normalise: Logical, should the creativity score be normalised? Default is TRUE.

self_ranking: Name of the column containing answers' self-ranking. Provide if the model should be based on top answers self-chosen by the participant. Every item should have its own ranks. The top answers should have a value of 1, and the other answers should have a value of 0.
The input data frame with additional columns:

.z_score: Numerical, z-score of the creativity score
.ordering: Numerical, ranking of the answer relative to participant and item
.ordering_topX: Numerical, 0 for the X top answers, otherwise the value of .ordering

The number of .ordering_topX columns depends on the top argument. If minimal = TRUE, only the new columns and the item and id columns are returned. The values are relative to the participant AND item, so the values for different participants scored for different tasks (e.g. uses for "brick" and "can") are distinct.
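As a quick sketch of what the prepared data contains (the new column names follow the .z_score / .ordering / .ordering_topX pattern documented above):

# prepare indicators for the top 1 and top 2 answers and inspect the new columns
data("mtscr_creativity", package = "mtscr")
prepared <- mtscr_prepare(mtscr_creativity, id, item, SemDis_MEAN, top = 1:2)
dplyr::select(prepared, id, item, .z_score, .ordering, .ordering_top1, .ordering_top2)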
data("mtscr_creativity", package = "mtscr")

# indicators for top 1 and top 2 answers
mtscr_prepare(mtscr_creativity, id, item, SemDis_MEAN, top = 1:2, minimal = TRUE)
Score creativity with MTS
mtscr_score(
  df,
  id_column,
  item_column = NULL,
  score_column,
  top = 1,
  format = c("minimal", "full"),
  ties_method = c("random", "average"),
  normalise = TRUE,
  self_ranking = NULL
)
df: Data frame in long format.

id_column: Name of the column containing participants' id.

item_column: Optional, name of the column containing distinct trials (e.g. names of items in AUT).

score_column: Name of the column containing divergent thinking scores (e.g. semantic distance).

top: Integer or vector of integers (see examples), number of top answers to prepare indicators for. Default is 1, i.e. only the top answer.

format: Character, controls the format of the output data frame. Accepts "minimal" (the default; only the scores and id columns are returned) or "full" (the original data frame is returned with the scores columns added).

ties_method: Character string specifying how ties are treated when ordering. Can be "random" (the default) or "average".

normalise: Logical, should the creativity score be normalised? Default is TRUE.

self_ranking: Name of the column containing answers' self-ranking. Provide if the model should be based on top answers self-chosen by the participant. Every item should have its own ranks. The top answers should have a value of 1, and the other answers should have a value of 0.
A tibble with creativity scores. If format = "full", the original data frame is returned with the scores columns added. Otherwise, only the scores and id columns are returned. The number of creativity score columns (e.g. creativity_score_top2) depends on the top argument.

See tidyr::pivot_wider() for converting the output to wide format yourself.
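If the returned scores are organised per item, tidyr::pivot_wider() can spread them into one row per participant, for example (a sketch only; it assumes an item column and creativity_score_* columns in the output, so adjust names_from and values_from to your data):

# spread per-item scores into wide format (assumes an item column in the output)
data("mtscr_creativity", package = "mtscr")
scores <- mtscr_score(mtscr_creativity, id, item, SemDis_MEAN, top = 1:2)
tidyr::pivot_wider(
  scores,
  names_from = item,
  values_from = dplyr::starts_with("creativity_score")
)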
data("mtscr_creativity", package = "mtscr")
mtscr_score(mtscr_creativity, id, item, SemDis_MEAN, top = 1:2)

# add scores to the original data frame
mtscr_score(mtscr_creativity, id, item, SemDis_MEAN, format = "full")

# use self-chosen best answers
data("mtscr_self_rank", package = "mtscr")
mtscr_score(mtscr_self_rank, subject, task, avr, self_ranking = top_two)
An example dataset with best answers self-chosen by the participant. Use with the self_ranking argument in mtscr_model().
mtscr_self_rank
A tibble with 3225 rows and 4 columns:
participant's unique identification number (the subject column)
divergent thinking task number (the task column)
average judges' rating (the avr column)
indicator of the self-chosen two best answers; 1 if chosen, 0 if not (the top_two column)
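For a quick look at the data (a small sketch; the column names are those used in the mtscr_score() example above):

# preview the self-ranking example dataset
data("mtscr_self_rank", package = "mtscr")
dplyr::glimpse(mtscr_self_rank)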