Research question(s). State your research question(s) clearly.
Is there an inherent racial bias in image-tagging emotion-analysis services in how they read emotions, compared to a real person reading emotion?
Does this corroborate previous research on the bias toward Black individuals being erroneously perceived as angry or hostile through their expressions? If so, what implications does this have for visual cognitive services?
Data collection and cleaning
Have an initial draft of your data cleaning appendix. Document every step that takes your raw data file(s) and turns it into the analysis-ready data set that you would submit with your final project. Include text narrative describing your data collection (downloading, scraping, surveys, etc) and any additional data curation/cleaning (merging data frames, filtering, transformations of variables, etc). Include code for data curation/cleaning, but not collection.
Cleaning code is complete and shown in appenicies.qmd.
Downloaded the data and read it in from Excel sheet
Dropped unnecessary variables that do not aid us for the question
Renamed the rest of the columns and added them to clean_person_data
Mutated the data frame and cleaned the race and gender values
Parsed the URLs and coded two new variables to match the EAS data
Dropped url variable
Wrote both data frames to csv for new clean data tables
For the computer data:
Changed the code names to emotions to match
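The cleaning steps above can be sketched in R roughly as follows. This is a minimal sketch, not the exact code in the appendix: the file paths, the dropped and renamed columns, the URL-parsing patterns, and the emotion code mapping are all illustrative assumptions.

```r
library(readxl)
library(dplyr)
library(stringr)
library(readr)

# Read the raw survey responses in from the Excel sheet (path assumed)
raw_person_data <- read_excel("data/person_data.xlsx")

clean_person_data <- raw_person_data %>%
  # Drop variables that do not aid the research question (names assumed)
  select(-c(timestamp, consent)) %>%
  # Rename the remaining columns (mapping is illustrative)
  rename(
    evaluator_city   = city,
    evaluator_race   = race,
    evaluator_gender = gender
  ) %>%
  # Clean the race and gender values, e.g. trimming and standardizing case
  mutate(
    evaluator_race   = str_to_title(str_trim(evaluator_race)),
    evaluator_gender = str_to_lower(str_trim(evaluator_gender))
  ) %>%
  # Parse the image URL into two new variables that match the EAS data
  # (the regular expressions here are placeholders), then drop the URL
  mutate(
    image_id    = str_extract(url, "\\d+"),
    image_label = str_extract(url, "[a-z]+(?=\\.jpg)")
  ) %>%
  select(-url)

# For the computer data: change the code names to emotion labels to match
eas_data <- eas_data %>%
  mutate(emotion = recode(emotion_code,
                          "1" = "anger", "2" = "happiness"))  # mapping assumed

# Write both data frames to CSV as the new clean data tables
write_csv(clean_person_data, "data/person_data_clean.csv")
write_csv(eas_data, "data/eas_data_clean.csv")
```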
Data description
In the first data set we are using, person_data_clean, each row represents a person who is tasked with describing the facial emotion in a picture of someone. The columns of this data set include information about the evaluator, including the city they are from, their race, and their gender. Other columns include the first and second choice of emotion they would use to describe the person in the photo, as well as the emotion they would not use. The second data set we are using, eas_data, includes a column with the correct emotion for each picture, as well as the proportion of evaluators who classified the expression in the picture as each of 7 different emotions.
These data sets were created to determine whether popular vision-based cognitive software that infers emotion from a person's face perpetuates racial and gender stereotypes concerning emotion. This is especially relevant in modern society, where this type of imaging technology is used in a wide range of applications, from social networks and smartphone applications to real-time security. The creation of this data set was funded by Harvard University.
Data limitations
There are a few potential problems with the data. The first is that we are looking at two different data sets and want to base our conclusion on their comparison; however, the two are structured very differently, which makes them difficult to compare. Second, there are many columns, making it difficult to decide which are important when creating visualizations and analyzing the data. The data also includes a link to the image shown in the study rather than a description of it; in fact, the person_data data set only includes the race of the participant, not of the person in the image, which may make it difficult to draw conclusions about racial bias.
Exploratory data analysis
Perform an (initial) exploratory data analysis.
```r
library(readxl)
library(dplyr)
```
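As a starting point for the exploratory analysis, one could compare how often each first-choice emotion is selected across evaluator race. This is a sketch only; the column names `evaluator_race` and `first_choice_emotion`, and the cleaned file path, are assumptions consistent with the cleaning steps described above, not the project's actual variable names.

```r
library(readr)
library(dplyr)
library(ggplot2)

# Read the analysis-ready data written during cleaning (path assumed)
person_data_clean <- read_csv("data/person_data_clean.csv")

# Proportion of first-choice emotions within each evaluator race group
person_data_clean %>%
  count(evaluator_race, first_choice_emotion) %>%
  group_by(evaluator_race) %>%
  mutate(prop = n / sum(n)) %>%
  ggplot(aes(x = first_choice_emotion, y = prop, fill = evaluator_race)) +
  geom_col(position = "dodge") +
  labs(
    x = "First-choice emotion",
    y = "Proportion of evaluations",
    fill = "Evaluator race"
  )
```

A similar grouped bar chart against the EAS classifications would then show where the service's readings diverge from human evaluators.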