Lab 01 - Hello R!

Learning goals

Terminology

We’ve already thrown around a few new terms, so let’s define them before we proceed.

You can think of it this way: R is the engine of the car, and RStudio is the dashboard.

Git is like the “Track Changes” feature from Microsoft Word on steroids, and GitHub is the home for your Git-based projects on the internet (like Dropbox but much, much better).

Starting slow

As the labs progress, you are encouraged to explore beyond what the labs dictate; a willingness to experiment will make you a much better programmer. Before we get to that stage, however, you need to build some basic fluency in R. Today we begin with the fundamental building blocks of R and RStudio: the interface, reading in data, and basic commands.

And to make versioning simpler, this is a solo lab. Additionally, we want to make sure everyone gets a significant amount of time at the steering wheel. Next week you’ll learn about collaborating on GitHub and produce a single lab report for your team.

Getting started

Before you can get started with the analysis, you will first need a GitHub account and to associate that account with the course’s GitHub organization.

Create a GitHub account

Go to github.com and create an account (unless you already have one). Some tips for selecting a username can be found in Happy Git with R by Jenny Bryan.

Join the course organization

Workflow overview

For each assignment in this course you will start with a GitHub repo that I created for you and that contains the starter documents you will build upon when working on your assignment. The first step is always to bring these files into RStudio so that you can edit them, run them, view your results, and interpret them. This action is called cloning.

Then you will work in RStudio on the data analysis, making commits along the way (snapshots of your changes), and finally push all your work back to GitHub.

The next few steps will walk you through the process of getting the URL of the repo to be cloned, launching RStudio, cloning your repo, and getting started with the analysis.

1. Get URL of repo to be cloned

On GitHub, click on the green Clone or download button and select Use HTTPS (this might already be selected by default; if it is, you’ll see the text Clone with HTTPS). Click on the clipboard icon to copy the repo URL.
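The copied URL will look something like the following (the organization and repository names here are placeholders; yours will differ):

https://github.com/course-organization/lab-01-hello-r-username.git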

2. Launch RStudio

This step is a bit tedious, but it’ll soon become second nature. Note that you can access RStudio from anywhere using these steps; you don’t need to be in the computer lab to do so.



3. Clone the repo

And finally we clone the GitHub repo into an RStudio project.
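As an aside, if you ever prefer to do this from the Console instead of RStudio’s New Project dialog, the usethis package (which we load later in this lab) provides a helper for cloning a GitHub repo into a new RStudio project. A minimal sketch, where the "org/repo" spec is a placeholder for your own repo:

usethis::create_from_github("course-organization/lab-01-hello-r-username")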

4. Get working!

Open the R Markdown (Rmd) file called lab-01-hello-r.Rmd. RStudio will likely ask if you would like to install a required package; click Install.

Hello RStudio!

RStudio is composed of four panes.

Packages

R is an open-source language, and developers contribute functionality to R via packages. In this lab we will work with three packages: datasauRus which contains the dataset, tidyverse which is a collection of packages for doing data analysis in a “tidy” way, and usethis for setting our Git credentials.

Load these packages by running the following in the Console.

library(tidyverse) 
library(datasauRus)
library(usethis)

Note that the packages are also loaded with the same commands in your R Markdown document.

Hello Git!

Before we can get started we need to take care of some required housekeeping. Specifically, we need to do some configuration so that RStudio can communicate with GitHub. This requires two pieces of information: your email address and your name. Your email address should be the one tied to your GitHub account, and your name should be your first and last name.

Run the following (but update it for your name and email!) in the Console to configure git:

use_git_config(user.name = "Your Name", 
               user.email = "your.email@address.com")

Warm up

Before we introduce the data, let’s warm up with some simple exercises.

The top portion of your R Markdown file (between the three dashed lines) is called YAML. It stands for “YAML Ain’t Markup Language”. It is a human-friendly data serialization standard for all programming languages. All you need to know is that this area is called the YAML (we will refer to it as such) and that it contains meta information about your document.

YAML

Open the R Markdown (Rmd) file in your project, change the author name to your name, and knit the document.
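For reference, after you update it, the YAML of your document will look roughly like the following (the exact fields and output format in your starter file may differ; the author field is the one to edit):

---
title: "Lab 01 - Hello R!"
author: "Your Name"
output: html_document
---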

Commit

Then go to the Git pane in RStudio.

If you have made changes to your Rmd file, you should see it listed here. Click on it to select it in this list and then click on Diff. This shows you the difference between the last committed state of the document and its current state that includes your changes. If you’re happy with these changes, write “Update author name” in the Commit message box and hit Commit.

You don’t have to commit after every change; this would get quite cumbersome. You should consider committing states that are meaningful to you for inspection, comparison, or restoration. In the first few assignments we will tell you exactly when to commit and in some cases, what commit message to use. As the semester progresses we will let you make these decisions.

Push

Now that you have made an update and committed this change, it’s time to push these changes to the web! Or more specifically, to your repo on GitHub. Why? So that others can see your changes. And by others, we mean the course teaching team (your repos in this course are private to you and us, only).

In order to push your changes to GitHub, click on Push. This will prompt a dialogue box where you first need to enter your user name, and then your password. This might feel cumbersome. Bear with me… We will teach you how to save your password so you don’t have to enter it every time. But for this one assignment you’ll have to enter it manually each time you push in order to gain some experience with it.

Thought exercise

For which of the above steps (changing project name, making updates to the document, committing, and pushing changes) do you need to have an internet connection? Discuss with your classmates.

Data

If it’s confusing that the data frame is called datasaurus_dozen when it contains 13 datasets, you’re not alone! Have you heard of a baker’s dozen?

The data frame we will be working with today is called datasaurus_dozen and it’s in the datasauRus package. Actually, this single data frame contains 13 datasets, designed to show us why data visualization is important and how summary statistics alone can be misleading. The different datasets are marked by the dataset variable.

To find out more about the dataset, type the following in your Console: ?datasaurus_dozen. A question mark before the name of an object will always bring up its help file. This command must be run in the Console.

  1. Based on the help file, how many rows and how many columns does the datasaurus_dozen file have? What are the variables included in the data frame? Add your responses to your lab report. When you’re done, commit your changes with the commit message “Added answer for Ex 1”, and push.
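If you’d like to double-check your answer from the help file, you can also inspect the data frame directly in the Console, for example with glimpse(), which prints the number of rows and columns along with the variables:

glimpse(datasaurus_dozen)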

Let’s take a look at what these datasets are. To do so we can make a frequency table of the dataset variable:

datasaurus_dozen %>%
  count(dataset)
## # A tibble: 13 x 2
##    dataset        n
##    <chr>      <int>
##  1 away         142
##  2 bullseye     142
##  3 circle       142
##  4 dino         142
##  5 dots         142
##  6 h_lines      142
##  7 high_lines   142
##  8 slant_down   142
##  9 slant_up     142
## 10 star         142
## 11 v_lines      142
## 12 wide_lines   142
## 13 x_shape      142

Matejka, Justin, and George Fitzmaurice. “Same stats, different graphs: Generating datasets with varied appearance and identical statistics through simulated annealing.” Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. ACM, 2017.

The original Datasaurus (dino) was created by Alberto Cairo in this great blog post. The other datasets in the dozen were generated using simulated annealing, and the process is described in the paper Same Stats, Different Graphs: Generating Datasets with Varied Appearance and Identical Statistics through Simulated Annealing by Justin Matejka and George Fitzmaurice. In the paper, the authors simulate a variety of datasets that have the same summary statistics as the Datasaurus but very different distributions.

Data visualization and summary

  2. Plot y vs. x for the dino dataset. Then, calculate the correlation coefficient between x and y for this dataset.

Below is the code you will need to complete this exercise. Basically, the answer is already given; you just need to include the relevant bits in your Rmd document, knit it successfully, and view the results.

Start with the datasaurus_dozen and pipe it into the filter function to filter for observations where dataset == "dino". Store the resulting filtered data frame as a new data frame called dino_data.

dino_data <- datasaurus_dozen %>%
  filter(dataset == "dino")

There is a lot going on here, so let’s slow down and unpack it a bit.

First, the pipe operator: %>%, takes what comes before it and sends it as the first argument to what comes after it. So here, we’re saying filter the datasaurus_dozen data frame for observations where dataset == "dino".

Second, the assignment operator: <-, assigns the name dino_data to the filtered data frame.
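In other words, the piped code above is equivalent to calling filter() directly with the data frame as its first argument and assigning the result:

dino_data <- filter(datasaurus_dozen, dataset == "dino")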

Next, we need to visualize these data. We will use the ggplot function for this. Its first argument is the data you’re visualizing. Next we define the aesthetic mappings, i.e. which columns of the data get mapped to which aesthetic features of the plot, e.g. the x axis will represent the variable called x and the y axis will represent the variable called y. Then, we add another layer to this plot where we define which geometric shapes we want to use to represent each observation in the data. In this case we want these to be points, hence geom_point.

ggplot(data = dino_data, mapping = aes(x = x, y = y)) +
  geom_point()

If this seems like a lot, it is. You will learn about the philosophy of building data visualizations in layers in detail in the next class. For now, follow along with the code that is provided.

For the second part of this exercise, we need to calculate a summary statistic: the correlation coefficient. The correlation coefficient, often referred to as \(r\) in statistics, measures the linear association between two variables. You will see that some of the pairs of variables we plot do not have a linear relationship between them. This is exactly why we want to visualize first: visualize to assess the form of the relationship, and calculate \(r\) only if relevant. In this case, calculating a correlation coefficient really doesn’t make sense, since the relationship between x and y is definitely not linear – it’s dinosaurial!
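For reference, for paired observations \((x_1, y_1), \ldots, (x_n, y_n)\), the (Pearson) correlation coefficient that cor() computes is

\[ r = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^2} \, \sqrt{\sum_{i=1}^{n} (y_i - \bar{y})^2}}, \]

where \(\bar{x}\) and \(\bar{y}\) are the means of x and y.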

But, for illustrative purposes, let’s calculate the correlation coefficient between x and y.

Start with dino_data and calculate a summary statistic that we will call r as the correlation between x and y.

dino_data %>%
  summarize(r = cor(x, y))
## # A tibble: 1 x 1
##         r
##     <dbl>
## 1 -0.0645

This is a good place to pause, commit changes with the commit message “Added answer for Ex 2”, and push.

  3. Plot y vs. x for the star dataset. You can (and should) reuse code we introduced above, just replace the dataset name with the desired dataset. Then, calculate the correlation coefficient between x and y for this dataset. How does this value compare to the r of dino?

This is another good place to pause, commit changes with the commit message “Added answer for Ex 3”, and push.

  4. Plot y vs. x for the circle dataset. You can (and should) reuse code we introduced above, just replace the dataset name with the desired dataset. Then, calculate the correlation coefficient between x and y for this dataset. How does this value compare to the r of dino?

You should pause again, commit changes with the commit message “Added answer for Ex 4”, and push.

Facet by the dataset variable, placing the plots in a 3 column grid, and don’t add a legend.

  5. Finally, let’s plot all datasets at once. In order to do this we will make use of faceting.
ggplot(datasaurus_dozen, aes(x = x, y = y, color = dataset)) +
  geom_point() +
  facet_wrap(~ dataset, ncol = 3) +
  theme(legend.position = "none")

And we can use the group_by function to generate all correlation coefficients.

datasaurus_dozen %>%
  group_by(dataset) %>%
  summarize(r = cor(x, y))

You’re done with the data analysis exercises, but we’d like you to do two more things:

Click on the gear icon on top of the R Markdown document, and select “Output Options…” in the dropdown menu. In the pop up dialogue box go to the Figures tab and change the height and width of the figures, and hit OK when done. Then, knit your document and see how you like the new sizes. Change and knit again and again until you’re happy with the figure sizes. Note that these values get saved in the YAML.
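For instance, after changing the defaults, the output section of your YAML might look something like the snippet below (the exact values, and the output format, depend on what you picked):

output:
  html_document:
    fig_width: 6
    fig_height: 4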

You can also use different figure sizes for different figures. To do so, click on the gear icon within the chunk where you want to make a change. Changing the figure sizes adds new options to these chunks: fig.width and fig.height. You can also change them by defining different values directly in your R Markdown document.
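As a rough sketch, a chunk with custom figure dimensions might look like the following in your Rmd source (the chunk label plot-dino is just an illustrative name):

```{r plot-dino, fig.width = 6, fig.height = 4}
ggplot(data = dino_data, mapping = aes(x = x, y = y)) +
  geom_point()
```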

Once again, click on the gear icon on top of the R Markdown document, and select “Output Options…” in the dropdown menu. In the General tab of the pop up dialogue box try out different syntax highlighting and theme options. Hit OK and knit your document to see how it looks. Play around with these until you’re happy with the look.


Not sure how to use emojis on your computer? Maybe a classmate can help? Or you can ask your tutor as well!

Yay, you’re done! Commit all remaining changes, use the commit message “Done with Lab 1! 💪”, and push. Before you wrap up the assignment, make sure all documents are updated on your GitHub repo.



Getting to know you

If you’re done with your lab, use the remaining time to take the Getting to know you survey. If time is up for the lab session, complete the survey at home before tomorrow’s class.