Chi-Square Test in R | With Code and Examples

  • Oct 18, 2023
  • 6 Minutes Read
  • By Abhisek Ganguly

Chi-square tests are powerful statistical tools used to determine the association or independence between categorical variables. In the realm of data analysis and statistics, R has become a go-to platform for performing a wide range of tests and analyses. In this article, we will delve into the world of chi-square tests in R, covering everything from the basics to practical examples and interpretation of results.

What is the Chi-Square Test Method in R?

The chi-square test, often written as χ², is a statistical method used to determine if there is an association between two categorical variables. In R, you can perform different types of chi-square tests depending on your data and research question. Let's explore the key methods for conducting chi-square tests in R:

Chi-Square Test of Independence

The chi-square test of independence is used to examine the relationship between two categorical variables. It answers questions like "Is there a significant association between gender and voting preference?" To perform this test in R, you typically create a contingency table and use the `chisq.test()` function.

# Creating a contingency table
data <- matrix(c(100, 50, 30, 20), nrow = 2)
colnames(data) <- c("Male", "Female")
rownames(data) <- c("Vote A", "Vote B")

# Performing the chi-square test of independence
result <- chisq.test(data)


The output will include statistics like the chi-square value, degrees of freedom, and the p-value, which allows you to determine whether there is a significant association between the variables. 
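To dig into that output, note that chisq.test() returns an htest object whose components can be accessed directly. A minimal sketch using the table above (for 2x2 tables, R applies Yates' continuity correction by default):

```r
# Same 2x2 table as above
data <- matrix(c(100, 50, 30, 20), nrow = 2,
               dimnames = list(c("Vote A", "Vote B"), c("Male", "Female")))

result <- chisq.test(data)

result$statistic  # chi-square value (with Yates' correction for 2x2 tables)
result$parameter  # degrees of freedom: (2 - 1) * (2 - 1) = 1
result$p.value    # p-value for the test of independence
result$expected   # expected counts under the null hypothesis
```

Inspecting result$expected is also a quick way to check the test's validity condition that expected cell counts are not too small.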

Chi-Square Goodness-of-Fit Test

The chi-square goodness-of-fit test is used to assess whether the observed frequencies match the expected frequencies. It answers questions like "Do the observed allele frequencies match the expected Hardy-Weinberg equilibrium?" To conduct this test in R, you can use the chisq.test() function with a vector of observed frequencies and a vector of expected probabilities, passed via the p argument.

# Observed and expected frequencies
observed <- c(50, 30, 20)
expected <- c(45, 35, 20)

# Performing the chi-square goodness-of-fit test
result <- chisq.test(observed, p = expected / sum(expected))


This test provides insights into whether the observed data fits a particular theoretical distribution.
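As a sanity check, the statistic chisq.test() reports for a goodness-of-fit test is simply the sum of (O - E)^2 / E over all categories, which is easy to verify by hand. A small sketch reusing the vectors above:

```r
observed <- c(50, 30, 20)
expected <- c(45, 35, 20)
probs <- expected / sum(expected)

result <- chisq.test(observed, p = probs)

# Recompute the statistic by hand: sum((O - E)^2 / E),
# where E is the expected count implied by probs
E <- probs * sum(observed)
by_hand <- sum((observed - E)^2 / E)
```

Here by_hand matches result$statistic, since no continuity correction is applied in the goodness-of-fit case.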

Chi-Square Test for Homogeneity

The chi-square test for homogeneity is used to determine if the distribution of a categorical variable is the same in multiple groups. It answers questions like "Is the preference for a particular smartphone brand the same across different age groups?" To conduct this test in R, you can use the chisq.test() function with a contingency table that represents the different groups.

# Creating a contingency table for smartphone brand preference
data <- matrix(c(30, 15, 25, 10, 40, 20), nrow = 2)
colnames(data) <- c("Age Group 1", "Age Group 2", "Age Group 3")
rownames(data) <- c("Brand A", "Brand B")

# Performing the chi-square test for homogeneity
result <- chisq.test(data)


This test helps you determine if there is a significant difference in the distribution of preferences across the groups.
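Alongside the formal test, comparing column-wise proportions makes the homogeneity question concrete: if the groups share one distribution, the columns of proportions should look alike. A quick sketch with the table above:

```r
data <- matrix(c(30, 15, 25, 10, 40, 20), nrow = 2,
               dimnames = list(c("Brand A", "Brand B"),
                               c("Age Group 1", "Age Group 2", "Age Group 3")))

# Brand shares within each age group; similar columns suggest homogeneity
prop.table(data, margin = 2)

result <- chisq.test(data)
result$parameter  # df = (2 - 1) * (3 - 1) = 2
```

Note that the computation is identical to the test of independence; what differs is the sampling design and the wording of the conclusion.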

Chi-Square Test in R: An Example

Now, let's walk through a real-world example to illustrate how to perform a chi-square test in R. Imagine you have survey data from 500 individuals regarding their preferred mode of transportation to work. The data is categorized into two variables: 'Gender' (Male or Female) and 'Transportation' (Car, Bicycle, or Public Transit).

We want to determine if there is a significant association between gender and mode of transportation.

# Create a sample dataset
set.seed(123)  # For reproducibility
gender <- sample(c("Male", "Female"), 500, replace = TRUE)
transportation <- sample(c("Car", "Bicycle", "Public Transit"), 500, replace = TRUE)
survey_data <- data.frame(Gender = gender, Transportation = transportation)

# Create a contingency table
table_data <- table(survey_data$Gender, survey_data$Transportation)

# Perform the chi-square test of independence
result <- chisq.test(table_data)


The result will provide you with the chi-square statistic, degrees of freedom, and the p-value, which you can use to determine whether there is a significant association between gender and mode of transportation.
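Because both variables in this simulated dataset are sampled independently of each other, the test will typically fail to reject the null hypothesis; printing the table with its margins makes that easy to eyeball. A sketch continuing the example:

```r
set.seed(123)  # For reproducibility
gender <- sample(c("Male", "Female"), 500, replace = TRUE)
transportation <- sample(c("Car", "Bicycle", "Public Transit"), 500, replace = TRUE)
table_data <- table(gender, transportation)

addmargins(table_data)  # observed counts with row and column totals

result <- chisq.test(table_data)
result$p.value  # compare against your chosen significance level, e.g. 0.05
```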

How to Run a Chi-Square Test in R

To run a chi-square test in R, follow these steps:

1. Prepare Your Data: Organize your data into a contingency table or vectors of observed and expected frequencies, depending on the specific chi-square test you want to perform.

2. Choose the Appropriate Chi-Square Test: Select the chi-square test method that matches your research question. Is it a test of independence, goodness-of-fit, or homogeneity?

3. Use the chisq.test() Function: Apply the chisq.test() function to your data, passing the appropriate arguments for your chosen test. This function calculates the chi-square statistic and provides p-values and other statistics.

4. Interpret the Results: Examine the output of the test, paying attention to the chi-square statistic, degrees of freedom, and p-value. The p-value helps you determine the significance of the association or fit.

5. Make Inferences: Based on the p-value, you can make inferences about whether there is a significant relationship between the variables, whether the data fits a particular distribution, or whether the distributions are the same across groups.
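The five steps can be sketched end to end on a built-in dataset; here the HairEyeColor table that ships with base R, collapsed over sex, serves as the contingency table:

```r
# Step 1: prepare the data as a contingency table (Hair x Eye counts)
tab <- margin.table(HairEyeColor, margin = c(1, 2))

# Steps 2-3: this is a test of independence, so pass the table to chisq.test()
result <- chisq.test(tab)

# Step 4: inspect the chi-square statistic, degrees of freedom, and p-value
print(result)

# Step 5: infer; the very small p-value indicates hair and eye
# colour are associated in this sample
result$p.value < 0.05
```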

How to Interpret Chi-Square Results in R

Interpreting chi-square results in R involves assessing the p-value and the chi-square statistic. Here's a guide to interpretation:

Chi-Square Statistic: This value summarizes how far the observed counts deviate from those expected under the null hypothesis. A larger chi-square statistic (relative to its degrees of freedom) indicates a stronger association or a poorer fit.

Degrees of Freedom (df): Degrees of freedom are related to the number of categories and the size of the contingency table. For a chi-square test of independence, df = (r - 1)(c - 1), where 'r' is the number of rows and 'c' is the number of columns in the contingency table.

P-Value: The p-value measures the probability of obtaining the observed results (or more extreme results) under the null hypothesis. In general, a smaller p-value (typically ≤ 0.05) indicates a significant relationship or a poor fit.

  • If p ≤ 0.05: Reject the null hypothesis, suggesting a significant relationship or a poor fit.
  • If p > 0.05: Fail to reject the null hypothesis, suggesting no significant relationship or a good fit.
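This decision rule is mechanical enough to wrap in a small helper. In the sketch below, interpret_chisq is a hypothetical function name chosen for illustration, not part of base R:

```r
# Hypothetical helper encoding the alpha = 0.05 decision rule
interpret_chisq <- function(result, alpha = 0.05) {
  if (result$p.value <= alpha) {
    "Reject the null hypothesis (significant association / poor fit)"
  } else {
    "Fail to reject the null hypothesis (no significant association / good fit)"
  }
}

m <- matrix(c(100, 50, 30, 20), nrow = 2)
interpret_chisq(chisq.test(m))  # this 2x2 table yields p > 0.05
```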

Remember that statistical significance does not imply practical significance. Even if a relationship is statistically significant, it might not have meaningful implications in the real world. It's essential to consider the context and domain knowledge when interpreting the results.

Conclusion

R offers a robust platform for conducting chi-square tests to explore associations and fit in categorical data. By following the steps outlined in this article and understanding how to interpret the results, you can harness the power of chi-square tests in R for insightful data analysis and hypothesis testing. Whether you are a researcher, data analyst, or student, mastering this statistical tool in R is a valuable skill in your data science toolkit.


About The Author
Abhisek Ganguly
Passionate machine learning enthusiast with a deep love for computer science, dedicated to pushing the boundaries of AI through academic research and sharing knowledge through teaching.