An introduction to {shinyValidator}


2022-11-14

David Granjon, Novartis

Welcome

Hi, I am David Granjon

Senior Software Developer at Novartis.

We’re in for 2 hours of fun!

  • Grab a ☕
  • Make yourself comfortable 🛋 or 🧘
  • Ask questions ❓

Program

  1. Introduction 10 min
  2. Setup {shinyValidator} 20 min
  3. Discover {shinyValidator} 30 min
  4. Customize {shinyValidator} 40 min
  5. Add CI/CD (if time allows)
  6. Q&A

Workshop Material

Clone this repository with the RStudio IDE or via the command line.

git clone https://github.com/RinteRface/rinpharma2022.git
cd rinpharma2022

Then run renv::restore() to install the dependencies.

During the workshop day, a live sandbox platform is accessible at rstd.io/class. The ID will be given in the Zoom chat.

Pre-requisites

If you want to run {shinyValidator} locally (not on CI/CD), you must have:

  • shinycannon installed for the load-test part. See here.

  • A chrome browser installed like chromium.

  • git installed and a GitHub account.

  • A recent R version, ideally R >= 4.1.0.

This workshop is recorded. If something does not work for you, it is better to listen and stay on track. Fear not! You can retry later on your own with the recording.

Introduction

Clothes don’t make the man

Your app may be as beautiful and as cool as you want; it is useless if it does not start or run.

From prototype to production

How do we transition❓

Reliable: is the app doing what it is intended to do?

Stable: how often does it crash?

Available: is the app fast enough to handle multiple concurrent users?

In practice, few apps meet all these requirements 😈.

Available tools

Hex logo of golem package.

  • Easier checking, linting, documentation and testing.
  • Just … easier. 😀

Hex logo of renv package.

  • Pin package versions.
  • Increased reproducibility.

Hex logo of testthat package

  • Unit tests: test business logic.
  • Server testing: test how Shiny modules or pieces work together (with reactivity).
  • UI testing: test UI components, snapshots, headless-testing (shinytest2).
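To make the unit-test layer concrete, here is a minimal {testthat} sketch; the helper add_one is made up for this example:

```r
library(testthat)

# Hypothetical business-logic helper, invented for this example.
add_one <- function(x) x + 1

# Unit test: exercises pure logic only, no Shiny session involved.
test_that("add_one increments its input", {
  expect_equal(add_one(1), 2)
})
```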

Are there bottlenecks?

  • Load testing: how does the app behave with 10 simultaneous users? {shinyloadtest}.
  • Profiling: which part of my app is slow? {profvis}.
  • Reactivity: are there any reactivity issues? {reactlog}.
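As a taste of profiling, a minimal {profvis} sketch; the profiled loop is a made-up slow example:

```r
library(profvis)

# Profile a deliberately slow expression: profvis() records where
# time is spent and, in an interactive session, renders a flame graph.
p <- profvis({
  x <- NULL
  for (i in 1:5000) x <- c(x, i)  # grows a vector by copying each time
})
p  # print in RStudio to open the flame graph
```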

Automate: CI/CD

  • Continuous integration: automatically check new features. 🏥
  • Continuous deployment: automatically deploy content. ✉️
  • Running on a remote environment ☁️:
      • Automated.
      • More reproducible (more OS/R flavors available).
      • Time saver.
      • Less duplication.

Not easy 😢

  • Select DevOps platform (GitLab, GitHub, …).
  • Add version control (git knowledge).
  • Build custom GitLab runner (optional).
  • Write CI/CD instructions (better support for GitHub).

Can’t we make things easier❓

Stop … I am lost …

Sad background image.
  • There are just so many tools! How do I use them properly?
  • Is there a way to automate all of this? I just don’t have time … 😞

Welcome {shinyValidator}

  • Integrates all previously mentioned tools.
  • Produces a single HTML report.
  • Flexible.

Setup {shinyValidator}

{golem}

We create an empty golem project:

path <- file.path("<FOLDER>", "<PKG>") 
golem::create_golem(path)
# ...

{golem}

We add some useful files, a basic test, and a link to git:

path <- file.path("<FOLDER>", "<PKG>") 
golem::create_golem(path)
usethis::use_mit_license() # or whatever license
usethis::use_testthat()
usethis::use_test("dummy")
usethis::use_git()

Put some real server code

Copy this into app_server.R:

output$distPlot <- renderPlot({
  hist(rnorm(input$obs))
})

Copy this into app_ui.R:

fluidPage(
  sliderInput(
    "obs",
    "Number of observations:",
    min = 0,
    max = 1000,
    value = 500
  ),
  plotOutput("distPlot")
)

Create empty GitHub repo

Browse to GitHub and create an empty repository called <PKG> matching the previously created package.

Add remote repo to local

How to init a GitHub repository.

Go to terminal tab under RStudio:

git remote add origin <LINK COPIED FROM GITHUB>
git branch -M main
git push -u origin main

{renv}

Initialize renv for R package dependencies:

system("echo 'RENV_PATHS_LIBRARY_ROOT = ~/.renv/library' >> .Renviron")
Then restart R so that .Renviron is taken into account.

{renv}

system("echo 'RENV_PATHS_LIBRARY_ROOT = ~/.renv/library' >> .Renviron")

# SCAN the project and look for dependencies
renv::init()
# install missing packages
renv::install("<PACKAGE>")
# Capture new dependencies after package installation
renv::snapshot()
Code output showing successful renv setup.

Install {shinyValidator}

devtools::install_github("Novartis/shinyValidator")
library(shinyValidator)
# At the root of your R package
use_validator("github")
devtools::document() # update help
renv::snapshot()

Review the file structure

{shinyValidator}: step by step

Overall concept

%%{init: {'theme':'dark'}}%%
flowchart TD
  subgraph CICD
    direction TB
    subgraph DMC 
      direction LR
      E[Lint] --> F[Quality]
      F --> G[Performance]
    end
    subgraph POC 
      direction LR
      H[Lint] --> I[Quality]
    end
  end
  A(Shiny Project) --> B(DMC App)
  A --> C(Proof of concept App POC)
  B --> |strict| D[Expectations]
  C --> |low| D
  D --> CICD 
  CICD --> |create| J(Global HTML report)
  J --> |deploy| K(Deployment server)
  click A callback "Tooltip for a callback"
  click B callback "DMC: data monitoring committee"
  click D callback "Apps have different expectations"
  click E callback "Lint code: check code formatting, style, ..."
  click F callback "Run R CMD check + headless crash test (shinytest2)"
  click G callback "Optional tests: profiling, load test, ..."
  click J callback "HTML reports with multiple tabs"
  click K callback "RStudio Connect, GitLab/GitHub pages, ..."

Audit app

audit_app() is the main function:

audit_app <- function(
  headless_actions = NULL,
  timeout = NULL,
  scope = c("manual", "DMC", "POC"),
  output_validation = FALSE,
  coverage = TRUE,
  load_testing = TRUE,
  profile_code = TRUE,
  check_reactivity = TRUE,
  ...
) {
  ###
}
  • headless_actions: pass shinytest2 instructions.
  • timeout: time to wait for the app to start.
  • ...: parameters to pass to run_app(), such as database logins, …
  • scope: predefined set of parameters (see examples).

Audit app: example

audit_app(profile_code = FALSE, ...) 

%%{init: {'theme':'dark'}}%%
graph TD
  A(Check) --> B(Crashtest)
  B --> C(Loadtest)
  C --> D(Coverage)
  D --> E(Reactivity)
  click A callback "devtools::check"
  click B callback "{shinytest2}"
  click C callback "{shinyloadtest}"
  click D callback "{covr}"
  click E callback "{reactlog}"

Audit app: using scope parameter

audit_app(scope = "POC", ...)

%%{init: {'theme':'dark'}}%%
graph LR
  A(Check) --> B(Crashtest)

Audit app: headless manipulation (1/2)

# app refers to the headless app instance.
audit_app({
  app$set_inputs(obs = 30)
  app$get_screenshot("plop.png")
  # ... pass any other commands from shinytest2 API
})

shinytest2 hex logo.

This code is run during crash test, profiling and reactivity check.

Headless manipulation: your turn 🎮 (2/2)

Run the following code step by step:

# Start the app 
library(shinytest2)
headless_app <- AppDriver$new("./app.R")
# View the app for debugging (does not work from Workbench!)
headless_app$view()
headless_app$set_inputs(obs = 1)
headless_app$get_value(input = "obs")
# You can also run JS code!
headless_app$run_js(
  "$('#obs')
    .data('shiny-input-binding')
    .setValue(
      $('#obs'), 
      100
    );
  "
)
# Now you can call any function
# Close the connection before leaving 
headless_app$stop()

Headless test debugging tools.

About monkey testing (1/2)

run_crash_test() runs a gremlins.js test if no headless actions are passed.

About monkey testing (2/2)

Your turn 👩‍🔬

  1. Run the app (./app.R) and open it in an external browser.
  2. Open the developer tools (Ctrl + Shift + I on Windows, Option + Command + I on Mac).
  3. Browse to https://marmelab.com/gremlins.js/ and copy the Bookmarklet Code on the right.
  4. Copy this code into the Shiny app HTML inspector JS console.
  5. Enjoy that moment.

Monkey test screenshot.

Report example

Your turn 🎮

  1. From the R console, call shinyValidator::audit_app(scope = "POC").
  2. Look at the log messages.
  3. When done, open public/index.html (external browser).
  4. Explore the report.
  5. Modify app code and rerun …

Pro tip

Cleanup between each run!

After each `shinyValidator::audit_app` run, remove the public/ folder and restart the R session.
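One way to script that cleanup, assuming the report lands in ./public; rstudioapi::restartSession() only works inside the RStudio IDE:

```r
# Remove the generated report folder from the previous run.
unlink("public", recursive = TRUE)

# Restart the R session (RStudio only; otherwise restart manually).
if (requireNamespace("rstudioapi", quietly = TRUE) && rstudioapi::isAvailable()) {
  rstudioapi::restartSession()
}
```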

Improve

Your turn 🎮

Disable other checks

Modify shinyValidator::audit_app parameters:

shinyValidator::audit_app(
  load_testing = FALSE, 
  profile_code = FALSE, 
  check_reactivity = FALSE
)
For learning purposes, we disable load testing, profiling, and the reactivity check for the moment.

Add server testing

Goal: test reactivity and how the pieces work together.

usethis::use_test("app-server-test")

# Inside app-server-test
testServer(app_server, {
  session$setInputs(obs = 0)
  # There should be an error
  expect_error(output$distPlot)
  session$setInputs(obs = 100)
  str(output$distPlot)
})
# Test it
devtools::test()
Server tests run without the UI, so inputs have to be set manually through session.

Run shinyValidator::audit_app and have a look at the coverage tab.

Customize Crash test

Goal: UI feature testing. Is the app starting? Is the background dark? ...

Leverage the power of shinytest2; app refers to the Shiny app under audit.

shinyValidator::audit_app(
  {
    app$set_inputs(obs = 1000)
    app$get_screenshot("plop.png")
  },
  load_testing = FALSE, 
  profile_code = FALSE, 
  check_reactivity = FALSE
)

Run the above code and have a look at the screenshots.

Output checks (1/3)

Goal: track if an output has changed after a commit...

Create this function in helpers.R:

make_hist <- function(val) {
  hist(rnorm(val))
}

Add it to app_server.R:

output$distPlot <- renderPlot({
  make_hist(input$obs)
})

Enable output check in shinyValidator::audit_app:

shinyValidator::audit_app(
  load_testing = FALSE, 
  profile_code = FALSE, 
  check_reactivity = FALSE, 
  output_validation = TRUE
)

Output checks (2/3)

Create a new test:

usethis::use_test("base-plot")

renv::install("vdiffr")
# Inside test-base-plot
test_that("Base plot OK", {
  set.seed(42) # to avoid the test from failing due to randomness :)
  vdiffr::expect_doppelganger("Base graphics histogram", make_hist(500))
})
# Test it
devtools::test()
An SVG snapshot is created during the first run. If you later change the plot, the new output is compared against the stored snapshot.

Output checks (3/3)

  1. We slightly modify make_hist():
make_hist <- function(val) {
  hist(rnorm(val * 2))
}
  2. Run the test again:
devtools::test()
  3. Failures may be reviewed with:
testthat::snapshot_review("base-plot")
  4. Run shinyValidator::audit_app and look at the outputs tab.

Performance: Code profiling (1/3)

Goal: track any performance bottleneck.

Add this to helpers.R:

slow_func <- function(n) {
  vec <- NULL # Or vec = c()
  for (i in seq_len(n))
    vec <- c(vec, i)
  vec
}

Call it in app_server.R:

app_server <- function(input, output, session) {
  output$distPlot <- renderPlot({
    slow_func(5*10^4) # you may reduce if needed
    make_hist(input$obs)
  })
}
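Why is slow_func() slow? c() inside a loop copies the whole vector on every iteration, so the loop is quadratic in n. A vectorized sketch for comparison; fast_func is a made-up name:

```r
slow_func <- function(n) {
  vec <- NULL
  for (i in seq_len(n)) vec <- c(vec, i)  # reallocates on every iteration
  vec
}

# Vectorized equivalent: a single allocation.
fast_func <- function(n) seq_len(n)

identical(slow_func(1000), fast_func(1000))  # same result, far cheaper
```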

Performance: Code profiling (2/3)

Modify the custom headless script by adding a timeout:

shinyValidator::audit_app(
  {
    app$set_inputs(obs = 1000, timeout_ = 15 * 1000)
    app$get_screenshot("plop.png")
  },
  timeout = 15,
  load_testing = FALSE, 
  profile_code = TRUE, 
  check_reactivity = FALSE
)

Run shinyValidator::audit_app and have a look at the profiling tab.

Performance: Code profiling (3/3)

Performance: Load testing (1/2)

Goal: Check if the app can support concurrent user sessions.
Load testing might not work well at the moment...
shinyValidator::audit_app(
  {
    app$set_inputs(obs = 1000, timeout_ = 15 * 1000)
    app$get_screenshot("plop.png")
  },
  timeout = 15,
  load_testing = TRUE, 
  profile_code = TRUE, 
  check_reactivity = FALSE
)

Performance: Load testing (2/2)

Let’s add CI/CD

Pipeline output

shinyValidator pipeline on GitLab CI.

{shinyValidator} CI/CD file

In case you need to control branches triggering {shinyValidator}:

on:
  push:
    branches: [main, master, <CUSTOM_BRANCH>]
  pull_request:
    branches: [main, master, <CUSTOM_BRANCH>]

name: shinyValidator

%%{init: {'theme':'dark'}}%%
gitGraph
  commit
  commit
  branch develop
  checkout develop
  commit
  commit
  checkout main
  merge develop
  commit
  commit id: "Normal" tag: "v1.0.0"

If you have to change the R version, os, …:

strategy:
  fail-fast: false
  matrix:
    config:
      - {os: ubuntu-latest,   r: 'devel', http-user-agent: 'release'}
      - {os: ubuntu-latest,   r: 'release'}
      - {os: ubuntu-latest,   r: 'oldrel-1'}
- name: Lint code
  shell: Rscript {0}
  run: shinyValidator::lint_code()

- name: Audit app 🏥
  shell: Rscript {0}
  run: shinyValidator::audit_app()

- name: Deploy to GitHub pages 🚀
  if: github.event_name != 'pull_request'
  uses: JamesIves/github-pages-deploy-action@4.1.4
  with:
    clean: false
    branch: gh-pages
    folder: public

Example: disable other checks

Modify GitHub actions yaml file:

- name: Audit app 🏥
  shell: Rscript {0}
  run: shinyValidator::audit_app(load_testing = FALSE, profile_code = FALSE, check_reactivity = FALSE)

Run our first pipeline

  1. Make sure GitHub Pages is enabled.
  2. Commit and push the code to GitHub.
  3. You can follow the GitHub actions logs.
  4. When done, open the report and discuss the results.
  5. Time to add some real things!

What’s next?

Test with your own app

Got a top-notch app? Try to set up {shinyValidator} and run it.

Be patient

CI/CD and testing are not easy!

Pro tip: always run the code locally with `audit_app` first, until it runs without errors.

Mastering Shiny UI

Outstanding User Interfaces with Shiny book cover.

Thank you!

Follow me on Fosstodon. @davidgranjon@fosstodon.org

Disclaimer

  • This presentation is based on publicly available information (including data relating to non-Novartis products or approaches).
  • The views presented are the views of the presenter, not necessarily those of Novartis.
  • These slides are intended for educational purposes only and for the personal use of the audience. They are not intended for wider distribution outside the intended purpose without the presenter's approval.
  • The content of this slide deck is accurate to the best of the presenter’s knowledge at the time of production.