Agenda



  1. About Me
  2. Data Science Issues
  3. Intro to Tidymodels
  4. Intro to Positron
  5. Live Demo

1. About Me

Professional Background

💼 Career
|- EY - Manager, Valuation, Modeling, & Economics
|- PG&E - Supervisor, Capital Recovery & Analysis
|- KPMG - Sr Manager, Economics & Valuation
|- Centene - Data Scientist III, Strategic Insights
|- Bloomreach - Sr Manager, Data Ops & Analytics
|- Centene - Lead Machine Learning Engineer

📚 Education
|- Georgia Tech - BS Management
|- UC Irvine - MS Business Analytics

Technical Skills Gained

💼 Career
|- EY - Microsoft Excel, Microsoft Access, Financial Modeling
|- PG&E - SQL, ODBC (connecting Access / Excel to EDWs)
|- KPMG - VBA scripting, Excel add-ins (Power Query and Power BI)
|- Centene - R, Web Apps, Package Dev, ML / AI, Cloud DS Tools
|- Bloomreach - GCP, Amazon Redshift, Linux, Google Workspace
|- Centene - Docker, k8s, Databricks, Linux, bash, GenAI + LLMs

📚 Education
|- Georgia Tech - Finance, Business Management
|- UC Irvine - Business Analytics, Data Science

2. Data Science Issues

AI… It’s an umbrella

AI… It’s a growing umbrella

Classical ML… Another umbrella

ML Issues

  • How should I measure my baseline?

  • Which ML algorithm(s) should I use?

  • How should I split my training and testing data?

  • How should I evaluate my model fit?

  • Which performance evaluation metric(s) should I use?

  • Every ML package has its own unique set of functions and arguments

and the most annoying issue…

data

sucks!

Data sucks!

  • Missing data (NAs, nulls, etc.)
    • You may need to impute missing values
  • Categorical variables
    • ML algorithms might require dummy or one-hot encoding
  • Data type issues
    • For example, character strings and numbers in the same column
  • Too many variables
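
A minimal base-R sketch (not from the slides) of the first three issues above; the tidymodels {recipes} equivalents appear later in this deck:

# Toy data with the usual problems: a missing value, a categorical column,
# and numbers stored as character strings
df <- data.frame(
  age  = c(22, NA, 35),
  sex  = c("male", "female", "male"),
  fare = c("7.25", "71.28", "8.05")
)

df$age[is.na(df$age)] <- mean(df$age, na.rm = TRUE)   # impute the missing value
df$fare <- as.numeric(df$fare)                        # fix the data type
model.matrix(~ sex, data = df)[, -1]                  # dummy / one-hot encode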

3. Intro to Tidymodels

What is Tidymodels?

  • A collection of R packages for reproducible ML

  • Follows tidy principles:
    • Consistent interface
    • Human-readable code
    • Reproducible workflows

  • Provides a unified syntax for ML
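
As a quick illustration of that unified interface, here’s a minimal {parsnip} sketch (not from the original slides) where only the engine or algorithm changes:

library(tidymodels)

# The same logistic_reg() specification, pointed at different computational engines
logistic_reg() |> set_engine("glm")
logistic_reg(penalty = 0.01) |> set_engine("glmnet")

# Switching algorithms keeps the same grammar
rand_forest(trees = 500) |> set_engine("ranger") |> set_mode("classification")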

The ML Workflow (with Tidymodels)

Let’s Build a Binary Classification Model!



Our tidymodels workflow will follow the steps below:



  0. Load libraries & data
  1. Split data
  2. Create recipe
  3. Specify model
  4. Create workflow
  5. Train model
  6. Evaluate performance
  7. Visualize performance
  8. Setup for tuning
  9. Create CV folds
  10. Define the tuning grid
  11. Tune the model
  12. Visualize tuning results
  13. Select the best model
  14. Final fit
  15. Variable importance

The Big Picture: Logistic Regression

  • Logistic regression is a statistical method that uses the logistic (or “sigmoid”) function to model the probability of a binary outcome based on independent (or “predictor”) variables

  • Our modeling goal: Predict the survival of Titanic passengers:

    • The binary target variable is Survived
      • 0 = did not survive the Titanic 😢
      • 1 = survived the Titanic 🎉
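
For intuition, the logistic (sigmoid) function is available in base R as plogis(); a quick sketch of how it squeezes any real-valued input into a probability:

# The logistic function maps any real number to a probability in (0, 1)
plogis(c(-4, 0, 4))
[1] 0.01798621 0.50000000 0.98201379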

The Big Picture: Logistic Regression

  • The logistic function maps to an S-shaped curve

  • In our Titanic example, the S-curve:

    • Shows how the probability of survival changes from low to high (or vice versa)
    • Illustrates a gradual transition between outcomes rather than a sudden jump
  • Age is one of the independent variables we’ll use to predict a passenger’s likelihood of survival, so let’s look at an S-curve mapping the probability of survival by Age (sketched below)
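
Here’s a minimal sketch of that S-curve, assuming the titanic_train data introduced later in the deck; it fits a one-predictor model just for the plot:

library(titanic)
library(ggplot2)

# Fit a single-predictor logistic regression: survival as a function of Age
fit_age <- glm(Survived ~ Age, data = titanic_train, family = binomial)

# Predict over a grid of ages and plot the resulting S-curve
age_grid <- data.frame(Age = 0:80)
age_grid$prob <- predict(fit_age, newdata = age_grid, type = "response")

ggplot(age_grid, aes(Age, prob)) +
  geom_line() +
  labs(x = "Age", y = "Predicted probability of survival")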

The Big Picture: Logistic Regression

The Big Picture: log reg with glm()

base R’s glm()

# Convert certain variables to factors to treat as categorical
titanic$Survived <- as.factor(titanic$Survived)
titanic$Pclass <- as.factor(titanic$Pclass)
titanic$Sex <- as.factor(titanic$Sex)

# Impute missing values for Age and Fare with their respective means
titanic$Age[is.na(titanic$Age)]   <- mean(titanic$Age, na.rm = TRUE)
titanic$Fare[is.na(titanic$Fare)] <- mean(titanic$Fare, na.rm = TRUE)

# Fit the logistic regression model
titanic_fit <- glm(Survived ~ Pclass + Sex + Age + SibSp + Parch + Fare, 
                   data = titanic, 
                   family = binomial)

tidymodels set_engine("glm")

# Convert relevant variables to factors
titanic <- titanic |> 
  mutate(Survived = factor(Survived))

# Setup recipe
titanic_recipe <- recipe(Survived ~ Pclass + Sex + Age + SibSp + Parch + Fare, 
                         data = titanic) |> 
  step_impute_mean(all_numeric_predictors()) |> 
  step_dummy(all_nominal_predictors())

# Setup model specification and workflow pipeline
titanic_spec <- logistic_reg() |> 
  set_engine("glm") |> 
  set_mode("classification")

titanic_workflow <- workflow() |> 
  add_recipe(titanic_recipe) |> 
  add_model(titanic_spec)

titanic_fit <- titanic_workflow |> 
  fit(data = titanic)

The Big Picture: log reg with {glmnet}

R’s {glmnet}

# Convert certain variables to factors to treat as categorical
titanic$Survived <- as.factor(titanic$Survived)
titanic$Pclass <- as.factor(titanic$Pclass)
titanic$Sex <- as.factor(titanic$Sex)

# Impute missing values for Age and Fare with their respective means
titanic$Age[is.na(titanic$Age)]   <- mean(titanic$Age, na.rm = TRUE)
titanic$Fare[is.na(titanic$Fare)] <- mean(titanic$Fare, na.rm = TRUE)

# Create a design matrix for predictors
### model.matrix() automatically handles factors by creating dummy variables
### Drop the intercept column (first column) with [, -1] because {glmnet} fits its own intercept by default
x <- model.matrix(Survived ~ Pclass + Sex + Age + SibSp + Parch + Fare,
                  data = titanic)[, -1]

# Create the response vector
y <- titanic$Survived

# Fit the logistic regression model using glmnet (for a range of lambda values)
glmnet_fit <- glmnet(x, y, family = "binomial")

tidymodels set_engine("glmnet")

# Convert relevant variables to factors
titanic <- titanic |> 
  mutate(Survived = factor(Survived))

# Setup recipe
titanic_recipe <- recipe(Survived ~ Pclass + Sex + Age + SibSp + Parch + Fare, 
                         data = titanic) |> 
  step_impute_mean(all_numeric_predictors()) |> 
  step_dummy(all_nominal_predictors())

# Setup model specification and workflow pipeline
titanic_spec <- logistic_reg() |> 
  set_engine("glmnet") |> 
  set_mode("classification")

titanic_workflow <- workflow() |> 
  add_recipe(titanic_recipe) |> 
  add_model(titanic_spec)

titanic_fit <- titanic_workflow |> 
  fit(data = titanic)

Step 0: Load Libraries & Data

# Load necessary packages
library(tidyverse)
library(tidymodels)
library(titanic)

# Load and prepare data
data(titanic_train)
titanic_data <- as_tibble(titanic_train) |> 
  mutate(Survived = factor(Survived, levels = c(0, 1)))  # Convert the target variable to a factor

# Take a look at the data
glimpse(titanic_data)
Rows: 891
Columns: 12
$ PassengerId <int> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17,…
$ Survived    <fct> 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1…
$ Pclass      <int> 3, 1, 3, 1, 3, 3, 1, 3, 3, 2, 3, 1, 3, 3, 3, 2, 3, 2, 3, 3…
$ Name        <chr> "Braund, Mr. Owen Harris", "Cumings, Mrs. John Bradley (Fl…
$ Sex         <chr> "male", "female", "female", "female", "male", "male", "mal…
$ Age         <dbl> 22, 38, 26, 35, 35, NA, 54, 2, 27, 14, 4, 58, 20, 39, 14, …
$ SibSp       <int> 1, 1, 0, 1, 0, 0, 0, 3, 0, 1, 1, 0, 0, 1, 0, 0, 4, 0, 1, 0…
$ Parch       <int> 0, 0, 0, 0, 0, 0, 0, 1, 2, 0, 1, 0, 0, 5, 0, 0, 1, 0, 0, 0…
$ Ticket      <chr> "A/5 21171", "PC 17599", "STON/O2. 3101282", "113803", "37…
$ Fare        <dbl> 7.2500, 71.2833, 7.9250, 53.1000, 8.0500, 8.4583, 51.8625,…
$ Cabin       <chr> "", "C85", "", "C123", "", "", "E46", "", "", "", "G6", "C…
$ Embarked    <chr> "S", "C", "S", "S", "S", "Q", "S", "S", "S", "C", "S", "S"…

Step 1a: Split the Data

# Split the data (create a singular object containing training and testing splits)
set.seed(123)
titanic_split <- initial_split(
  data = titanic_data, 
  prop = 0.75, 
  strata = Survived # stratify split by Survived column
)

# Create training and testing datasets
train_data <- training(titanic_split)
test_data <- testing(titanic_split)

Step 1b: Take a glimpse() at the splits

# Use dplyr::glimpse() to review the training split
glimpse(train_data)
Rows: 667
Columns: 12
$ PassengerId <int> 6, 7, 8, 13, 14, 15, 19, 21, 25, 27, 28, 31, 36, 38, 41, 4…
$ Survived    <fct> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
$ Pclass      <int> 3, 1, 3, 3, 3, 3, 3, 2, 3, 3, 1, 1, 1, 3, 3, 2, 3, 3, 3, 3…
$ Name        <chr> "Moran, Mr. James", "McCarthy, Mr. Timothy J", "Palsson, M…
$ Sex         <chr> "male", "male", "male", "male", "male", "female", "female"…
$ Age         <dbl> NA, 54, 2, 20, 39, 14, 31, 35, 8, NA, 19, 40, 42, 21, 40, …
$ SibSp       <int> 0, 0, 3, 0, 1, 0, 1, 0, 3, 0, 3, 0, 1, 0, 1, 1, 0, 0, 1, 2…
$ Parch       <int> 0, 0, 1, 0, 5, 0, 0, 0, 1, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0…
$ Ticket      <chr> "330877", "17463", "349909", "A/5. 2151", "347082", "35040…
$ Fare        <dbl> 8.4583, 51.8625, 21.0750, 8.0500, 31.2750, 7.8542, 18.0000…
$ Cabin       <chr> "", "E46", "", "", "", "", "", "", "", "", "C23 C25 C27", …
$ Embarked    <chr> "Q", "S", "S", "S", "S", "S", "S", "S", "S", "C", "S", "C"…

Step 1c: Check the Stratification

# Review the proportion of 0s and 1s in each split
train_data |> 
  summarise(Train_Rows = n(), .by = Survived) |> 
  mutate(Train_Percent = Train_Rows / sum(Train_Rows)) |> 
  left_join(test_data |> 
    summarise(Test_Rows = n(), .by = Survived) |> 
    mutate(Test_Percent = Test_Rows / sum(Test_Rows)),
  join_by(Survived))
# A tibble: 2 × 5
  Survived Train_Rows Train_Percent Test_Rows Test_Percent
  <fct>         <int>         <dbl>     <int>        <dbl>
1 0               411         0.616       138        0.616
2 1               256         0.384        86        0.384

Step 2: Create a Modeling Recipe

# Create a pre-processing recipe
titanic_recipe <- recipe(Survived ~ Pclass + Sex + Age + SibSp + Parch + Fare, 
                         data = train_data) |>
  step_impute_median(all_numeric_predictors()) |> # Handle missing values in Age
  step_dummy(all_nominal_predictors()) |>         # Convert categorical variables to dummy variables
  step_normalize(all_numeric_predictors())        # Normalize numeric predictors

titanic_recipe
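
To sanity-check what the recipe will do, you can prep() and bake() it on the training data (a quick sketch, not part of the original workflow):

# Preview the pre-processed data the recipe will hand to the model
titanic_recipe |>
  prep() |>
  bake(new_data = NULL) |>
  glimpse()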

Step 3: Specify the Model

With {parsnip}, you have a unified interface for ML:

# Specify a logistic regression model
model_spec <- logistic_reg() |>
  set_engine("glm") |>
  set_mode("classification")

model_spec
Logistic Regression Model Specification (classification)

Computational engine: glm 

Step 4: Create a Workflow

# Create a workflow
titanic_workflow <- workflow() |>
  add_recipe(titanic_recipe) |>
  add_model(model_spec)

titanic_workflow
══ Workflow ════════════════════════════════════════════════════════════════════
Preprocessor: Recipe
Model: logistic_reg()

── Preprocessor ────────────────────────────────────────────────────────────────
3 Recipe Steps

• step_impute_median()
• step_dummy()
• step_normalize()

── Model ───────────────────────────────────────────────────────────────────────
Logistic Regression Model Specification (classification)

Computational engine: glm 

Step 5: Train the Model

# Fit the workflow on the training data and evaluate it on the test data (last_fit() does both)
titanic_fit <- titanic_workflow |> 
  last_fit(
    split = titanic_split, 
    metrics = metric_set(roc_auc, accuracy)
  )

titanic_fit
# Resampling results
# Manual resampling 
# A tibble: 1 × 6
  splits            id               .metrics .notes   .predictions .workflow 
  <list>            <chr>            <list>   <list>   <list>       <list>    
1 <split [667/224]> train/test split <tibble> <tibble> <tibble>     <workflow>

Step 6: Evaluate the Model

# Collect metrics from the `last_fit()` model object
collect_metrics(titanic_fit)
# A tibble: 2 × 4
  .metric  .estimator .estimate .config             
  <chr>    <chr>          <dbl> <chr>               
1 accuracy binary         0.804 Preprocessor1_Model1
2 roc_auc  binary         0.878 Preprocessor1_Model1

Step 7: Visualize Model Performance

# Plot confusion matrix
titanic_fit |>
  collect_predictions() |> 
  conf_mat(truth = Survived, estimate = .pred_class) |> 
  autoplot(type = "heatmap")
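
The same collected predictions can also feed an ROC curve; a minimal sketch (the .pred_0 column name assumes the default probability columns produced by last_fit()):

# Plot the ROC curve from the collected predictions
titanic_fit |>
  collect_predictions() |>
  roc_curve(truth = Survived, .pred_0) |>
  autoplot()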

Step 8: Hyperparameter Tuning

Let’s try modeling with {glmnet} and tuning

# Create a tunable model specification
model_spec_tune <- logistic_reg(
  penalty = tune(),
  mixture = tune()
) |>
  set_engine("glmnet") |>
  set_mode("classification")

# Create a tuning workflow
tune_workflow <- workflow() |>
  add_recipe(titanic_recipe) |>
  add_model(model_spec_tune)

Step 9: Create Cross-Validation Folds

# Create cross-validation folds
set.seed(234)
titanic_folds <- vfold_cv(train_data, v = 5, strata = Survived)

titanic_folds
#  5-fold cross-validation using stratification 
# A tibble: 5 × 2
  splits            id   
  <list>            <chr>
1 <split [532/135]> Fold1
2 <split [534/133]> Fold2
3 <split [534/133]> Fold3
4 <split [534/133]> Fold4
5 <split [534/133]> Fold5

Step 9: Create Cross-Validation Folds

Image: © Posit Software, PBC.

Step 10: Define the Tuning Grid

# Create a grid of hyperparameters to try
log_grid <- grid_space_filling(
  penalty(),
  mixture(),
  size = 10
)

log_grid
# A tibble: 10 × 2
         penalty mixture
           <dbl>   <dbl>
 1 0.0000000001    0.333
 2 0.00000000129   0.778
 3 0.0000000167    0    
 4 0.000000215     0.444
 5 0.00000278      0.889
 6 0.0000359       0.111
 7 0.000464        0.556
 8 0.00599         1    
 9 0.0774          0.222
10 1               0.667
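
If you prefer an exhaustive grid over a space-filling design, grid_regular() is a drop-in alternative (a sketch; not used in the results that follow):

# A regular grid: every combination of 5 penalty values and 5 mixture values
log_grid_regular <- grid_regular(
  penalty(),
  mixture(),
  levels = 5
)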

Step 11: Tune the Model

# Tune the model
set.seed(345)
log_tuning_results <- tune_grid(
  tune_workflow,
  resamples = titanic_folds,
  grid = log_grid,
  metrics = metric_set(roc_auc, accuracy)
)

log_tuning_results
# Tuning results
# 5-fold cross-validation using stratification 
# A tibble: 5 × 4
  splits            id    .metrics          .notes          
  <list>            <chr> <list>            <list>          
1 <split [532/135]> Fold1 <tibble [20 × 6]> <tibble [0 × 3]>
2 <split [534/133]> Fold2 <tibble [20 × 6]> <tibble [0 × 3]>
3 <split [534/133]> Fold3 <tibble [20 × 6]> <tibble [0 × 3]>
4 <split [534/133]> Fold4 <tibble [20 × 6]> <tibble [0 × 3]>
5 <split [534/133]> Fold5 <tibble [20 × 6]> <tibble [0 × 3]>

Step 12: Visualize Tuning Results

# Show the best models
show_best(log_tuning_results, metric = "roc_auc")
# A tibble: 5 × 8
        penalty mixture .metric .estimator  mean     n std_err .config          
          <dbl>   <dbl> <chr>   <chr>      <dbl> <int>   <dbl> <chr>            
1 0.0000000167    0     roc_auc binary     0.843     5  0.0121 Preprocessor1_Mo…
2 0.00599         1     roc_auc binary     0.842     5  0.0106 Preprocessor1_Mo…
3 0.00000000129   0.778 roc_auc binary     0.841     5  0.0112 Preprocessor1_Mo…
4 0.0000359       0.111 roc_auc binary     0.841     5  0.0111 Preprocessor1_Mo…
5 0.00000278      0.889 roc_auc binary     0.841     5  0.0112 Preprocessor1_Mo…

Step 12: Visualize Tuning Results (cont.)

# Create a visualization of the tuning results
autoplot(log_tuning_results)

Step 13: Select the Best Model

# Select the best hyperparameters
best_params <- select_best(log_tuning_results, metric = "roc_auc")

best_params
# A tibble: 1 × 3
       penalty mixture .config              
         <dbl>   <dbl> <chr>                
1 0.0000000167       0 Preprocessor1_Model01
# Finalize the workflow with the best parameters
final_workflow <- finalize_workflow(tune_workflow, best_params)

Step 14: Final Fit

# Fit the final model to the entire training set and evaluate on test set
final_fit <- final_workflow |>
  last_fit(titanic_split)

# Get the metrics
collect_metrics(final_fit)
# A tibble: 3 × 4
  .metric     .estimator .estimate .config             
  <chr>       <chr>          <dbl> <chr>               
1 accuracy    binary         0.804 Preprocessor1_Model1
2 roc_auc     binary         0.876 Preprocessor1_Model1
3 brier_class binary         0.136 Preprocessor1_Model1

Step 15: Variable Importance

# Extract the fitted workflow
fitted_workflow <- extract_workflow(final_fit)

# Extract the fitted model
fitted_model <- extract_fit_parsnip(fitted_workflow)

# Calculate variable importance
vip::vip(fitted_model)
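
Because the tuned model is a penalized logistic regression, the importance scores mirror the coefficient magnitudes; you can also inspect them directly (a sketch, assuming tidy() support for the glmnet fit):

# Coefficients of the final fit, largest absolute effects first
tidy(fitted_model) |>
  arrange(desc(abs(estimate)))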

Tidymodels Benefits

  • Consistent Interface: Same syntax across different ML algorithms
  • Modularity: Each step is a separate function that can be modified
  • Reproducibility: Workflows capture the entire modeling process
  • Extensibility: Easy to add new steps or algorithms
  • Visualization: Built-in tools for visualizing results
  • Tuning: Streamlined process for hyperparameter optimization

4. Intro to Positron

A History of IDEs & Notebooks: RStudio


Image: © Posit Software, PBC.

A History of IDEs & Notebooks: RStudio

Image: © Posit Software, PBC.

A History of IDEs & Notebooks: VS Code

A History of IDEs & Notebooks: Jupyter

IDE Similarities & Differences

  • RStudio IDE’s “always on” panes welcomed R users seeking a data-analysis-first experience

  • For Python users, RStudio felt too R-centric, and other tools (VS Code, Jupyter Notebooks, PyCharm, etc.) worked just fine

  • There are many programming languages that can be used for data analysis, but Python and R are the de facto standards for data science

  • What has been missing is a unifying IDE for both Python and R users

Introducing Positron, it looks familiar!

About Positron

  • What is Positron? From Posit’s getting started docs:
    • A next-generation data science IDE built by Posit PBC
    • An extensible, polyglot tool for writing code and exploring data
    • A familiar environment for reproducible authoring and publishing
  • Positron is a tailor-made IDE for data science built on top of Code OSS that can be used with any combination of programming languages

VS Code OSS w/ RStudio panes!

Prerequisites


  • Windows prereqs:

    • Ensure the latest Visual C++ Redistributable is installed

    • If you’re an R package developer, note that Positron doesn’t currently bundle Rtools

    • For reference, Rtools contains the compilers needed to build R packages from source on Windows

Prerequisites


  • Python prereqs:

    • The Posit team recommends pyenv to manage Python versions, and Python versions from 3.8 to 3.12 are actively supported on Positron

    • For Linux users, install the SQLite system libraries (sqlite-devel or libsqlite3-dev) ahead of time so pyenv can build your Python version(s) of choice

    • Positron communicates with Python via ipykernel

    • If you’re using venv or conda to manage your Python projects, you can install ipykernel manually as follows: python3 -m pip install ipykernel

Prerequisites


  • R prereqs:

    • Positron requires R 4.2 or higher; to install R, follow the instructions for your OS at https://cloud.r-project.org

    • If you’d like to have multiple R installations, rig is a great tool that works on macOS, Windows and Linux, and works well with Positron

Interpreter Selector


  • When Positron starts for the first time in a new workspace (or project directory), it will start Python and/or R depending on your workspace characteristics

  • In subsequent runs, Positron will start the same interpreter(s) that were running the last time you used that workspace

  • You can start, stop, and switch interpreters from the interpreter selector

Key Bindings & Command Palette

  • Key bindings trigger actions by pressing a combination of keys
  • The key binding Cmd/Ctrl+Shift+P will bring up Positron’s Command Palette
  • This lets you search and execute actions without needing to remember the key binding


Global Keyboard Shortcuts


  • Cmd/Ctrl+Enter: Run the selected code in the editor; if no code is selected, run the current statement
  • Cmd/Ctrl+Shift+0: Restart the interpreter currently open in the Console
  • Cmd/Ctrl+Shift+Enter: Run the file open in the editor (using e.g. source() or %run)
  • F1: Show contextual help for the topic under the cursor
  • Cmd/Ctrl+K, Cmd/Ctrl+R: Show contextual help for the topic under the cursor (alternate binding)
  • Cmd/Ctrl+K, F: Focus the Console
  • Ctrl+L: Clear the Console

R Keyboard Shortcuts


  • Cmd/Ctrl+Shift+M: Insert the pipe operator (|> or %>%)
  • Alt+-: Insert the assignment operator (<-)
  • Cmd/Ctrl+Shift+L: Load the current R package, if any
  • Cmd/Ctrl+Shift+B: Build and install the current R package, if any
  • Cmd/Ctrl+Shift+T: Test the current R package, if any
  • Cmd/Ctrl+Shift+E: Check the current R package, if any
  • Cmd/Ctrl+Shift+D: Document the current R package, if any

RStudio Keymap


If you’re an experienced RStudio user, you can easily set the RStudio keybindings in the Positron settings:

  • Open Positron’s settings via the UI or with the keyboard shortcut Cmd/Ctrl+,
  • Search for “keymap”, or navigate to Extensions > RStudio Keymap
  • Check the “Enable RStudio key mappings for Positron” checkbox

Data Explorer Overview


  • The new Data Explorer allows for interactive exploration of various types of dataframes using Python (pandas, polars) or R (data.frame, tibble, data.table, polars)

  • The Data Explorer has three primary components
    • Data grid: Spreadsheet-like display of the data with sorting
    • Summary panel: Column name, type and missing data percentage for each column
    • Filter bar: Ephemeral filters for specific columns

Data Explorer Overview


  • To use, navigate to the Variables Pane and click on the Data Explorer icon:

Data Explorer Overview

Data Explorer’s Data Grid


  • The data grid is the primary display and scales efficiently with large in-memory datasets up to millions of rows or columns

  • At the top right of each column, there is a context menu that controls sorting and filtering in the selected column

Data Explorer’s Summary Panel


  • Displays a vertical scrolling list of all columns in the data

  • For each column, it shows a sparkline histogram of the data, the amount of missing data, and some summary statistics

  • Double-clicking a column name brings that column into focus in the data grid

Data Explorer’s Filter Bar


  • The filter bar has controls to add filters, show/hide existing filters, or clear filters

  • Clicking the + button quickly adds a new filter

  • The status bar at the bottom of the Data Explorer also displays the percentage and number of remaining rows relative to the original total after applying a filter

Connections Pane


  • Explore database connections established with ODBC drivers or packages

  • For Python users, the sqlite3 and SQLAlchemy packages are supported

  • For R users, packages such as odbc, sparklyr, and bigrquery are supported
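
For example, opening a DBI connection from R should surface it in the Connections Pane; a minimal sketch assuming an ODBC data source named "my_dsn" is already configured on your machine:

library(DBI)

# Connect via the odbc package (the DSN name is a placeholder)
con <- dbConnect(odbc::odbc(), dsn = "my_dsn")

# The same connection can also be explored from code
dbListTables(con)
dbDisconnect(con)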

Interactive Apps


  • Instead of running apps from a Terminal, Positron lets you run supported apps by clicking the Play button in Editor Actions

  • Supported apps include the following: Shiny, Dash, FastAPI, Flask, Gradio, and Streamlit

  • You can also start apps in Debug mode
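
For instance, with a minimal Shiny app like the sketch below open in the editor, the Play button runs it without a shiny::runApp() call in a Terminal:

library(shiny)

ui <- fluidPage(
  sliderInput("n", "Number of points", min = 10, max = 500, value = 100),
  plotOutput("scatter")
)

server <- function(input, output, session) {
  output$scatter <- renderPlot(plot(rnorm(input$n), rnorm(input$n)))
}

shinyApp(ui, server)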

Learn More about Positron

5. Live Demo

Let’s see Positron in action!

Thank you! 🤍

questions?



Connect with me!