About Mental Health Website
Introduction
Mental Health Website is a public, open-source resource for the mental health community that anyone can contribute to. We have a small core team dedicated to maintaining and developing the site with contributions from thousands of community members across the globe.
Security note: nobody from Mental Health Website will ever contact you asking for personal information, credentials, or payment. Do not respond to such messages.
A Note on Mental Health
It is common for individuals to confuse terms within the mental health landscape, which can lead to poor mental models about how mental health works. Here is a brief explanation to clarify these concepts:
Mental Health Basics
Mental health is a public concern, a personal journey, and an open-source collaborative effort, operated, governed, managed, and owned by a global community of tens of thousands of professionals, researchers, supporters, and users.
Our Team
| Name | Role | Email |
| --- | --- | --- |
| GALIH RIDHO UTOMO | Developer | g4lihru@students.unnes.ac.id |
| Ana Maulida | Scientific Writer | anamaulida@students.unnes.ac.id |
Features
Our website provides various tools and resources to help you monitor and improve your mental health:
- Real-time emotion detection using facial analysis
- Mental health tracking with sensor data integration
- Personalized advice based on your emotional state
- Community support and resources for mental health awareness
Code Examples
Below is the complete code from our face-analysis.js module, which you can use to integrate emotion detection into your own projects. Each function is explained in detail, along with the mathematical formulas that represent its logic:
1. Hash Function (djb2Hash)
This function generates a unique hash value for a given string. It is based on the DJB2 algorithm, which is a simple and efficient hashing algorithm.
// djb2Hash: Generates a hash value for a string
function djb2Hash(str) {
let hash = 5381; // Initial hash value
for (let i = 0; i < str.length; i++) {
hash = ((hash * 33) ^ str.charCodeAt(i)) >>> 0; // Multiply by 33, XOR with char code, force unsigned 32-bit
}
return hash >>> 0;
}
Mathematical Representation:
\[ h_0 = 5381 \\ h_i = ((h_{i-1} \times 33) \oplus c_i) \mod 2^{32}, \quad \text{for } i = 1,2,\ldots,n \]
Explanation:
- \(h_0 = 5381\): Initial hash value
- \(h_i\): Hash value after processing the \(i\)-th character
- \(c_i\): ASCII/Unicode value of the \(i\)-th character (obtained using charCodeAt(i))
- \(\oplus\): Bitwise XOR operation
- \(\mod 2^{32}\): Conversion to unsigned 32-bit integer (equivalent to >>> 0 in JavaScript)
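The hash described above can be exercised in isolation. This is a minimal standalone sketch; the input string is arbitrary and chosen only for illustration:

```javascript
// Standalone copy of the djb2 hash for experimentation.
function djb2Hash(str) {
  let hash = 5381; // DJB2 magic initial value
  for (let i = 0; i < str.length; i++) {
    // Multiply by 33, XOR in the char code, keep unsigned 32 bits
    hash = ((hash * 33) ^ str.charCodeAt(i)) >>> 0;
  }
  return hash >>> 0;
}

// The result is always an unsigned 32-bit integer, so the same input
// always yields the same hash.
const h = djb2Hash("example-face-string");
console.log(h.toString(16)); // hexadecimal form, as used later by generateFaceID
```

Because the output is deterministic and bounded to \([0, 2^{32})\), it is suitable as a compact identifier, though (like any 32-bit hash) collisions are possible in principle.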
2. Generate Face ID (generateFaceID)
This function converts facial landmarks into a unique hash string. It normalizes the landmark coordinates and uses the djb2Hash
function to generate a unique FaceID.
// generateFaceID: Converts facial landmarks to a unique hash
function generateFaceID(landmarks) {
// Normalize landmark coordinates with 4 decimal precision
let hashString = landmarks.map(lm =>
lm.x.toFixed(4) +
lm.y.toFixed(4) +
lm.z.toFixed(4)
).join('');
return djb2Hash(hashString).toString(16); // Convert to hexadecimal
}
Mathematical Representation:
\[ S = \text{concatenate}_{j=1}^{n} \left( \text{round}(L_j.x, 4) \cdot \text{round}(L_j.y, 4) \cdot \text{round}(L_j.z, 4) \right) \\ \text{FaceID} = \text{hex}(\text{djb2Hash}(S)) \]
Explanation:
- \(L_j\): The \(j\)-th facial landmark containing \(x\), \(y\), and \(z\) coordinates
- \(\text{round}(v, 4)\): Rounding the value \(v\) to 4 decimal places (equivalent to toFixed(4))
- \(S\): Concatenated string of all normalized landmark coordinates
- \(\text{djb2Hash}(S)\): Application of the hash function to the string \(S\)
- \(\text{hex}(v)\): Conversion of value \(v\) to hexadecimal representation (equivalent to toString(16))
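To see the FaceID pipeline end to end, here is a self-contained sketch run on synthetic landmarks. The real pipeline supplies a dense landmark mesh; the three-point array below is a toy stand-in with invented coordinates:

```javascript
function djb2Hash(str) {
  let hash = 5381;
  for (let i = 0; i < str.length; i++) {
    hash = ((hash * 33) ^ str.charCodeAt(i)) >>> 0;
  }
  return hash >>> 0;
}

function generateFaceID(landmarks) {
  // Normalize each coordinate to 4 decimals, then concatenate
  const hashString = landmarks
    .map(lm => lm.x.toFixed(4) + lm.y.toFixed(4) + lm.z.toFixed(4))
    .join('');
  return djb2Hash(hashString).toString(16); // hexadecimal FaceID
}

// Toy "face": three made-up landmarks instead of a full mesh
const face = [
  { x: 0.1234, y: 0.5678, z: 0.0001 },
  { x: 0.2345, y: 0.6789, z: 0.0002 },
  { x: 0.3456, y: 0.7890, z: 0.0003 },
];
// Nudge one coordinate: the ID should change
const nudged = face.map((lm, i) => (i === 0 ? { ...lm, x: lm.x + 0.01 } : lm));

console.log(generateFaceID(face));   // stable across calls for the same input
console.log(generateFaceID(nudged)); // differs once a coordinate moves
```

The toFixed(4) step matters: it quantizes small detection jitter so that near-identical frames map to the same string before hashing.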
3. Analyze Emotions (analyzeEmotions)
This function calculates the intensity of different emotions (happy, sad, angry, surprised, neutral) based on facial landmarks.
// analyzeEmotions: Calculates emotion intensities
function analyzeEmotions(landmarks) {
const emotions = {
happy: calculateHappiness(landmarks),
sad: calculateSadness(landmarks),
angry: calculateAnger(landmarks),
surprised: calculateSurprise(landmarks),
neutral: calculateNeutral(landmarks)
};
// Normalize to percentages; guard against a zero total to avoid NaN
const total = Object.values(emotions).reduce((a, b) => a + b, 0);
Object.keys(emotions).forEach(key => {
emotions[key] = total > 0 ? (emotions[key] / total * 100).toFixed(1) : "0.0";
});
return emotions;
}
Mathematical Representation:
\[ E_{raw} = \begin{pmatrix} E_{happy} \\ E_{sad} \\ E_{angry} \\ E_{surprised} \\ E_{neutral} \end{pmatrix} = \begin{pmatrix} \text{calculateHappiness}(L) \\ \text{calculateSadness}(L) \\ \text{calculateAnger}(L) \\ \text{calculateSurprise}(L) \\ \text{calculateNeutral}(L) \end{pmatrix} \]
\[ \Sigma = \sum_{i \in \{emotions\}} E_{raw, i} \]
\[ E_{normalized, i} = \text{round}\left(\frac{E_{raw, i}}{\Sigma} \times 100, 1\right) \quad \text{for each } i \in \{emotions\} \]
Explanation:
- \(L\): Set of facial landmarks
- \(E_{raw}\): Vector of raw emotion intensity values
- \(\Sigma\): Sum of all raw emotion intensities
- \(E_{normalized, i}\): Normalized percentage for emotion \(i\) (rounded to 1 decimal place)
- \(\text{round}(v, 1)\): Rounding the value \(v\) to 1 decimal place (equivalent to toFixed(1))
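The normalization step can be tried on its own with made-up raw intensities. The numbers below are invented stand-ins for the calculate*() helper outputs, not real sensor values:

```javascript
// Sketch of the normalization step alone: raw intensities in,
// percentage strings out (mirrors the analyzeEmotions logic).
function normalizeEmotions(raw) {
  const total = Object.values(raw).reduce((a, b) => a + b, 0);
  const out = {};
  for (const key of Object.keys(raw)) {
    // Guard against an all-zero frame to avoid dividing by zero
    out[key] = total > 0 ? (raw[key] / total * 100).toFixed(1) : "0.0";
  }
  return out;
}

// Invented raw scores summing to 1.0 for readability
const pct = normalizeEmotions({
  happy: 0.6, sad: 0.1, angry: 0.1, surprised: 0.1, neutral: 0.1,
});
console.log(pct); // percentages as strings, summing to ~100
```

Note that toFixed(1) returns strings, so downstream consumers that need numbers must parseFloat the values.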
4. Calculate Happiness (calculateHappiness)
This function calculates the intensity of happiness based on facial landmarks, specifically focusing on lip and cheek movements.
// calculateHappiness: Computes happiness intensity
function calculateHappiness(landmarks) {
// Guard first, before any landmark is dereferenced
if (!landmarks[61] || !landmarks[291] || !landmarks[123] || !landmarks[159]) {
return 0; // Return 0 if landmarks are missing
}
// AU12 - Lip Corner Puller
const lipCornerLeft = landmarks[61];
const lipCornerRight = landmarks[291];
const lipStretch = Math.hypot(
lipCornerRight.x - lipCornerLeft.x,
lipCornerRight.y - lipCornerLeft.y
);
// AU6 - Cheek Raiser
const cheekLeft = landmarks[123];
const eyeLeft = landmarks[159];
const cheekRaiseLeft = eyeLeft.y - cheekLeft.y;
return Math.min(1, lipStretch * 2 + cheekRaiseLeft * 3);
}
Mathematical Representation:
\[ D_{lip} = \sqrt{(L_{291}.x - L_{61}.x)^2 + (L_{291}.y - L_{61}.y)^2} \]
\[ D_{cheek} = L_{159}.y - L_{123}.y \]
\[ H = \min(1, 2D_{lip} + 3D_{cheek}) \]
Explanation:
- \(L_i\): Landmark at index \(i\) in the facial landmarks array
- \(D_{lip}\): Euclidean distance between left and right lip corners (Action Unit 12)
- \(D_{cheek}\): Vertical distance between cheek and eye (Action Unit 6)
- \(H\): Happiness intensity score, bounded to [0,1]
- Coefficients 2 and 3 are weighting factors to balance the influence of each feature
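To see the weighting and clamping in action, here is the function run on a sparse, hand-made landmark array. All coordinates are invented, and only the four indices the function reads are filled in:

```javascript
// Sketch: calculateHappiness on synthetic landmarks, with the
// missing-landmark guard placed before any landmark is read.
function calculateHappiness(landmarks) {
  if (!landmarks[61] || !landmarks[291] || !landmarks[123] || !landmarks[159]) {
    return 0; // missing landmarks: no score
  }
  const lipStretch = Math.hypot(
    landmarks[291].x - landmarks[61].x,
    landmarks[291].y - landmarks[61].y
  );
  const cheekRaise = landmarks[159].y - landmarks[123].y;
  return Math.min(1, lipStretch * 2 + cheekRaise * 3); // clamp to at most 1
}

// Sparse array: only the indices the function uses are populated
const landmarks = [];
landmarks[61]  = { x: 0.40, y: 0.70 }; // left lip corner (invented)
landmarks[291] = { x: 0.60, y: 0.70 }; // right lip corner (invented)
landmarks[123] = { x: 0.30, y: 0.45 }; // left cheek (invented)
landmarks[159] = { x: 0.35, y: 0.50 }; // left upper eyelid (invented)

console.log(calculateHappiness(landmarks)); // ~0.55: 2*0.2 + 3*0.05
console.log(calculateHappiness([]));        // 0: landmarks missing
```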
5. Calculate Sadness (calculateSadness)
This function calculates the intensity of sadness based on facial landmarks, focusing on lip depression and brow movement.
// calculateSadness: Computes sadness intensity
function calculateSadness(landmarks) {
// AU15 - Lip Corner Depressor
const lipCornerLeft = landmarks[61];
const lipBottom = landmarks[17];
const lipDepression = lipBottom.y - lipCornerLeft.y;
// AU1 - Inner Brow Raiser
const browInnerLeft = landmarks[105];
const browInnerRight = landmarks[334];
const browRaise = (browInnerLeft.y + browInnerRight.y) / 2;
return Math.min(1, lipDepression * 1.5 + browRaise * 2);
}
Mathematical Representation:
\[ D_{lip} = L_{17}.y - L_{61}.y \]
\[ D_{brow} = \frac{L_{105}.y + L_{334}.y}{2} \]
\[ S = \min(1, 1.5D_{lip} + 2D_{brow}) \]
Explanation:
- \(L_i\): Landmark at index \(i\) in the facial landmarks array
- \(D_{lip}\): Vertical distance between lip corner and bottom lip (Action Unit 15)
- \(D_{brow}\): Average height of inner brows (Action Unit 1)
- \(S\): Sadness intensity score, bounded to [0,1]
- Coefficients 1.5 and 2 are weighting factors to balance the influence of each feature
6. Calculate Anger (calculateAnger)
This function calculates the intensity of anger based on facial landmarks, focusing on brow lowering and lip compression.
// calculateAnger: Computes anger intensity
function calculateAnger(landmarks) {
// AU4 - Brow Lowerer
const browOuterLeft = landmarks[46];
const browInnerLeft = landmarks[105];
const browLowerLeft = browInnerLeft.y - browOuterLeft.y;
// AU23 - Lip Tightener
const lipTop = landmarks[0];
const lipBottom = landmarks[17];
const lipCompression = lipBottom.y - lipTop.y;
return Math.min(1, browLowerLeft * 2 + lipCompression * 1.2);
}
Mathematical Representation:
\[ D_{brow} = L_{105}.y - L_{46}.y \]
\[ D_{lip} = L_{17}.y - L_{0}.y \]
\[ A = \min(1, 2D_{brow} + 1.2D_{lip}) \]
Explanation:
- \(L_i\): Landmark at index \(i\) in the facial landmarks array
- \(D_{brow}\): Vertical distance between inner and outer brow (Action Unit 4)
- \(D_{lip}\): Vertical distance between top and bottom lip (Action Unit 23)
- \(A\): Anger intensity score, bounded to [0,1]
- Coefficients 2 and 1.2 are weighting factors to balance the influence of each feature
7. Calculate Surprise (calculateSurprise)
This function calculates the intensity of surprise based on facial landmarks, focusing on eye openness and jaw drop.
// calculateSurprise: Computes surprise intensity
function calculateSurprise(landmarks) {
// AU5 - Upper Lid Raiser
const eyelidLeft = landmarks[159];
const eyeLeft = landmarks[145];
const eyeOpenness = eyeLeft.y - eyelidLeft.y;
// AU26 - Jaw Drop
const chin = landmarks[152];
const nose = landmarks[4];
const jawDrop = chin.y - nose.y;
return Math.min(1, eyeOpenness * 2 + jawDrop * 0.8);
}
Mathematical Representation:
\[ D_{eye} = L_{145}.y - L_{159}.y \]
\[ D_{jaw} = L_{152}.y - L_{4}.y \]
\[ P = \min(1, 2D_{eye} + 0.8D_{jaw}) \]
Explanation:
- \(L_i\): Landmark at index \(i\) in the facial landmarks array
- \(D_{eye}\): Vertical distance between eyelid and eye (Action Unit 5)
- \(D_{jaw}\): Vertical distance between chin and nose (Action Unit 26)
- \(P\): Surprise intensity score, bounded to [0,1]
- Coefficients 2 and 0.8 are weighting factors to balance the influence of each feature
8. Calculate Neutral (calculateNeutral)
This function calculates the intensity of a neutral expression by measuring deviations from a neutral facial position.
// calculateNeutral: Computes neutral intensity
function calculateNeutral(landmarks) {
let deviation = 0;
const neutralFeatures = [
[152, 4], // Chin to nose
[61, 291], // Lip corners
[105, 334] // Brows
];
neutralFeatures.forEach(([i1, i2]) => {
deviation += Math.hypot(
landmarks[i1].x - landmarks[i2].x,
landmarks[i1].y - landmarks[i2].y
);
});
return Math.max(0, 1 - deviation * 2);
}
Mathematical Representation:
\[ \delta = \sum_{(i,j) \in F} \sqrt{(L_i.x - L_j.x)^2 + (L_i.y - L_j.y)^2} \]
where \(F = \{(152, 4), (61, 291), (105, 334)\}\) represents the set of feature pairs.
\[ N = \max(0, 1 - 2\delta) \]
Explanation:
- \(L_i\): Landmark at index \(i\) in the facial landmarks array
- \(F\): Set of feature pairs used to measure facial symmetry and stability
- \(\delta\): Total deviation from neutral positions, calculated as the sum of Euclidean distances between landmark pairs
- \(N\): Neutral expression intensity, bounded to [0,1]
- The coefficient 2 is a scaling factor to ensure appropriate range for the result
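The deviation sum can likewise be checked with hand-made coordinates. The values below are invented; only the six indices in the feature pairs are populated:

```javascript
// Sketch: calculateNeutral on a sparse synthetic landmark array.
function calculateNeutral(landmarks) {
  const neutralFeatures = [
    [152, 4],   // chin to nose
    [61, 291],  // lip corners
    [105, 334], // brows
  ];
  let deviation = 0;
  for (const [i1, i2] of neutralFeatures) {
    deviation += Math.hypot(
      landmarks[i1].x - landmarks[i2].x,
      landmarks[i1].y - landmarks[i2].y
    );
  }
  return Math.max(0, 1 - deviation * 2); // clamp to at least 0
}

const lm = [];
lm[152] = { x: 0.50, y: 0.80 }; lm[4]   = { x: 0.50, y: 0.60 }; // chin, nose
lm[61]  = { x: 0.45, y: 0.70 }; lm[291] = { x: 0.55, y: 0.70 }; // lip corners
lm[105] = { x: 0.45, y: 0.40 }; lm[334] = { x: 0.55, y: 0.40 }; // brows

console.log(calculateNeutral(lm)); // ~0.2: deviation 0.4 gives 1 - 0.8
```

Large deviations drive the score to exactly 0 via the Math.max clamp, so the result always stays in [0, 1].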
Code Explanation for Mental Health Monitoring System
Below is a detailed explanation of the code for the mental health monitoring system using ESP32 with GSR, MAX30102, and BH1750 sensors. Each function is explained in detail with relevant mathematical formulations:
1. Configuration and Initialization
This section sets up the WiFi connection, GitHub settings for data storage, and sensor initialization.
#include "WiFi.h"
#include "HTTPClient.h"
#include "ArduinoJson.h"
#include "Wire.h"
#include "MAX30105.h"
#include "heartRate.h"
#include "BH1750.h"
#include "Base64.h"
#include "time.h"
// WiFi Configuration
const char* ssid = "Brickhouse 1";
const char* password = "AVOLUTION";
// GitHub Configuration
const char* gsrUrl = "https://api.github.com/repos/4211421036/MentalHealth/contents/GSR.json";
const char* maxUrl = "https://api.github.com/repos/4211421036/MentalHealth/contents/MAX30102.json";
const char* bhUrl = "https://api.github.com/repos/4211421036/MentalHealth/contents/BH1750.json";
const char* token = "Bearer <YOUR_GITHUB_TOKEN>"; // Personal access token; keep real credentials out of source control
// Time Configuration
const char* ntpServer = "pool.ntp.org";
const long gmtOffset_sec = 25200; // GMT+7
const int daylightOffset_sec = 0;
// Sensor Objects
MAX30105 particleSensor;
BH1750 lightMeter;
// Pin Config
const int GSR_PIN = 34;
const int WINDOW_SIZE = 5; // For moving average GSR (Boucsein, 2012)
// Sensor Variables
float eda_buffer[WINDOW_SIZE] = {0};
int buffer_index = 0;
float beatsPerMinute, beatAvg;
byte rates[4];
byte rateSpot = 0;
long lastBeat = 0;
// Cached file SHAs for the GitHub update mechanism
String lastSHA_GSR = "", lastSHA_MAX = "", lastSHA_BH = "";
Explanation:
- The libraries used include WiFi connection, HTTP client, JSON parser, I2C communication, and specific libraries for the MAX30105 and BH1750 sensors.
- WiFi configuration for internet connection.
- GitHub API configuration for cloud data storage.
- Initialization of sensor objects and pin configuration.
- WINDOW_SIZE = 5 is used for the moving average filter for GSR based on Boucsein (2012) recommendations.
- Buffer variables to store and process sensor data.
2. Function getCurrentSHA
This function retrieves the SHA value of the file on GitHub for update purposes.
String getCurrentSHA(const char* url) {
HTTPClient http;
String sha = "";
http.begin(url);
http.addHeader("Authorization", token);
int httpCode = http.GET();
if (httpCode == HTTP_CODE_OK) {
DynamicJsonDocument doc(1024);
deserializeJson(doc, http.getString());
sha = doc["sha"].as<String>();
}
http.end();
return sha;
}
Process Flow:
\[ \text{getCurrentSHA}: \text{URL} \rightarrow \text{SHA} \]
Explanation:
- Establishes an HTTP connection to the GitHub API to obtain file metadata.
- Sends a GET request with authorization token.
- Extracts the SHA value from the JSON response.
- SHA is a unique hash used by GitHub for version control.
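The same SHA extraction can be sketched in JavaScript against a trimmed-down, hypothetical contents-API response (the file name and SHA below are made up for illustration):

```javascript
// Sketch: pull the "sha" field out of a GitHub contents-API response
// body, returning "" when it is absent, as getCurrentSHA does.
function extractSha(responseBody) {
  const doc = JSON.parse(responseBody);
  return typeof doc.sha === "string" ? doc.sha : "";
}

// Hypothetical, abbreviated response body
const body =
  '{"name":"GSR.json","path":"GSR.json","sha":"3d21ec53a331a6f037a91c368710b99387d012c1"}';
console.log(extractSha(body)); // the sha field
console.log(extractSha('{"name":"x"}')); // "" when sha is missing
```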
3. Function uploadToGitHub
This function uploads JSON data to GitHub with Base64 encoding.
void uploadToGitHub(DynamicJsonDocument doc, const char* url, String& lastSHA) {
HTTPClient http;
http.begin(url);
http.addHeader("Content-Type", "application/json");
http.addHeader("Authorization", token);
String jsonStr;
serializeJson(doc, jsonStr);
String encodedData = base64::encode(jsonStr);
String payload = "{\"message\":\"Update data\",\"content\":\"" + encodedData + "\",\"sha\":\"" + lastSHA + "\"}";
int httpCode = http.PUT(payload);
if (httpCode == HTTP_CODE_OK) {
DynamicJsonDocument respDoc(1024);
deserializeJson(respDoc, http.getString());
lastSHA = respDoc["content"]["sha"].as<String>();
}
http.end();
}
Process Flow:
\[ \text{JSON} \xrightarrow{\text{serialize}} \text{String} \xrightarrow{\text{Base64}} \text{EncodedString} \xrightarrow{\text{HTTP PUT}} \text{GitHub} \]
Explanation:
- Takes a JSON document, URL endpoint, and a reference to lastSHA.
- Converts the JSON document to a string.
- Encodes the string with Base64 (required by GitHub API).
- Creates a payload for the PUT request containing the encoded data and previous SHA.
- Updates lastSHA with the new SHA value from the GitHub response.
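The payload construction can be sketched in Node.js. One deliberate difference: this sketch builds the envelope with JSON.stringify, which also escapes embedded quotes, something the manual string concatenation in the C++ version does not do:

```javascript
// Sketch: serialize a sensor document, Base64-encode it, and wrap it
// in the commit envelope the GitHub contents API expects.
function buildGitHubPayload(doc, lastSHA) {
  const encoded = Buffer.from(JSON.stringify(doc)).toString("base64");
  return JSON.stringify({ message: "Update data", content: encoded, sha: lastSHA });
}

// Hypothetical sensor document and previous SHA
const payload = buildGitHubPayload({ sensor: "GSR", tonic: 512.2 }, "abc123");
const parsed = JSON.parse(payload);

// Round trip: decoding the content recovers the original document
console.log(Buffer.from(parsed.content, "base64").toString());
// {"sensor":"GSR","tonic":512.2}
```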
4. Function getTimestamp
This function retrieves the current timestamp from the NTP server.
String getTimestamp() {
struct tm timeinfo;
if(!getLocalTime(&timeinfo)) return "";
char buffer[20];
strftime(buffer, sizeof(buffer), "%Y-%m-%d %H:%M:%S", &timeinfo);
return String(buffer);
}
Output Format:
\[ \text{Timestamp} = \text{YYYY-MM-DD HH:MM:SS} \]
Explanation:
- Uses NTP (Network Time Protocol) to obtain accurate time.
- Converts time to a readable string format: year-month-day hour:minute:second.
- Important for providing consistent timestamps on sensor data.
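The same "YYYY-MM-DD HH:MM:SS" format can be produced in JavaScript, padding each field to two digits as strftime's %m, %d, %H, %M, and %S do:

```javascript
// Sketch: format a Date into the timestamp layout used by getTimestamp().
function formatTimestamp(date) {
  const p = n => String(n).padStart(2, "0"); // zero-pad to two digits
  return `${date.getFullYear()}-${p(date.getMonth() + 1)}-${p(date.getDate())} ` +
         `${p(date.getHours())}:${p(date.getMinutes())}:${p(date.getSeconds())}`;
}

// Local-time Date built from explicit components
console.log(formatTimestamp(new Date(2025, 0, 5, 9, 3, 7))); // 2025-01-05 09:03:07
```

Fixed-width timestamps sort lexicographically in time order, which is convenient when the JSON files are inspected later.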
5. Function edaMovingAverage
This function implements a moving average filter for EDA (Electrodermal Activity) signal based on Boucsein (2012) methodology.
// Moving average function for EDA tonic (Boucsein, 2012)
float edaMovingAverage(float newVal) {
eda_buffer[buffer_index] = newVal;
buffer_index = (buffer_index + 1) % WINDOW_SIZE;
float sum = 0;
for (int i = 0; i < WINDOW_SIZE; i++) sum += eda_buffer[i]; // Sum the circular buffer
return sum / WINDOW_SIZE;
}
Mathematical Formulation:
\[ B[j] = x_{\text{new}} \]
\[ j = (j + 1) \mod W \]
\[ \bar{x} = \frac{1}{W} \sum_{i=0}^{W-1} B[i] \]
Explanation:
- \(B\): Circular buffer to store the latest EDA values.
- \(j\): Current buffer index.
- \(W\): Window size (WINDOW_SIZE = 5).
- \(x_{\text{new}}\): New EDA value.
- \(\bar{x}\): Average EDA value (EDA tonic).
- This method, based on Boucsein (2012), helps separate the tonic (baseline) component from the EDA signal.
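The circular-buffer average translates directly to JavaScript. Holding the buffer and index in a closure rather than globals is a design choice of this sketch, not of the original firmware:

```javascript
// Sketch: windowed moving average over a circular buffer, mirroring
// edaMovingAverage with WINDOW_SIZE samples.
function makeMovingAverage(windowSize) {
  const buffer = new Array(windowSize).fill(0); // zero-initialized, like the C array
  let index = 0;
  return function (newVal) {
    buffer[index] = newVal;                 // overwrite oldest slot
    index = (index + 1) % windowSize;       // advance circularly
    return buffer.reduce((a, b) => a + b, 0) / windowSize;
  };
}

const avg = makeMovingAverage(5);
[10, 10, 10, 10, 10].forEach(v => console.log(avg(v)));
// 2, 4, 6, 8, 10: the average ramps up while the zero-initialized
// buffer fills, then tracks the signal.
```

The warm-up ramp means the first WINDOW_SIZE tonic readings are biased low; a real deployment may wish to discard them.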
6. Function setup
This function initializes the connection and sensors when the system is powered on.
void setup() {
Serial.begin(115200);
WiFi.begin(ssid, password);
int wifiTimeout = 0;
while (WiFi.status() != WL_CONNECTED && wifiTimeout < 20) { // 20 x 500 ms = 10 s timeout
delay(500);
Serial.print(".");
wifiTimeout++;
}
if (WiFi.status() == WL_CONNECTED) {
Serial.println("\nConnected to WiFi");
configTime(gmtOffset_sec, daylightOffset_sec, ntpServer);
} else {
Serial.println("\nFailed to connect WiFi");
ESP.restart();
}
// Initialize Sensors
Wire.begin();
if (!particleSensor.begin(Wire, I2C_SPEED_FAST)) {
Serial.println("MAX30102 not found");
}
particleSensor.setup();
lightMeter.begin();
}
Process Flow:
\[ \text{Initialize Serial} \rightarrow \text{WiFi Connection} \rightarrow \text{NTP Configuration} \rightarrow \text{Sensor Initialization} \]
Explanation:
- Initializes serial communication with a baud rate of 115200.
- Attempts to connect to WiFi with a timeout of 10 seconds (20 x 500ms).
- If the connection fails, the system will restart.
- Initializes the I2C bus (Wire) for communication with the sensors.
- Configures the MAX30102 and BH1750 sensors.
- Sets up NTP for time synchronization.
7. Function loop
The main function that runs repeatedly to read sensors, process data, and upload it to GitHub.
void loop() {
if (WiFi.status() != WL_CONNECTED) {
Serial.println("WiFi disconnected. Reconnecting...");
WiFi.reconnect();
delay(5000);
return;
}
static unsigned long lastUpload = 0;
// Read all sensors
float gsrRaw = analogRead(GSR_PIN);
float edaTonic = edaMovingAverage(gsrRaw);
float edaPhasic = gsrRaw - edaTonic; // Boucsein (2012)
long irValue = particleSensor.getIR();
if(irValue > 50000 && checkForBeat(irValue)) { // Shaffer et al. (2014)
long delta = millis() - lastBeat;
lastBeat = millis();
beatsPerMinute = 60 / (delta / 1000.0);
if(beatsPerMinute > 20 && beatsPerMinute < 255) { // Physiological range
rates[rateSpot++] = (byte)beatsPerMinute;
rateSpot %= 4;
beatAvg = 0;
for(byte x = 0; x < 4; x++) beatAvg += rates[x];
beatAvg /= 4;
}
}
float lux = lightMeter.readLightLevel(); // Golden et al. (2005)
if(millis() - lastUpload >= 10000) { // Upload every 10 seconds
String timestamp = getTimestamp();
// Create JSON document for each sensor
DynamicJsonDocument gsrDoc(256);
gsrDoc["sensor"] = "GSR";
gsrDoc["tonic"] = edaTonic;
gsrDoc["phasic"] = edaPhasic;
gsrDoc["timestamp"] = timestamp;
DynamicJsonDocument maxDoc(256);
maxDoc["sensor"] = "MAX30102";
maxDoc["bpm"] = beatsPerMinute;
maxDoc["hrv"] = beatAvg; // SDNN calculated offline
maxDoc["timestamp"] = timestamp;
DynamicJsonDocument bhDoc(256);
bhDoc["sensor"] = "BH1750";
bhDoc["lux"] = lux;
bhDoc["timestamp"] = timestamp;
// Upload to GitHub
if(lastSHA_GSR == "") lastSHA_GSR = getCurrentSHA(gsrUrl);
uploadToGitHub(gsrDoc, gsrUrl, lastSHA_GSR);
if(lastSHA_MAX == "") lastSHA_MAX = getCurrentSHA(maxUrl);
uploadToGitHub(maxDoc, maxUrl, lastSHA_MAX);
if(lastSHA_BH == "") lastSHA_BH = getCurrentSHA(bhUrl);
uploadToGitHub(bhDoc, bhUrl, lastSHA_BH);
lastUpload = millis();
}
}
Mathematical Formulation:
Processing GSR (Galvanic Skin Response)
\[ \text{EDA}_{\text{tonic}} = \text{edaMovingAverage}(\text{gsrRaw}) \]
\[ \text{EDA}_{\text{phasic}} = \text{gsrRaw} - \text{EDA}_{\text{tonic}} \]
Processing Heart Rate
\[ \Delta t = t_{\text{current}} - t_{\text{lastBeat}} \]
\[ \text{BPM} = \frac{60}{\Delta t / 1000} \]
\[ \text{beatAvg} = \frac{1}{4} \sum_{i=0}^{3} \text{rates}[i] \]
Comprehensive Explanation:
- Connection Monitoring: Checks WiFi status and reconnects if disconnected.
- Reading GSR (Galvanic Skin Response):
- Reads raw value from the analog pin (GSR_PIN).
- Calculates the tonic (baseline) component using moving average.
- Calculates the phasic component as the difference between raw and tonic (Boucsein, 2012).
- Reading Heart Rate (MAX30102):
- Reads IR value from the MAX30102 sensor.
- Detects beats with a threshold of 50000 (Shaffer et al., 2014).
- Calculates BPM from the interval between beats.
- Applies averaging filter for the last 4 readings.
- Validates BPM within the physiological range of 20-255.
- Reading Light Level (BH1750):
- Reads lux value from the BH1750 sensor (Golden et al., 2005).
- Upload Period:
- Uses a timer to upload data every 10 seconds.
- Creates JSON documents for each sensor with timestamps.
- Uploads data to the GitHub repository with SHA update mechanism.
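The beat-interval arithmetic above can be checked with plain numbers. This JavaScript sketch mirrors the BPM and four-sample averaging formulas from the loop:

```javascript
// Sketch: inter-beat interval (ms) to BPM, as in the loop body.
function bpmFromInterval(deltaMs) {
  return 60 / (deltaMs / 1000); // 60 seconds per minute
}

// Sketch: average of the last four readings, as the rates[] buffer does.
function averageLastFour(rates) {
  return rates.slice(-4).reduce((a, b) => a + b, 0) / 4;
}

console.log(bpmFromInterval(1000)); // 60: one beat per second
console.log(bpmFromInterval(750));  // 80
console.log(averageLastFour([72, 75, 80, 77])); // 76
```

The averaging smooths beat-to-beat jitter but is not a standard HRV metric; as the code comments note, SDNN is calculated offline.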
8. GSR Component Analysis
The Galvanic Skin Response (GSR) or Electrodermal Activity (EDA) is processed according to Boucsein (2012) methodology.
Complete Mathematical Formulation:
\[ \text{EDA}_{\text{raw}}(t) = \text{analogRead}(\text{GSR\_PIN}) \]
\[ \text{EDA}_{\text{tonic}}(t) = \frac{1}{W} \sum_{i=0}^{W-1} \text{EDA}_{\text{buffer}}[i] \]
\[ \text{EDA}_{\text{phasic}}(t) = \text{EDA}_{\text{raw}}(t) - \text{EDA}_{\text{tonic}}(t) \]
Explanation:
- EDAraw: Raw EDA signal from the sensor, measuring skin conductivity.
- EDAtonic: Tonic component of EDA, representing long-term skin conductivity levels.
- EDAphasic: Phasic component of EDA, representing short-term skin conductivity responses.
- W: Window size for moving average (5 samples).
- This decomposition method aligns with Boucsein (2012) for stress and emotional response analysis.
9. Heart Rate Analysis
Heart rate is measured using the MAX30102 sensor following the methodology from Shaffer et al. (2014).
Complete Mathematical Formulation:
\[ \text{IR}(t) = \text{particleSensor.getIR()} \]
\[ \text{Beat detection} = \begin{cases} 1, & \text{if } \text{IR}(t) > 50000 \text{ and } \text{checkForBeat}(\text{IR}(t)) \\ 0, & \text{otherwise} \end{cases} \]
\[ \Delta t = t_{\text{current}} - t_{\text{lastBeat}} \]
\[ \text{BPM} = \frac{60 \times 1000}{\Delta t} \]
\[ \text{BPM}_{\text{valid}} = \begin{cases} \text{BPM}, & \text{if } 20 < \text{BPM} < 255 \\ \text{discarded}, & \text{otherwise} \end{cases} \]
\[ \text{HRV}_{\text{avg}} = \frac{1}{4} \sum_{i=0}^{3} \text{rates}[i] \]
Explanation:
- IR(t): Infrared value read from the MAX30102 sensor.
- Beat detection: Function to detect heartbeats.
- Δt: Time interval between heartbeats (ms).
- BPM: Beats Per Minute, calculated from the interval between beats.
- BPMvalid: Validated BPM within physiological range (20-255).
- HRVavg: Heart Rate Variability, calculated as the average of the last 4 measurements.
- Threshold of 50000 for beat detection refers to Shaffer et al. (2014).
10. Light Level Analysis
Light levels are measured using the BH1750 sensor, referencing Golden et al. (2005) regarding the impact of light on mental health.
Formulation:
\[ \text{Lux}(t) = \text{lightMeter.readLightLevel()} \]
Explanation:
- Lux(t): Light level measured in lux.
- Measurement of light levels is crucial in mental health analysis based on research by Golden et al. (2005).
- Light levels correlate with circadian rhythms and melatonin production.
11. Data Upload Mechanism
Sensor data is uploaded to GitHub as a cloud storage system with SHA mechanism for version control.
Process Flow:
\[ \text{Sensor Data} \rightarrow \text{JSON Documents} \rightarrow \text{Base64 Encoding} \rightarrow \text{HTTP PUT} \rightarrow \text{GitHub} \]
Explanation:
- Upload Interval: Data is uploaded every 10 seconds (10000 ms).
- Data Format: JSON structure with sensor values and timestamps.
- SHA Mechanism: SHA values are used to ensure data integrity and conflict handling.
- Base64 Encoding: JSON data is encoded with Base64 as per GitHub API requirements.
- This system allows for longitudinal data recording for long-term mental health trend analysis.
12. Scientific References
This code implementation refers to several scientific references:
- Boucsein, W. (2012): "Electrodermal Activity" - Standard reference for processing GSR/EDA signals, including tonic and phasic component separation methods.
- Shaffer et al. (2014): "A healthy heart is not a metronome: an integrative review of the heart's anatomy and heart rate variability" - Guidelines for processing and interpreting heart rate variability data.
- Golden et al. (2005): "The efficacy of light therapy in the treatment of mood disorders: a review and meta-analysis of the evidence" - Research on the impact of light levels on mood and mental health.