The SensorLab Seed Grant program was launched in 2021 to catalyze innovative, interdisciplinary research by providing early-stage funding for projects that integrate sensors, data science, and emerging technologies. Since then, the program has funded 26 projects, reaching nearly every corner of the university and fostering a culture of interdisciplinary collaboration.

The funded portfolio spans health, equity, engineering, human performance, and the arts. Projects have ranged from immersive simulation environments, wearable-based monitoring, and robotics to digital phenotyping for mental health, AI-driven wound care, VR navigation, climate resilience, and surgical training in XR.

The program's pilot awards have helped investigators generate preliminary data, establish collaborations, and secure external funding. By 2025, the Seed Grants had become an established engine of institutional innovation, supporting projects as varied as menstrual health prediction, perinatal depression management, sports medicine, the arts, and skill assessment in surgical training, demonstrating the program's role in advancing both scientific discovery and community impact.

2025 - Funded Seed Grants

Uncovering Dysmenorrhea Triggers: Integrating Sensor Data, Ecological Momentary Assessment, and Machine Learning

PIs: Chen Xiao Chen; Li, Ao

Dysmenorrhea, or recurrent menstrual pain, affects 45-95% of reproductive-age women, significantly impacting daily life and contributing to school and workplace absenteeism. Current treatments, including NSAIDs and hormonal contraception, often have limitations and inconsistent effectiveness, and little is understood about how daily factors such as stress, sleep disturbance, and physical activity influence pain.

This proposal outlines a longitudinal pilot study that will combine continuous sensor data from a Garmin Venu 3S smartwatch (monitoring stress, sleep, and activity) with twice-daily assessments of perceived stress, sleep quality, and menstrual pain in 30 women over three cycles. Our aim is to (i) explore the relationships between these factors and pain and (ii) identify patterns that predict future dysmenorrhea episodes. This research will support the development of timely, personalized interventions to alleviate menstrual pain and enhance the overall well-being of millions of women globally.
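As a sketch of aim (ii), the twice-daily assessments and daily sensor summaries can be aligned into lagged feature rows suitable for a predictive model. The feature names, lag windows, and pain threshold below are illustrative assumptions, not the study protocol:

```python
# Hypothetical sketch: align daily wearable summaries with EMA pain ratings
# and build lagged feature rows for predicting next-day dysmenorrhea episodes.
# Keys and the pain cutoff are illustrative, not from the study protocol.

def lagged_features(days, lags=(1, 2, 3)):
    """Build per-day feature rows from the previous `lags` days of data.

    `days` is a chronological list of dicts with keys
    'stress', 'sleep_quality', 'activity', and 'pain' (0-10 EMA rating).
    Returns (features, label) pairs; label marks moderate-or-worse pain.
    """
    rows = []
    for i in range(max(lags), len(days)):
        feats = {}
        for lag in lags:
            prev = days[i - lag]
            feats[f"stress_lag{lag}"] = prev["stress"]
            feats[f"sleep_lag{lag}"] = prev["sleep_quality"]
            feats[f"activity_lag{lag}"] = prev["activity"]
        rows.append((feats, days[i]["pain"] >= 4))  # assumed cutoff
    return rows
```

Rows produced this way could feed any standard classifier; the point is the temporal alignment of sensor history with the next EMA pain report.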

Digital Phenotyping with Active and Passive Monitoring Approaches for Predicting and Managing Perinatal Depression Using Machine Learning

PI: Sudha Ram

This project will pioneer digital phenotyping methods to predict and manage perinatal depression by comparing three monitoring protocols—active (self-report via the mindLAMP app), passive (wearable sensors only), and a hybrid approach—within the same cohort of new and expectant mothers. Leveraging phone-based digital biomarkers (calls/text patterns, screen time, location, device motion) alongside Apple and Samsung Watch data (heart rate, accelerometry, gyroscope, ambient light, pressure), the study will build and test a complete data-ingestion pipeline, integrate these signals with routine clinical assessments via recurrent and causal machine-learning models, and establish groundwork for scalable, personalized screening tools.
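One early stage of the proposed data-ingestion pipeline, sketched under simple assumptions, is normalizing heterogeneous device payloads into a common schema before modeling. The raw field names (`heartRate`, `hr_bpm`) are hypothetical placeholders, not the actual Apple or Samsung APIs:

```python
# Illustrative sketch of one ingestion-pipeline stage: mapping heterogeneous
# wearable records onto a common schema. Raw field names are assumptions.
from dataclasses import dataclass

@dataclass
class Sample:
    subject_id: str
    source: str       # e.g. "apple_watch", "samsung_watch", "phone"
    signal: str       # e.g. "heart_rate", "accel_x"
    timestamp: float  # Unix epoch seconds
    value: float

def normalize(record: dict) -> Sample:
    """Map a raw device record to the common Sample schema."""
    if "heartRate" in record:   # assumed Apple-style payload
        return Sample(record["user"], "apple_watch", "heart_rate",
                      record["ts"], float(record["heartRate"]))
    if "hr_bpm" in record:      # assumed Samsung-style payload
        return Sample(record["user"], "samsung_watch", "heart_rate",
                      record["ts"], float(record["hr_bpm"]))
    raise ValueError(f"unrecognized record: {record}")
```

Once all sources share one schema, the downstream recurrent and causal models can consume a single time-indexed stream per subject.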

Enhancing Surgical Skill Assessment Using Eye-Gaze Tracking in 2D vs 3D XR Environments

PI: Ehsan Azimi

This pilot project will develop and evaluate an eye-gaze–guided extended reality (XR) training platform for basic minimally invasive surgery (MIS) skills, comparing traditional 2D monitor displays with immersive 3D stereoscopic views via HoloLens 2. By instrumenting both setups with eye-tracking and motion-capture (e.g., IMUs on instruments, OptiTrack cameras), we will record trainee performance (task time, error rates, motion smoothness), gaze metrics (fixation duration, saccade patterns, pupil dilation), and subjective/physiological workload (NASA-TLX, heart-rate variability). We hypothesize that 3D XR will yield faster, more accurate performance and lower cognitive load, and that gaze behavior will reflect more expert-like attentional strategies in the stereo condition.

2024 - Funded Seed Grants

Toward a Comprehensive Model of Hispanic/Latino(x) Cardiovascular Health: Mechanistic Linkages between Social Network Dynamics and Night-time Blood Pressure Variability

PI: Flores, Melissa

The proposed research project aims to model cardiovascular health in Hispanic/Latino(x) populations by investigating the mechanistic linkages between social network dynamics and night-time blood pressure variability (NBPV). This study integrates resources from the University of Arizona's (UA) SensorLab, such as ambulatory blood pressure monitors and ecological momentary assessment data collection, into ongoing collaborative research by the Health Equity Analytics Laboratory (HEAL) and the Nosotros Comprometidos a Su Salud Research Group. Leveraging advanced SensorLab technology, our collaborative research team intends to pursue an R01 submission through the National Institute on Minority Health and Health Disparities (NIMHD) under the "Research With Activities Related to Diversity (ReWARD)" program.

Advancing Diabetic Foot Ulcer Management Through Non-Invasive Multimodal Sensing and AI-Driven Predictive Modeling

PI: Kellen Chen, Mohammad S. Majdi, Jeffrey J. Rodriguez, Geoffrey Gurtner

Diabetic foot ulcers (DFUs) significantly burden patients and healthcare systems worldwide. Current care relies on intermittent clinical assessments, often delaying timely intervention. This project aims to revolutionize DFU management by integrating advanced sensor technology and AI. Utilizing resources from SensorLab, we will develop a non-invasive, multimodal sensing system to monitor biomarkers like wound size, temperature, pH, and impedance. Preliminary studies at Stanford University have shown promising results but highlight the need for standardized protocols. This research focuses on enhancing DFU monitoring, particularly for underrepresented groups such as Hispanic and Native American populations.

Enhancing Environmental and Energy Equity: Advanced Indoor Heat Stress Sensing for Diverse Populations

PI: Jung, Wooyoung

The project aims to enhance environmental and energy equity by developing new sensing technology for assessing indoor heat stress, particularly in disadvantaged communities affected by extreme heat due to climate change. Current building codes often prioritize energy efficiency over thermal resilience, neglecting individual differences. The research seeks to understand heat stress vulnerability among populations and identify effective technologies for assessment. The team will build on a study from the Human Building Synergy (HUBS) lab, involving 19 subjects and three additional biosensors funded by the SensorLab Seed Grants. The goal is to improve building resilience during extreme heat events and better address the needs of disadvantaged communities.

Establishing Foundations for Fall Prevention

PI: Jonathan S. Lee-Confer

The project aims to develop a new research direction focused on quantifying human body movement during slip incidents and enhancing floor safety using innovative sensor technologies. Short-term goals include feasibility studies, establishing foundational research, mentoring undergraduate students, and publishing findings. Long-term objectives involve securing external funding, collaborating with grant-funded faculty, and obtaining R03 grants as a precursor to R01 applications. Studies 1 and 2 address falls in older adults, noting that reactive arm movements can reduce fall risk by 70% during slips. Researchers will analyze arm movements using SensorLab's walkway to induce unexpected slips in younger participants. The LiteGait Harness System will be utilized, and non-invasive Electromyography (EMG) will track muscle onset timing during slip initiation, while Inertial Measurement Units (IMUs) will measure deviations in trunk movement compared to EMG activation of the deltoids.
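The EMG onset timing mentioned above is commonly computed with a baseline-threshold rule (after Hodges & Bui, 1996): onset is the first sample where the rectified, smoothed signal exceeds the baseline mean by k standard deviations for a minimum duration. The window sizes and k below are illustrative defaults, not the study's actual settings:

```python
# Baseline-threshold EMG onset detection (Hodges & Bui style).
# Defaults (0.5 s baseline, k=3, 25 ms minimum duration) are illustrative.

def emg_onset(signal, fs, baseline_s=0.5, k=3.0, min_dur_s=0.025):
    """Return the onset index in `signal` (rectified EMG), or None.

    `fs` is the sampling rate in Hz. The first `baseline_s` seconds are
    assumed quiescent and define the detection threshold.
    """
    n_base = int(baseline_s * fs)
    base = signal[:n_base]
    mean = sum(base) / n_base
    var = sum((x - mean) ** 2 for x in base) / n_base
    thresh = mean + k * var ** 0.5
    min_run = max(1, int(min_dur_s * fs))
    run = 0
    for i, x in enumerate(signal[n_base:], start=n_base):
        run = run + 1 if x > thresh else 0
        if run >= min_run:
            return i - min_run + 1  # first sample of the supra-threshold run
    return None
```

Comparing this onset index against the IMU-derived trunk deviation timestamp gives the EMG-to-movement latency the study describes.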

2023 - Funded Seed Grants

VR Navigation for Young Adults

PI: Arne Ekstrom

Spatial navigation is crucial for everyday functioning, but neurodegenerative conditions like Alzheimer’s and Parkinson’s diseases disrupt this ability, negatively impacting quality of life. These conditions affect the integration of visual cues, body movement, and memory that are essential for navigation. The project aims to create quantitative models of spatial navigation to explore how aging and neurodegenerative diseases influence path integration and cue combination processes. Previous studies often overlooked body movement in VR tasks, leading to a gap in understanding the computational mechanisms behind navigation deficits. The proposal intends to utilize immersive and mobile VR to address these gaps by focusing on one-dimensional navigation tasks. It is hypothesized that navigation is achieved through a combination of Path Integration and Landmark Navigation systems that interact in a hybrid manner, depending on the accuracy of each system.

Local/remote real-time sensing of transepithelial/endothelial electrical resistance (TEER) and temperature of organ-on-chip systems for high-resolution evidence-based in vitro modeling

PI: Gu, Jian

Organ-on-chip (OoC) systems are microfabricated cell culture platforms engineered to recapitulate key aspects of a human organ or tissue (1). They are designed to be more physiologically relevant than traditional 2D cultures and are therefore expected to have greater predictive value in disease modeling, drug development, and precision medicine. With the recent FDA Modernization Act 2.0 eliminating the requirement for animal drug testing before human trials, OoC models will play a more significant role in future drug development. In this project, we plan to leverage the expertise of the SensorLab to develop local/remote real-time TEER-Temp monitoring capability. We will use the BAFUS BBBD OoC platform for the development of the technology. The BAFUS BBBD OoC platform models a complex BBBD process using bubbles and focused ultrasound (FUS) to control and monitor delivery of drugs across the BBB, which has been a challenge for assessing promising therapeutics to treat neurological disorders (Alzheimer's, Parkinson's, brain tumors, etc.) for decades (7, 8). Currently, there is still no good in vitro model available to assist drug development with this technology, partly because the traditional in vitro cell culture setup can cause huge US energy uncertainties (up to 700%) (9). Recently, we developed a US-transparent OoC device that allows accurate US energy transfer to the in vitro cells (patent filing in progress with TLA). Our goal is to develop a predictive OoC BAFUS BBBD model that can be used by industry for drug delivery testing, where TEER-Temp sensing is a critical component of the system.

Smart Assessment of the Relationship between the Environment and Cardiorespiratory Responses.

PI: Chris Chaeha Lim

Adverse health effects of air pollution have been well documented, linking exposure to asthma incidence, symptom severity, decreased lung function, and airway inflammation (Garcia et al., 2019). Traditional studies primarily use central air monitors at residential addresses, limiting understanding of the spatial and temporal variations in pollution, especially in urban areas. While personal monitoring can provide more accurate exposure data, it is often limited by high costs and participant self-reporting, which can lead to missing data and bias. However, the rise of smartphone apps and wearable sensors presents new opportunities to collect high-quality, real-time data on air pollution and health impacts. This proposed study aims to measure real-time exposures related to asthma, allowing for more precise estimates of health effects and a better understanding of environmental risks. The findings will support future research efforts and grant applications to expand the study's scope, recruit more participants, and test interventions, such as air purifiers and dietary changes, to alleviate symptoms.

Mechanisms Driving Unhealthy Food Choices During Nocturnal Wakefulness

PI: Grandner, Michael Andrew

Nocturnal wakefulness when sleep pressure is high is linked to deficits in mood and cognition, potentially leading to maladaptive behaviors such as increased impulsivity and dysregulated mood, as proposed in the "Mind After Midnight" hypothesis. Research indicates that the risk of suicide and homicide rises between 2 and 4 am. We aim to explore how this phenomenon relates to cardiovascular disease (CVD) risk through circadian influences on eating behavior. Our study will involve healthy (lean) and obese young adults (n=20 total) tracking their sleep and eating habits over two weeks. Participants will undergo partial sleep restriction in the lab and complete neurocognitive assessments. Meals will be provided, and snacks can be chosen freely to assess nighttime eating behaviors. Our goal is to understand how cognitive and emotional disturbances at night interact with dysregulated feeding in both lean and obese individuals.

Meditation Analysis with Astroskin and MRI

PI: John J.B. Allen

Meditation has been extensively researched for its beneficial effects (Feruglio et al., 2021; Lomas et al., 2015; Wielgosz et al., 2019). Technological enhancements for meditation include neurofeedback (Brandmeyer & Delorme, 2013), psychedelics (Millière et al., 2018), virtual reality (Górska et al., 2020), and transcranial direct current stimulation (Badran et al., 2017). Our lab has found that transcranial focused ultrasound (tFUS) targeted at the posterior cingulate cortex can modulate default mode network connectivity to increase mindfulness (paper forthcoming). We are currently conducting two studies on tFUS and meditation: one with novice meditators to enhance their training, and another with experienced meditators to investigate how different ultrasound parameters affect results. Given that meditation can influence physiological factors like cortisol (Sudsuang et al., 1991) and blood pressure (Shi et al., 2017), we hypothesize that tFUS may also produce somatic effects. The advent of affordable body sensors (Hao et al., 2017; Rodriguez et al., 2018) allows for more research in this area. We aim to incorporate somatic measurements from the Astroskin device to further explore changes in meditators.

2022 - Funded Seed Grants

SensorPod - A template for connecting custom portable investigator sensor hardware to university data ingestion pipeline.

PI: Gutruf, Philipp

The creation of custom wearable systems is critical to translating fundamental, applied, and clinical research into broad impact in digital medicine. Research groups often have specialized expertise in analysis, sensing, or physiology, and need either collaborators or an easy-to-implement template to create a practical, portable system for studies of interest. A frequent bottleneck is implementing data capture on the portable device along with a pipeline to stream data to centralized, well-managed storage that complies with the privacy standards of clinical trials. This can be a daunting task for investigators and often a barrier to larger trials; it is typically resolved only with external expertise and produces insular solutions that must be duplicated for each new project. In this project, we propose to create a broadly applicable template for a portable platform that can capture custom sensor data and relay it into the existing university data ingestion pipeline, allowing custom sensor solutions to be rapidly tested in new clinical paradigms and treatment schemes.
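A minimal sketch of the device-side half of such a template, assuming samples are buffered locally and flushed as JSON batches to the ingestion pipeline; the payload fields, batch size, and upload convention are hypothetical:

```python
# Hypothetical device-side buffering for a SensorPod-style template:
# samples accumulate locally and are serialized as JSON batches that the
# caller would POST to the ingestion endpoint. Field names are assumptions.
import json

class SampleBuffer:
    def __init__(self, device_id, batch_size=100):
        self.device_id = device_id
        self.batch_size = batch_size
        self._buf = []

    def add(self, timestamp, channel, value):
        """Queue one sample; return a serialized batch when full, else None."""
        self._buf.append({"t": timestamp, "ch": channel, "v": value})
        if len(self._buf) >= self.batch_size:
            return self.flush()
        return None

    def flush(self):
        """Serialize and clear the buffer; caller uploads this payload."""
        payload = json.dumps({"device": self.device_id, "samples": self._buf})
        self._buf = []
        return payload
```

Keeping serialization separate from transport lets the same template back any uplink (Wi-Fi, BLE gateway, cellular) without changing the capture code.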

Optimizing sports performance and injury prevention through sensor technology.

PI: Dr. Mark Sakr, Dr. Shravan Aras, Dr. John Konhilas

In Athletic Medicine, optimizing sports performance is crucial, especially with the rise of wearable technologies that provide vast data streams for athletes, trainers, and sports physicians. There’s a growing need for systems that validate the correlation between these data and physiological measures. Heart-rate-derived indices, sleep quality, and athlete load are emerging as key factors in predicting performance, injury, and illness among elite athletes, yet there’s limited understanding of recovery quantification using these technologies, particularly in team sports.

The autonomic nervous system, which manages stress responses, can be assessed through heart rate variability (HRV). Monitoring HRV fluctuations may predict training adaptation and performance, with reduced variability often linked to fatigue and poor training responses. Sleep is another critical recovery factor; insufficient sleep duration can negatively affect athletic performance by impairing cognitive and metabolic functions. Research has shown that during intense training seasons—like collegiate football—sleep quality significantly influences recovery and muscle growth.
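The HRV indices discussed above are computed directly from RR intervals (milliseconds between successive heartbeats). RMSSD and SDNN are standard definitions, shown here as a minimal sketch:

```python
# Standard time-domain HRV indices from RR intervals (ms).
# RMSSD reflects short-term (parasympathetic) variability; SDNN reflects
# overall variability across the recording.
import math

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def sdnn(rr_ms):
    """Standard deviation of RR intervals."""
    mean = sum(rr_ms) / len(rr_ms)
    return math.sqrt(sum((r - mean) ** 2 for r in rr_ms) / len(rr_ms))
```

In practice, a reduced morning RMSSD relative to an athlete's rolling baseline is the kind of signal used to flag fatigue or poor training response.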

Improving CO2 as a Respiratory Viral Transmission Risk Indicator.

PI: Amanda Wilson

Microbial indoor air quality (IAQ) has gained importance during the COVID-19 pandemic, as SARS-CoV-2 and other respiratory viruses can spread through virus-laden aerosols. Measuring pathogens directly is expensive and time-consuming, so CO2 concentration is used as a real-time indicator of virus transmission risk, influenced by fresh air ventilation and occupancy levels. Higher CO2 levels indicate less fresh air or higher occupancy. Studies in hospitals show a significant link between CO2 and airborne aerobic colony count (ACC). However, data on CO2 as an indicator in non-healthcare settings (like classrooms and offices) is limited and varies with human behavior.

To improve CO2 as a reliable risk predictor for respiratory viral disease, I plan to explore the relationship between CO2 concentration and human behaviors such as entrance/exit patterns and overall occupancy. The research will involve three short-term milestones (STMs):

1. Quantify the relationship between door openings/closings and CO2 levels in five public environments (classrooms, offices, elevators, cafeterias, buses/trams) in Tucson, AZ.
2. Examine how human behavior (foot traffic) impacts CO2 concentration variability in the same environments.
3. Compare respiratory viral transmission risks across these environments using CO2 as a model input.
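As one illustration of STM 3, the Rudnick-Milton (2003) rebreathed-air formulation is an established way to turn CO2 readings into a Wells-Riley-style risk input. This is a sketch of that published model, not necessarily the model the project will adopt, and the default CO2 concentrations are assumptions:

```python
# Rudnick-Milton rebreathed-air sketch: CO2 excess above outdoor levels
# estimates the fraction of inhaled air already exhaled by occupants, which
# feeds a Wells-Riley-style infection probability. Defaults are assumptions.
import math

def rebreathed_fraction(co2_indoor_ppm, co2_outdoor_ppm=420,
                        co2_exhaled_ppm=38000):
    """Fraction of inhaled air that was exhaled by room occupants."""
    return (co2_indoor_ppm - co2_outdoor_ppm) / co2_exhaled_ppm

def infection_probability(f, n_occupants, n_infectors, quanta_per_hour,
                          t_hours):
    """Wells-Riley-style probability of infection via shared rebreathed air."""
    return 1.0 - math.exp(-f * n_infectors * quanta_per_hour * t_hours
                          / n_occupants)
```

Comparing this probability across the five monitored environments, with behavior-driven CO2 profiles as input, is exactly the kind of comparison STM 3 describes.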

Development of pressure device for carpal arch space augmentation.

PI: Li, Zong-Ming

During an investigation of the transverse carpal ligament (R03AR054510, 2007-2010), we identified a novel biomechanical mechanism, carpal arch space augmentation (CASA), for decompression of the median nerve in carpal tunnel syndrome (CTS). This approach demonstrates that CTS relief can be achieved through radioulnar wrist compression, which raises the carpal arch's height and alleviates pressure on the median nerve, contrasting with existing therapies that focus on widening the carpal tunnel.

Our findings have been supported by exploratory laboratory studies (R21AR062753) and a completed project (R01AR068278), which showed the efficacy of the CASA mechanism through various research methods, including geometric modeling and human studies. Notably, a feasibility clinical study at the Cleveland Clinic revealed promising outcomes, such as improved median nerve conduction and reduced CTS symptoms like pain and numbness. The CASA concept holds significant potential for developing a non-surgical biomechanical treatment for CTS, positioning us to advance its clinical translation.

Integration of LiDAR mapping, machine learning, and haptic meta-surface to develop assistive devices for the visually impaired and Alzheimer’s population.

PI: Kavan Hazeli

The proposed project aims to develop a life-assisting device to improve the quality of life for the visually impaired and individuals with cognitive impairments like dementia and Alzheimer's. Utilizing cutting-edge technology, including LiDAR, machine learning, advanced haptic navigation, and wearable sensors (iMagine), the device will create a real-time haptic 3D map complemented by auditory feedback. Users will receive descriptive audio information about objects in their environment, enhancing navigation, reducing anxiety, and promoting independence.

Specific Aims: This project will provide preliminary data for an NSF Convergence Accelerator submission. The main challenge is converting 4D/3D LiDAR data into sensory stimulation as haptic and acoustic feedback while considering psychoacoustics and psychovisual aspects. For the duration of the Seed grant, we will focus on two aims:

1. Converting LiDAR height map values to electromagnetic forces through a data conversion algorithm, assessing its feasibility.
  
2. Designing and constructing a haptic metasurface using an array of electromagnets to create height maps by manipulating a pin grid array to reflect the electromagnetic waves accurately.
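Aim 1's data-conversion step could, under the simplest assumptions, be a linear mapping from normalized LiDAR heights to per-coil drive levels on the pin-grid metasurface. The linear mapping and current range below are hypothetical placeholders for the actual conversion algorithm:

```python
# Hypothetical sketch of Aim 1: normalize a LiDAR height map and map each
# cell linearly to an electromagnet drive current. The mapping and current
# range are illustrative placeholders, not the project's algorithm.

def heights_to_currents(height_map, i_min=0.0, i_max=1.5):
    """Map a 2D grid of heights (meters) to coil currents (amps)."""
    flat = [h for row in height_map for h in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1.0  # avoid division by zero on a flat scene
    return [[i_min + (h - lo) / span * (i_max - i_min) for h in row]
            for row in height_map]
```

A real implementation would add per-coil calibration and perceptual (psychohaptic) scaling, which the proposal identifies as the hard part.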

Tongue-based sensor system to control robotic arm in high-level paralysis.

PI: Fuglevand, Andrew Joseph

Individuals who have sustained spinal cord injuries at cervical levels 4 and above are often left with paralysis affecting the entire upper and lower limbs. These high-level tetraplegics require assistance with activities of daily living, such as eating, dressing, bathing, toileting, opening doors, and getting into and out of bed. Recent advances in assistive robotic arms attached to wheelchairs have greatly enhanced the ability of tetraplegics with lower-level lesions to perform some of these tasks. Because the control of these robotic arms requires the hand to manipulate a joystick, they are unavailable for use by most high-level tetraplegics. This obstacle could be partially overcome if there were alternate means by which a user could convey desired actions to a robotic device. High-level tetraplegics, however, retain the ability to move the tongue. It seems reasonable to hypothesize, therefore, that signals derived from the tongue could be used to control the actions of a robotic arm. At present, however, there are few sensors available that can readily detect actions performed by the tongue while also being unobtrusive, relatively inexpensive, and intuitive to operate. The main goal of this project, therefore, is to develop such a tongue-based sensor system, in collaboration with the University of Arizona SensorLab, to control movements of a robotic arm accurately and seamlessly. Ultimately, this system could enable individuals with high-level paralysis to interact with their environment in complex ways and greatly enhance their independence, health, and sense of well-being.

2021 - Funded Seed Grants

Intelligent Simulation Environment (ISE): Integrating Sensors, Mixed Reality (XR) and Artificial Intelligence (AI) to Escalate and Validate Complex Adaptive Competencies (CAC)

PI: Janine E. Hinton

The proposed Intelligent Simulation Environment (ISE) prototype will integrate UA SensorLab resources (e.g., eye tracking, motion tracking, heart-rate variability, electrodermal activity) with mixed reality (XR) and artificial intelligence (AI) applications to promote the complex adaptive competencies (CAC) learners need to meet 4th industrial revolution challenges. CACs involve the successful and efficient implementation of evidence-based solutions resulting from individual or team augmented decision making. This decision-making requires accurate situation awareness (SA): seeking, synthesizing, and adapting information from multiple sources in dynamic, technology-rich contexts. Additionally, the ISE will promote trapping of error-generating factors and mitigation strategies when errors occur. The principal project aim is to build UA SensorLab capacity and create an ISE prototype that facilitates future development of portable, high-stakes, domain-specific XR simulations to assess and coach learner CACs. This project involves new collaborations between UA CON simulation, curriculum, and technology specialists, Systems Engineering, the Center for Digital Humanities (CDH), and SensorLab experts. The ISE aligns with the UAHS strategic plan by helping to prepare learners for the future and optimizing interactions between humans and technology to solve complex problems.

Exploring Applications of Consumer Wearable Devices for Movement Characterization

PI: Kristen Renner

Total knee arthroplasty (TKA) is a common procedure for managing severe osteoarthritis, yet up to 20% of patients report dissatisfaction and experience reduced mobility compared to healthy peers. Current approaches to predicting TKA outcomes often rely on survey data and demographics, with limited incorporation of biomechanical assessments due to the constraints of lab-based equipment. This project investigates the potential of wearable devices—specifically consumer-grade (Fitbit Charge 5, Moov Now) and research-grade (ActiGraph GT9X Link, Verisense+)—to provide scalable, real-world data on movement function. Using motion capture technology as the gold standard, the study aims to (1) build a dataset from wearable and motion capture recordings of daily activities, (2) develop data processing pipelines to extract key variables, and (3) evaluate relationships between wearable-derived metrics and biomechanical outcomes. The resulting dataset, processing codes, and protocols will expand SensorLab’s capabilities, facilitate future research, and support extramural funding proposals aimed at improving predictive models of post-TKA outcomes.

IMMERSION: IMmersive MixEd-Reality SimulatION for Difficult Airway Management

PI: Kate E. Hughes

Tracheal intubation is a complex yet common medical procedure in critically ill patients that carries complication rates greater than 50% and a cardiac arrest rate between 1% and 4%.(1) This risk increases dramatically when more than one attempt is required, with nearly all patients experiencing an adverse event by the third attempt.(2) Devices such as video laryngoscopes (VL) have ameliorated anatomic obstacles (1) by eliminating the requirement for a direct line of sight,(3-5) have accelerated skill acquisition, and have reduced failed intubations.(1,6,7) The residual risk of harm to patients results largely from failure to identify and prepare for potential difficulty, or to recognize the need to move on from a failed intubation plan.(8-10) Such threats to patient safety make medical simulation necessary for reducing this risk. While current high-fidelity simulation capabilities provide anatomically accurate representations of various airway configurations for laryngoscopy,(11,12) they are limited in their ability to provide immersive scenarios that induce stress in learners during cognitive tasks. Existing simulation methods do not offer the complex, dynamic scenarios required for injecting cognitive stress. Adding physiological disturbances to the simulated environment induces stress for the learner and increases cognitive load: when faced with a rapidly desaturating patient, for example, the learner must decide either to continue laryngoscopy or to abort the attempt and reoxygenate the patient, leaving them further from completing the "task". Realistically simulating these stressful situations is increasingly difficult because VL allows easy escape from the stress by overcoming the anatomic challenges used to induce difficulty in high-fidelity simulation manikins, which reinforces learned persistence in the face of difficult clinical situations.
Learners are underprepared for this cognitive load in clinical practice, and researchers are unable to isolate which combination of factors influences the "persist vs. abort" decision. The inability to simulate real-world environmental stress highlights fundamental limitations of current simulation capabilities. These limitations diminish the ability to conduct research on, or train for, cognitive tasks and stress inoculation, or to improve patient safety through simulating the nuances of airway management.

Feasibility study of multimodal sensing and haptic feedback in robotic surgery

PI: Philipp Gutruf

Laparoscopic surgery, also known as minimally invasive surgery (MIS), was introduced in the late 1980s and has since been integrated into modern-day surgical practice [1]. Its advantages are widely acknowledged: minimized blood loss, reduced post-operative pain, and shorter recovery time. However, this surgical technique is more challenging than conventional open surgery, with a steeper learning curve. A significant advancement in the evolution of MIS was the development of a clinical, FDA-approved robotic platform in the late 1990s using a master-slave type of robotic system [2]. Subsequent generations of surgical robots overcome several limitations of laparoscopic surgery. For example, 3D vision is facilitated through stereo endoscopy, mitigating the depth-perception limitations of 2D visualization in conventional laparoscopy. Also, the articulated movements of robotic arms with 7 degrees of freedom have expanded reconstructive ability to match that of the human hand. Although robotic surgery platforms resolve and mitigate the limitations of laparoscopic surgery, the predominant robotic systems (e.g., da Vinci, Intuitive Surgical, Inc.) do not provide any haptic or tactile feedback. Surgeons are therefore purely dependent on visual cues while performing an operation. To the surgeon at the console, clamping on bone, bowel, or soft tissue feels the same. They must currently rely on visual cues of tissue deformation to assess the force on the tissues and avoid avulsing small blood vessels or causing inadvertent damage to adjacent structures such as bowel. We strongly believe that the integration of tactile sensation with other sensing modalities in robotic surgery would significantly improve surgeons' skill acquisition [3], [4], improve the intra-operative experience [5], and provide an additional layer of quality assurance and patient safety [5], [6].

Wearable Sensors and Virtual Reality for Tai Chi and Qigong Intervention Research

PI: Chen, Zhao

Tai Chi and Qigong (TCQ) are mindfulness-based exercises with demonstrated benefits for stress reduction, sleep quality, balance, cardiovascular health, pain management, and cognitive function during aging. Despite promising evidence, large-scale and technology-driven applications of TCQ remain limited. Our team has developed in-person and virtual TCQ interventions and is currently investigating their effectiveness in improving the health of older workers. A prior NIH grant submission (VITALITY Study) proposed a virtual TCQ intervention for healthcare workers, but reviewers recommended incorporating wearable sensors to objectively capture outcomes such as stress and sleep. Research using wearable technologies to evaluate TCQ effects is scarce, and direct comparisons across devices are lacking. In addition, while online delivery improves accessibility, two-dimensional platforms limit the ability to fully engage with TCQ’s three-dimensional movements. Virtual reality (VR) offers a novel solution for immersive TCQ delivery, with the potential to enhance feasibility and enjoyment. To strengthen resubmission of our R01 application and expand the research program on technology-enabled mindfulness interventions, this seed project will evaluate wearable sensors for monitoring TCQ outcomes and explore VR as a delivery platform, generating critical preliminary data for future large-scale studies.

StellarScape – Immersive Multimedia Show

PI: Yuanyuan Kay He

StellarScape is an immersive multimedia piece synthesizing music, science, visual art, and technology. The performance includes live musicians, electronic music, and dancers, collaborating with interactive cinematography—fusing kinesthetic and acoustic sensing with cosmic simulation, in real time. StellarScape has an astronomical context. It is the story of a massive star, from its birth to its death, echoing a primordial theme of darkness and light. Stars are born in the murk of cold molecular clouds. They burst into life as fusion starts, then forge elements in their nuclear furnace cores. Massive stars die in cataclysmic explosions, casting heavy elements into space and leaving behind a heart of darkness: a black hole. In this creative performance piece, stars become metaphors for growth and regeneration. StellarScape is the story of us. We are in the universe and the universe is in us. We are stardust brought to life.

Development of an open-source dashboard for team communication experiments

PI: Adarsh Pyarelal 

Good teamwork processes enable teams to perform beyond the sum of their parts. Practices such as closed-loop communication (CLC) have been shown to improve team outcomes, especially in complex, fast-paced, high-stakes environments such as operating theaters. Given the potentially catastrophic consequences of poor team communication in such environments, there is much to be gained by detecting and repairing poor communication practices early. We hypothesize that state-of-the-art AI technologies for speech and natural language processing can be leveraged to automate this process. We propose to develop software infrastructure to test this hypothesis, in the form of a dashboard for human-AI teaming experiments involving spoken natural language team communication. The proposed work builds upon, and is complementary to, two ongoing research efforts. The first is the DARPA-funded Theory of Mind-based Cognitive Architecture for Teams (ToMCAT) project, which has developed a system for analyzing spoken team communication. The second is the recently funded (via a SensorLab Student Research Grant) project Automated real-time detection of closed-loop communication in spoken dialogue, which aims to augment the ToMCAT system with two components: a software module that detects the presence or absence of closed-loop communication in real time, and a prototype wearable audio streaming device (WASD) that streams audio from individual team members to a central processing server during ambulatory collaborative tasks.
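To give a flavor of what automated CLC detection involves (this sketch is illustrative only and is not part of the proposed system, which would use speech recognition and NLP models), a closed-loop exchange is a three-turn pattern: a call-out, a check-back from a different speaker, and a confirmation from the original speaker. The function and token-overlap heuristic below are hypothetical:

```python
# Hypothetical rule-based sketch of closed-loop communication (CLC) detection.
# A CLC episode has three turns: (1) a speaker issues a call-out, (2) another
# speaker reads it back (check-back), (3) the original speaker confirms.
# Here the check-back is approximated by word overlap with the call-out.

def _tokens(utterance: str) -> set:
    return {w.strip(".,!?").lower() for w in utterance.split()}

def find_clc_episodes(turns, overlap=0.5):
    """Return indices i where turns i, i+1, i+2 form a CLC episode.

    `turns` is a list of (speaker, utterance) pairs. The check-back must come
    from a different speaker and repeat at least `overlap` of the call-out's
    words; the confirmation must come from the original speaker.
    """
    episodes = []
    for i in range(len(turns) - 2):
        (s1, u1), (s2, u2), (s3, _) = turns[i], turns[i + 1], turns[i + 2]
        t1, t2 = _tokens(u1), _tokens(u2)
        if not t1 or s2 == s1 or s3 != s1:
            continue
        if len(t1 & t2) / len(t1) >= overlap:
            episodes.append(i)
    return episodes

dialogue = [
    ("surgeon", "Clamp the left renal artery"),
    ("nurse", "Clamping the left renal artery"),
    ("surgeon", "Correct"),
    ("nurse", "Suction ready"),
]
print(find_clc_episodes(dialogue))  # [0]
```

A real-time system would apply this kind of pattern matching to transcripts produced continuously from the streamed audio, flagging stretches of dialogue where expected check-backs never occur.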

Preliminary Development of a Virtual Reality Neuropsychological Assessment (VRNA)

PIs: William D. S. Killgore, Janet M. Roveda

During the past 20 years, there have been more than 414,000 traumatic brain injuries (TBIs) reported among military personnel, with the vast majority (~83%) categorized as mild (mTBI). Cognitive deficits from mTBI often include problems with attention, memory, processing speed, and executive functioning, which can affect complex cognitive abilities. It is therefore of critical importance to provide military and civilian medical personnel with advanced technologies to accurately determine the extent of injuries and provide automated decision guidance regarding return-to-duty, temporary rest, or evacuation. The gold standard for determining cognitive deficits from an injury is a traditional clinician-administered neuropsychological assessment battery. However, such assessments are lengthy, resource-intensive, and impractical for far-forward military settings. There is a critical need for portable, automated, and ecologically valid tools that use advanced sensor technology and machine learning to rapidly assess neurocognitive deficits due to brain injury in austere environments. We propose to develop and validate an innovative approach to neuropsychological assessment that combines advanced sensor technology with deep neural network (DNN) learning to assess multiple domains of real-time cognitive performance using a brief simulation on a portable VR device. This approach is time-efficient, ecologically valid, and capable of providing personalized neurocognitive assessment in only a few minutes. The resulting real-time data stream will be analyzed using DNN AI/machine learning algorithms to extract multiple dimensions of cognitive performance simultaneously. 
The goal of this Seed Grant is to conduct the preliminary development necessary to demonstrate proof-of-concept for a portable neuropsychological assessment system, enabling future submission to the Department of Defense for large-scale funding to fully develop the Virtual Reality Neuropsychological Assessment (VRNA) system.
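As a simplified illustration of what "extracting dimensions of cognitive performance from a real-time data stream" means (the proposal's actual approach uses deep neural networks on raw VR sensor data; the event format and function below are hypothetical), classical proxies can be computed directly from a timestamped stimulus-response log:

```python
# Toy stand-in for the DNN-based pipeline: compute two classical cognitive
# proxies from a timestamped event log produced during a VR task.
#   - mean reaction time  -> processing-speed proxy
#   - hit rate            -> attention proxy
from statistics import mean

def score_stream(events):
    """`events` is a list of dicts with keys 'stimulus_t', 'response_t'
    (None if the stimulus was missed), and 'correct' (bool)."""
    rts = [e["response_t"] - e["stimulus_t"]
           for e in events if e["response_t"] is not None]
    hits = [e["correct"] for e in events]
    return {
        "mean_rt_s": mean(rts) if rts else None,
        "hit_rate": sum(hits) / len(hits) if hits else None,
    }

log = [
    {"stimulus_t": 0.0, "response_t": 0.45, "correct": True},
    {"stimulus_t": 2.0, "response_t": 2.55, "correct": True},
    {"stimulus_t": 4.0, "response_t": None, "correct": False},
]
print(score_stream(log))
```

Whereas these hand-crafted proxies each capture one domain, the proposed DNN approach would learn many such dimensions simultaneously from richer sensor streams (movement, gaze, response patterns) within a single brief simulation.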