Abstract
Measures of human movement dynamics can predict outcomes like injury risk or musculoskeletal disease progression. However, these measures are rarely quantified in clinical practice due to the prohibitive cost, time, and expertise required. Here we present and validate OpenCap, an open-source platform for computing movement dynamics using videos captured from smartphones. OpenCap’s web application enables users to collect synchronous videos and visualize movement data that is automatically processed in the cloud, thereby eliminating the need for specialized hardware, software, and expertise. We show that OpenCap accurately predicts dynamic measures, like muscle activations, joint loads, and joint moments, which can be used to screen for disease risk, evaluate intervention efficacy, assess between-group movement differences, and inform rehabilitation decisions. Additionally, we demonstrate OpenCap’s practical utility through a 100-subject field study, where a clinician using OpenCap estimated movement dynamics 25 times faster than a laboratory-based approach at less than 1% of the cost. By democratizing access to human movement analysis, OpenCap can accelerate the incorporation of biomechanical metrics into large-scale research studies, clinical trials, and clinical practice.
Introduction
Evaluating the dynamics of human movement is important for understanding and managing musculoskeletal and neuromuscular diseases. For example, the loading of osteoarthritic joints predicts osteoarthritis progression1, the distribution of moments generated by muscles about lower-extremity joints when rising from a chair relates to falling in older adults2–4, and the between-limb asymmetry of muscle and ground reaction forces while performing demanding tasks relates to functional outcomes after joint surgery5–7. Despite their utility, metrics of movement dynamics are rarely measured in clinical practice. Instead, visual movement evaluations or general functional tests that require basic instruments, like a stopwatch or goniometer, are used to inform clinical decisions and as outcomes for clinical trials.
The quantitative analysis of movement dynamics can provide deeper and more reproducible insights than visual evaluations and simple functional tests; however, this analysis is resource intensive, which has impeded its use in large-scale studies and clinical practice. Traditionally, motion analysis requires a fixed lab space with more than $150,000 of equipment (Figure 1, top row). Kinematics (e.g., joint angles) are measured with a marker-based motion capture system that uses eight or more specialized cameras to capture the three-dimensional (3D) trajectories of markers placed on a subject. Joint-level kinetics (e.g., joint moments and powers) can be estimated with the additional measurement of ground reaction forces from force plates mounted beneath the floor. Musculoskeletal modeling and simulation tools8–10 combine measures of kinematics, kinetics, and muscle activation from electromyography to enable deeper investigations of motor control and musculoskeletal loading (e.g., muscle coordination and joint forces). This comprehensive analysis of movement is infrequently used outside of small-scale research studies because collecting data on a single participant, processing it, and, optionally, generating dynamic musculoskeletal simulations typically takes several days for a trained expert.
Figure 1: (Top row) Marker-based movement analysis usually occurs in a motion capture laboratory, and a comprehensive study of musculoskeletal dynamics typically requires more than two days of an expert’s time and equipment worth more than $150,000. (Bottom row) OpenCap enables the study of musculoskeletal dynamics in less than 10 minutes of hands-on time and with equipment worth less than $700 (assuming users need to purchase new mobile devices). OpenCap can be used anywhere with internet access and requires a minimum of two iOS devices (e.g., iPhones or iPads). (Right panel) OpenCap enables the estimation of kinematic, kinetic, and musculotendon parameters, many of which were previously only accessible using marker-based motion capture and force plate analysis.
Studies of movement dynamics with hundreds of participants have elucidated biomechanical markers that predict injury risk or surgical outcomes11–13. However, studies of this scale are expensive and rare—the median number of subjects included in biomechanics studies is between 12 and 2114,15. There is a need for inexpensive, scalable, and accurate tools for estimating movement dynamics on orders of magnitude more individuals in their natural environments. Modern data science techniques could then leverage these large datasets to explore the role of movement in health and disease, facilitating the identification and clinical translation of quantitative movement biomarkers.
Mobile tools for estimating kinematics have been developed, but most are still too expensive and time consuming for large-scale research studies and clinical translation, and none enable full-body analysis of movement dynamics. Inertial measurement units, the most widely used of these tools, can accurately estimate kinematics16, but commercially available sensors remain expensive, time-consuming to don and doff, and utilize proprietary algorithms. Recent advances in physics-based simulation of the musculoskeletal system have estimated kinetics from inertial-measurement-unit-based motion capture17,18, but these algorithms are not publicly available and have not been translated beyond small-scale feasibility studies.
Measuring kinematics with video cameras is another promising approach made possible by recent advancements in human pose estimation algorithms19. Open-source, two-dimensional (2D) pose estimation algorithms (e.g., OpenPose20) have enabled 2D kinematic analyses21 and can generate inputs for machine learning models that predict kinematic and kinetic measures22,23. While these machine learning models are useful for specific applications, they may not generalize to other measures, tasks, and populations not represented in their training data. Another potentially more generalizable approach is to triangulate the body keypoints (e.g., joint centers) identified by pose estimation algorithms on multiple videos24–29 and track these 3D positions with a musculoskeletal model and physics-based simulation. However, the sparse set of 3D keypoints identified by these algorithms does not fully characterize the translations and rotations of all body segments; thus, it is unclear whether these keypoints are expressive and accurate enough to inform movement research. Commercial markerless motion capture systems accurately estimate kinematics30, but they typically require many wired cameras, proprietary software, and specialized computing resources. The ubiquity of smartphone cameras could enable video-based motion capture without the need to purchase specialized equipment, but it is unclear whether kinematics can be accurately estimated from a small number of devices that lack hardware synchronization. If the challenges of computing accurate kinematics and kinetics from smartphone video can be addressed, smartphone-based analysis of musculoskeletal dynamics has the potential to overcome the translational barriers faced by current movement analysis technologies.
Here we introduce OpenCap, open-source, web-based software that is freely available to the research community for estimating the 3D kinematics and kinetics of human movement from videos captured with two or more smartphones (Figure 1, bottom row). OpenCap brings together decades of advances in computer vision and musculoskeletal simulation to make the analysis of movement dynamics available without specialized hardware, software, or expertise. We first validate kinematic and kinetic measures estimated with OpenCap against gold standard measures computed with marker-based motion capture and force plates. Next, we explore whether OpenCap estimates kinetic measures with sufficient accuracy to be used for disease risk screening, evaluating intervention efficacy, studying between-group movement differences, and tracking rehabilitation progress. After validating these measures in the laboratory, we highlight how OpenCap enables clinicians to measure kinetics of large cohorts in real-world settings.
Results
Setting up a data collection with OpenCap takes under five minutes and requires two iOS devices (iPhone, iPad, or iPod), two tripods, a calibration checkerboard (printed with a standard printer), and another device to run OpenCap’s web application (e.g., a laptop). After pairing the iOS devices to the web application, users are guided through camera calibration, data collection, and visualization of 3D kinematics. Kinematics are estimated from video using deep learning models and inverse kinematics in OpenSim8,10, and kinetics are estimated using a physics-based musculoskeletal simulation approach (Figure 2). OpenCap leverages cloud computing for data processing using a scalable server architecture.
Figure 2: To collect data, users open an application on two or more iOS devices and pair them with the OpenCap web application. The web application enables users to record videos simultaneously on the iOS devices and to visualize the resulting three-dimensional (3D) kinematics. In the cloud, 2D keypoints are extracted from multi-view videos using open-source pose estimation algorithms. The videos are time synchronized using cross-correlations of keypoint velocities, and 3D keypoints are computed by triangulating these synchronized 2D keypoints. These 3D keypoints are converted into a more comprehensive 3D anatomical marker set using a recurrent neural network (LSTM) trained on a large motion capture dataset. 3D kinematics are then computed from marker trajectories using inverse kinematics and a musculoskeletal model with biomechanical constraints. Finally, kinetic measures are estimated using muscle-driven dynamic simulations that track 3D kinematics.
We validated OpenCap using two iPhones against marker-based motion capture and force plate analysis in a cohort of ten healthy individuals for several activities (walking, squatting, rising from a chair, and drop jumps). OpenCap estimated joint angles with a mean absolute error (MAE) of 4.5°, ground reaction forces with an MAE of 6.2% bodyweight, and joint moments with an MAE of 1.2% bodyweight*height (additional validation in Table 1 in Methods; Methods: Validation; and Tables S1–S4 and Figures S1–S12 in Supplementary Information).
Table 1: Errors for each activity were averaged over trials and participants (n=10), and the reported mean is an average over activities and degrees of freedom (six for pelvis position and orientation [kinematics only], three for the lumbar, three per hip, one per knee, and two per ankle). Forces are expressed in percent bodyweight (BW) and moments in percent BW times height (ht). Kinematic and joint moment errors are presented as the mean and range over the degrees of freedom, and kinetic errors are additionally presented as the MAE as a percentage of the range. Root mean squared errors in kinematics and kinetics are available in Tables S2–S4 in Supplementary Information. Average kinematic, ground reaction force, and joint moment waveforms estimated using OpenCap and Mocap are presented in Figures S1–S12 of Supplementary Information.
We then explored whether OpenCap is sufficiently accurate to estimate measures of joint loading that could be used to screen for individuals at risk of rapid progression of medial knee osteoarthritis and to evaluate the efficacy of a non-surgical intervention. We first evaluated how accurately OpenCap estimates the early-stance peak knee adduction moment, which predicts rapid progression of medial knee osteoarthritis1. The ten healthy individuals walked naturally (i.e., with a self-selected strategy) and with a trunk sway gait modification that typically reduces the knee adduction moment31. OpenCap predicted the early-stance peak knee adduction moment with r2=0.80 (r: Pearson correlation coefficient) and an MAE of 0.30% bodyweight*height compared to marker-based motion capture and force plates (Figure 3). This error is smaller than a range of thresholds for detecting knee osteoarthritis symptoms and progression (0.5–2.2% bodyweight*height1,32–34). We then evaluated whether OpenCap could estimate changes induced by the trunk sway modification in the peak knee adduction moment as well as the peak medial contact force, which is a more comprehensive loading metric that is often targeted by knee osteoarthritis interventions35,36. At the group level, OpenCap captured expected reductions in the early-stance peak knee adduction moment and peak medial contact force from the trunk sway gait modification (16–33% reductions, P<.006; t test and Wilcoxon signed rank test, n=10, Figure 3b). Significant changes in the same direction were also detected with motion capture and force plates (21–46% reductions, P<.016; t tests, n=10); further details about these statistical tests can be found in Table S5 in Supplementary Information. For this sample size, OpenCap had a 92% chance (post-hoc power averaged across tests) of detecting these expected group differences at the significance level alpha=.05, compared to the 77% chance from motion capture and force plates. At the individual level, OpenCap correctly predicted the directional change in both peak loading measures (decrease for nine individuals and increase for one individual) induced by trunk sway. OpenCap’s ability to accurately estimate knee loading and changes in loading during walking suggests that it could be used to identify individuals with medial knee osteoarthritis who may be at risk of rapid disease progression and to evaluate the effect of a gait modification on individual and group levels37,38.
Figure 3: We evaluated how accurately OpenCap estimates the knee adduction moment (KAM), a measure of medial knee loading that predicts knee osteoarthritis progression, and how knee loading changes with a modified walking pattern. Participants (n=10) walked naturally and with a trunk sway gait modification. a, OpenCap estimated the early-stance peak KAM with r2=0.80, compared to an analysis using marker-based motion capture and force plates (Mocap). The KAM is normalized by bodyweight (BW) and height (ht). b, The mean (bar) and standard deviation (error bar) across participants (open circles) are shown for the changes in the peak KAM and peak medial contact force (MCF), which is a more comprehensive measure of medial knee loading, from natural to trunk sway walking (*P<.05). OpenCap detected the reductions in peak KAM and MCF (P<.006, t test and Wilcoxon signed rank test) that were measured with Mocap (P<.016, t tests).
We then explored whether OpenCap is useful for studying differences in movement dynamics that commonly exist between young and older adults. Strategies for rising from a chair vary with age and are associated with different muscle force requirements2. Older adults often use a rising strategy with increased trunk flexion, which shifts the muscular demand from the knee extensors to the hip extensors and ankle plantarflexors39; this strategy is associated with low functional muscle strength4, which relates to fall risk3. We simulated differences in rising strategies between age groups by instructing ten healthy individuals to rise from a chair five times naturally, then five times with increased trunk flexion (Figure 4). At the group level, OpenCap estimated the expected reduction in the knee extension moment (P=.024, t test, n=10) and increase in the hip extension (P=.020, t test, n=10) and ankle plantarflexion moments (P=.004, t test, n=10), averaged over the rising phase, from the natural to the increased trunk flexion condition. The direction of these changes matched what was measured with motion capture and force plates (P=.002–.003, t tests, n=10); further details about these statistical tests can be found in Table S6 in Supplementary Information. For this sample size, OpenCap had a 65% chance (post-hoc power averaged across tests) of detecting these expected between-condition differences at the significance level alpha=.05, compared to the 89% chance from motion capture and force plates. OpenCap also predicted the peak knee extension moment with r2=0.65 compared to marker-based motion capture and force plates. Together, these findings suggest that OpenCap can be used to study differences in movement dynamics between young and older adults and can identify individuals with low knee extensor strength who may benefit from muscle strengthening interventions2.
Figure 4: To evaluate OpenCap’s ability to detect between-group differences in dynamics, we computed differences in lower-extremity joint moments while rising from a chair that commonly exist between young and older adults. Individuals (n=10) stood naturally and with increased trunk flexion, a strategy used by individuals with knee extensor weakness that shifts muscle demand to the hip extensors and ankle plantarflexors. a, The mean (bar) and standard deviation (error bar) across participants (open circles) are shown for the changes in knee extension, hip extension, and ankle plantarflexion moments, averaged over the rising phase, from the natural to trunk flexion condition (*P<.05). OpenCap identified the changes in joint moments (P=.004–.024, t tests) that were identified with motion capture and force plates (Mocap, P=.001–.002, t tests). b, The rising-phase-averaged knee extension moment values for each participant and condition are shown. OpenCap estimated the knee extension moment with r2=0.65 compared to simulations that used motion capture and force plate data as input (Mocap).
Finally, we explored whether OpenCap can accurately estimate measures of muscle force associated with rehabilitation progress. Restoring between-limb symmetry in knee extensor muscle force generation is often a goal of rehabilitation following knee surgeries, and identifying persistent asymmetry prior to rehabilitation discharge can prevent poor functional outcomes5,6,40. To simulate post-surgical asymmetries, we instructed the ten healthy individuals to perform five squats naturally, then asymmetrically by reducing the force under their left foot (Figure 5). Since muscle activation can be measured more directly than muscle force, we compared vasti (knee extensor) muscle activation measured with electromyography to activation estimated with OpenCap. We defined ground truth activation asymmetry using electromyography and a clinically relevant symmetry index threshold41 of 1.15. OpenCap classified squats as being symmetric or asymmetric with an area under the receiver operator characteristic curve (AUC) of 0.83 and an accuracy of 75% at the optimal symmetry index threshold of 1.13 (Figure 5), which was similar to the performance of simulations that used motion capture and force plate data (AUC=0.82, accuracy=70%).
Figure 5: To assess the utility of OpenCap for informing rehabilitation decisions, we sought to identify between-limb asymmetries in knee extensor muscle (vasti) function that indicate incomplete rehabilitation and relate to poor post-surgical functional outcomes. Participants (n=10) performed squats naturally, then asymmetrically, where they were instructed to reduce the force under the left foot. a, b, The mean (line) and standard deviation (shading) across participants are shown for the vasti muscle activation of the left (unweighted) leg measured with electromyography (EMG) and estimated using OpenCap. Muscle activations are normalized by the maximum value for each participant and measurement modality. c, OpenCap identified peak vasti activation asymmetry between the left and right leg (asymmetry defined from EMG and clinically relevant symmetry threshold), with area under the receiver operator characteristic curve (AUC) of 0.83 and accuracy of 75%. This was similar to the performance of simulations that used marker-based motion capture and force plate data as input (Mocap sim., AUC=0.82, accuracy=70%).
To demonstrate OpenCap’s utility in real-world conditions, we extended this analysis of rehabilitation tracking to a field study. A clinician, who was not an expert in movement analysis, used OpenCap to evaluate knee extension moment symmetry in 100 individuals performing natural and asymmetric squats in the community. On average, set up and data collection took five minutes per participant, and for a single squat, kinematics and kinetics were computed automatically in two and 35 minutes on a single server, respectively. In total, data collection took eight hours for 100 subjects, and computation took 31 hours on a 32-thread CPU (kinetic computation was parallelized). OpenCap’s peak knee extension moment estimates could discriminate between the symmetric and asymmetric conditions with AUC=0.90 and accuracy=85% at the optimal symmetry index threshold of 1.33 when using the condition instruction (i.e., natural or asymmetric) as ground truth (Figure 6a,b). OpenCap also detected within-subject improvements in peak knee extension moment symmetry from the asymmetric to the natural condition with AUC=0.93 and accuracy=89% at the optimal threshold of 0.26 (Figure 6c,d). Together, our lab and field studies demonstrate that OpenCap can detect asymmetries in vasti force generation that may be useful for guiding rehabilitation decisions and can track improvements in symmetry expected to occur over the course of rehabilitation.
Figure 6: To demonstrate the practical utility of OpenCap for tracking rehabilitation progress, we enrolled 100 participants in a clinician-led field experiment. Participants performed symmetric squats and asymmetric squats, where they were instructed to reduce the force under the left foot, which likely resulted in an asymmetry between the left and right knee extension moments. We first evaluated the utility of OpenCap as a screening tool to detect peak knee extension moment asymmetries. a, The distributions of knee extension moment symmetry indices for both squat conditions are shown, with a symmetry index larger than one indicating a lower peak knee extension moment for the left (unweighted) leg compared to the right leg. b, OpenCap’s symmetry index estimates classified between natural and asymmetric squats with an area under the receiver operator characteristic curve (AUC) of 0.90 and accuracy of 85%. We then evaluated the utility of OpenCap for detecting changes in peak knee extension moment symmetry that would be expected to occur over time during rehabilitation. c, The distributions of the average difference in the symmetry index between the asymmetric and natural conditions (i.e., hypothetical improved symmetry over time; red) and the average difference in the symmetry index between the three trials in the asymmetric condition (i.e., hypothetical unchanged symmetry over time; gray) are shown. d, OpenCap detected improvements in symmetry from the asymmetric to the natural condition with AUC=0.93 (improved compared to unchanged distributions from c) and accuracy=89%.
Discussion
This study describes OpenCap, a platform that combines computer vision and musculoskeletal simulation to quantify human movement dynamics from smartphone videos. We showed that OpenCap is sufficiently accurate for several research and clinical applications. OpenCap estimated changes in dynamic measures between conditions with similar statistical power (0.65–0.92) as the gold standard technique (0.77–0.89), it estimated dynamic measures that predict adverse outcomes related to osteoarthritis and fall risk with r2=0.65–0.80, and it estimated dynamic measures that can inform rehabilitation decision making with classification accuracies of 75–89%. Our field study demonstrated how OpenCap enables clinicians and researchers alike to analyze movement dynamics in the field and in large populations.
OpenCap reduces the cost, time, and expertise barriers to analyzing movement dynamics. OpenCap’s hardware can be acquired for between $40 and $700, depending on whether users need to purchase new iOS devices (Figure 1). This is about 215 times cheaper than traditional motion capture laboratory equipment, and it does not require a dedicated laboratory space. Hands-on time for measuring movement dynamics with OpenCap is the time for setting up the mobile devices and the time to perform the movements (Figure 1, bottom row). During our in-field data collection, the five minutes of hands-on time per participant was about 25 times less than a comparable analysis in a motion capture laboratory (about 2 hours). OpenCap does not require specialized software or expertise, which bridges gaps between the computer vision, biomechanics, and clinical movement science communities. Most computer vision algorithms require computer science knowledge to run and most simulation tools require biomechanics knowledge to operate, but OpenCap automates these processes, making advancements in these fields more accessible to clinicians and researchers. OpenCap also meets Stanford University’s security requirements for cloud-based systems using high-risk data (e.g., protected health information) and ensures end-to-end data encryption. To further facilitate ease of use, we provide tutorials and examples on a companion website (opencap.ai).
Our results demonstrate OpenCap’s potential clinical utility as a screening tool and for informing rehabilitation decisions. Future studies could test OpenCap’s ability to screen for risk of non-contact ligament injury in athletes11,42, or to predict efficacy of surgery in individuals with cerebral palsy12,13. OpenCap assessments may also be fast enough to enable movement screens to become part of routine clinical care, allowing clinicians to track function over time, and following an injury or surgery, to benchmark rehabilitation status against pre-injury measures43.
By enabling large-scale, out-of-lab studies, OpenCap can accelerate movement research. OpenCap detected between-condition differences with similar statistical power as motion capture and force plate analysis, but in substantially less time. This accuracy and efficiency makes prospective injury risk studies that require hundreds of participants more feasible, enables the incorporation of movement dynamics into population-scale health studies that typically only use pedometry (e.g., the Osteoarthritis Initiative44 or the UK Biobank45), and facilitates the development of more sensitive functional outcome measures for clinical trials. By automatically computing kinematics, OpenCap is not susceptible to errors introduced by between-experimenter variance in motion capture marker placement46. This could reduce variability in multi-center studies47 and enable movement data to be compiled into homogeneous, sharable datasets that are useful to the machine learning community. Importantly, OpenCap’s portability will enable studies of populations that are often underrepresented in movement research due to time and geographic constraints.
OpenCap uses machine learning to improve the fidelity of video-based kinematic estimates and physics-based modeling to maintain generalizability (Figure 2). We combined a deep learning model suited for time series data with a constrained biomechanical model to estimate 3D kinematics from video keypoints that, alone, are insufficient to characterize 3D kinematics. Our deep learning model predicts a comprehensive set of anatomical markers from the sparse video keypoints labeled in common computer vision datasets (e.g., COCO48). Using the predicted anatomical markers, instead of video keypoints, with our biomechanical model improved kinematic accuracy, averaged across all degrees of freedom, by 3.4° (see Methods and Table S2 in Supplementary Information). These improvements were greatest for the hip flexion, pelvic tilt, and lumbar flexion degrees of freedom (4.9–32.6° improvement), which are susceptible to large angular errors due to the sparsity of video keypoints between the hips and shoulders. Additionally, the deep learning model architecture provides temporal consistency, making OpenCap more robust to brief occlusions or mis-identified keypoints. Finally, despite some task-dependent tuning of the problem formulation, muscle-driven dynamic simulations are a more generalizable approach for estimating kinetics than an end-to-end machine learning approach. This enabled us to study the dynamics of different movements without training data for each activity.
The accuracy of OpenCap’s kinematic and kinetic estimates is similar to state-of-the-art markerless motion capture solutions. OpenCap’s kinematic error (range of root mean squared error [RMSE] across lower-extremity degrees of freedom: 2.0–10.2°) is similar to errors reported for inertial-measurement-unit-based approaches (RMSE: 2.0–12° for walking, running, and daily living activities18,49–55) and commercial and academic video-based systems with eight cameras (RMSE: 2.6–11° for walking, running, and cycling activities30,56). Furthermore, in contrast with most inertial-measurement-unit-based approaches, OpenCap estimates global translations (e.g., pelvis displacement), enabling estimation of whole-body measures like center-of-mass trajectory. Interestingly, kinematic estimates did not substantially improve when using more than two cameras (see Methods and Table S2 in Supplementary Information), suggesting that two cameras are sufficient for analyzing activities like those included in this study. To our knowledge, there is no previous example of computing whole-body kinetics from video alone; however, OpenCap’s kinetic estimates are comparable to inertial-measurement-unit-based approaches. For example, OpenCap’s root mean squared errors in ground reaction force (1.5–11.1% bodyweight) and lower-extremity joint moment (0.3–1.7% bodyweight*height) predictions during walking (Tables S3–S4 in Supplementary Information) are comparable to those resulting from using a 17-sensor inertial measurement unit suit (1.7–9.3% bodyweight and 0.5–2.2% bodyweight*height, respectively18,57). OpenCap also predicted the first peak knee adduction moment during walking with 44% higher accuracy than a machine learning model trained specifically to predict this measure from marker positions that could be extracted from video23.
OpenCap transforms the outputs of pose detection algorithms into valuable insights for studying human movement. We designed OpenCap to integrate different pose detection algorithms, and we found only minor differences in kinematics when testing different algorithms (see Methods and Tables S1–S2 in Supplementary Information). With the recent advances in joint center estimation from single-view25,29,58,59 and multi-view25–29 video, we expect OpenCap’s accuracy in estimating kinematics and kinetics to improve as more accurate pose estimation algorithms are released. By sharing our data and source code, we encourage researchers to benchmark their models using our data and to contribute to OpenCap’s development by adding support for their models.
Our study has several limitations. First, we tested OpenCap’s ability to estimate informative kinetic measures by having healthy individuals simulate different movement patterns associated with pathology or treatment. While the simulated movements were similar to those reported in the populations of interest (see Methods: Applications and Statistics), and OpenCap could distinguish differences in kinetics between these simulated conditions, future work is needed to validate these measures in the populations of interest. Second, the deep learning model that augments our 3D marker set may not generalize to activities outside of the distribution of activities that it was trained on. We generated the training data for this model using standard OpenSim kinematics data, so additional datasets could be added to the training set in the future. Additionally, estimating kinetics requires some task-dependent user inputs, which is a limitation of any optimization-based muscle-driven simulation. We have provided optimization problem formulations that work well for several activities. Overall, if future applications require high accuracy rather than generalizability, OpenCap’s accuracy could likely be improved with task-specific tuning of the deep learning model and optimization problem formulation.
In conclusion, OpenCap allows non-experts to analyze human movement dynamics in an order of magnitude less time and for several orders of magnitude less money than was previously possible with marker-based motion capture and force plates. We expect that OpenCap will catalyze large-scale studies of human movement, the sharing of motion datasets, and the translation of movement biomarkers into clinical practice.
Methods
1. Design
OpenCap comprises several steps to estimate movement dynamics from videos. These steps include calibrating cameras, collecting and processing videos, estimating marker positions, estimating kinematics, and generating physics-based dynamic simulations of movements. This pipeline is implemented in Python (v3.7.10). OpenCap’s web application guides users through each step, and cloud instances are used for computing (Figure 2).
a. Camera calibration
OpenCap models the iOS device cameras using a fifteen-parameter pinhole camera model60 and computes parameters using OpenCV61. At the beginning of a data collection, OpenCap loads the pre-computed intrinsic parameters related to each device’s camera hardware and recording settings (principal point, focal length, and distortion parameters) from a database that we created of recent iOS devices. Next, the web application guides users to place a checkerboard in view of all cameras, and OpenCap automatically computes the extrinsic parameters (camera transformation relative to the global frame) from a single image of a checkerboard. We used a precision-manufactured 720x540 mm checkerboard to pre-compute the intrinsic camera parameters for each device in our database (see Supplementary Information for details about intra- and inter-phone intrinsic parameter testing). A 210x175 mm checkerboard printed on A4 paper and mounted to a flat surface is sufficient for computing extrinsic camera parameters during each data collection. We found minimal kinematic differences when using the printed checkerboard, compared to the precision-manufactured checkerboard, to calibrate the cameras (see Supplementary Information).
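As an illustration of this step, the following is a minimal sketch that computes a camera’s extrinsic parameters from a single checkerboard image with OpenCV, assuming the intrinsic matrix K and distortion coefficients dist have already been loaded from the device database; the board dimensions and function names are illustrative, not OpenCap’s exact implementation.

```python
import cv2
import numpy as np

BOARD = (8, 5)      # illustrative inner-corner count (columns, rows)
SQUARE_MM = 35.0    # illustrative side length of one checkerboard square

def compute_extrinsics(image_path, K, dist):
    """Estimate the camera pose (rotation, translation) from one checkerboard image."""
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if not found:
        raise RuntimeError("Checkerboard not detected in this view.")
    # Refine detected corners to sub-pixel accuracy.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
    # 3D corner coordinates in the board frame; the board plane defines z = 0.
    obj = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
    obj[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE_MM
    # Perspective-n-point: solve for the board-to-camera transformation given
    # the pre-computed intrinsics.
    ok, rvec, tvec = cv2.solvePnP(obj, corners, K, dist)
    R, _ = cv2.Rodrigues(rvec)  # axis-angle vector -> 3x3 rotation matrix
    return R, tvec              # camera extrinsics relative to the board frame
```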
b. Video collection and pose estimation
After calibration, users can proceed with simultaneously recording videos on all devices through the web application. Videos are recorded at a resolution of 720x1280 pixels and a frame rate of 60 Hz, with the camera focus distance set to a fixed value.
Recorded videos are then processed using video pose detection algorithms. OpenCap currently supports two algorithms: OpenPose20 and HRNet62–65. These algorithms were selected for their performance and their inclusion of foot keypoints. For each video, and at each time frame, both algorithms return the two-dimensional (2D) position of body keypoints as well as a confidence score (between 0 and 1) indicating the algorithm’s confidence in the keypoint position. Twenty body keypoints are included for further analysis (neck, mid hip, left and right shoulders, hips, knees, ankles, heels, small and big toes, elbows, and wrists). OpenCap implements custom algorithms for processing 2D keypoint positions (e.g., handling keypoint occlusion) and time synchronizing them across videos using cross-correlations of keypoint velocities (see Supplementary Information for details).
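To illustrate the synchronization step, here is a minimal sketch that estimates the time offset between two cameras by cross-correlating the velocity of a single keypoint; OpenCap’s actual implementation handles multiple keypoints, confidence scores, and occlusions, and the function and variable names here are illustrative.

```python
import numpy as np
from scipy import signal

def estimate_lag_seconds(pos_a, pos_b, fs=60.0):
    """Estimate the time offset (s) of camera B relative to camera A.

    pos_a, pos_b: (n_frames,) vertical pixel position of the same keypoint
    in each 60 Hz video.
    """
    # Differentiate to velocities: velocity peaks are sharper alignment cues
    # than raw positions.
    vel_a = np.gradient(pos_a) * fs
    vel_b = np.gradient(pos_b) * fs
    # Remove the mean so the correlation is not dominated by constant offsets.
    vel_a = vel_a - vel_a.mean()
    vel_b = vel_b - vel_b.mean()
    xcorr = signal.correlate(vel_a, vel_b, mode="full")
    # For mode="full", index i corresponds to a lag of i - (len(vel_b) - 1) frames.
    lags = np.arange(-(len(vel_b) - 1), len(vel_a))
    return lags[np.argmax(xcorr)] / fs
```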
c. Triangulation and marker-set augmentation
OpenCap triangulates the synchronized 2D video keypoint positions to compute 3D positions. OpenCap uses a Direct Linear Transformation algorithm for triangulation66, and weights the contribution of individual cameras in the least-squares problem with the corresponding keypoint confidence score56. There are two major limitations of using 3D keypoint positions triangulated from video for biomechanical analysis. First, the video keypoint set is not sufficient to fully define the kinematics of all degrees of freedom of the body segments. Tracking these limited keypoints using a model with biomechanical joint constraints mitigates this issue for some, but not all, body segments. For example, keypoints at the hips and shoulders are insufficient for robustly determining sagittal-plane hip, pelvis, and lumbar kinematics. Second, most pose estimation algorithms identify keypoints on a frame-by-frame basis, so the resulting 3D keypoint trajectories are often physically unrealistic, especially in the presence of misidentified or occluded keypoints.
To overcome these limitations, we trained two long short-term memory (LSTM) networks to predict the 3D positions of 43 anatomical markers from the 3D positions of the 20 triangulated video keypoints. The set of anatomical markers corresponds to what is commonly used for marker-based motion capture67 to robustly determine 3D joint kinematics. We chose LSTM networks as they leverage time series data, which may improve the temporal consistency of the output marker position trajectories. We trained two LSTM networks: an arm model to predict the positions of eight arm markers from the positions of nine arm and torso keypoints, and a body model to predict the positions of 35 body markers from the positions of 13 lower-limb and torso keypoints. Both models also use height and weight as inputs. To train the networks, we synthesized corresponding pairs of 3D video keypoints and 3D anatomical markers from 108 hours of motion capture data processed in OpenSim from published biomechanics studies68–77 (see Supplementary Information for details on dataset generation). We split the data into a training set (∼80%), validation set (∼10%), and test set (∼10%). Prior to training, we expressed the 3D positions of each marker with respect to a root marker (the midpoint of the hip keypoints), normalized the 3D positions by the subject’s height, sampled at 60 Hz, split the data into non-overlapping time sequences of 0.5 s, and added Gaussian noise (standard deviation: 18 mm) to each time step of the video keypoint positions based on a range of previously reported keypoint errors23,25,28. For both models, we tuned hyperparameters using a random search. The RMSEs on the test set were 8.0 and 15.2 mm for the body and arm models, respectively (see Supplementary Information for details about model architecture and training). In practice, OpenCap uses both LSTM networks to predict root-centered arm and body anatomical marker positions from root-centered 3D video keypoints. It then adds the root keypoint position to all predicted positions.
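The confidence-weighted triangulation can be sketched as follows, assuming each camera’s 3x4 projection matrix (intrinsics composed with extrinsics) is available from calibration; this is a minimal illustration of weighted DLT rather than OpenCap’s exact code.

```python
import numpy as np

def triangulate_weighted(points_2d, confidences, proj_mats):
    """Triangulate one 3D keypoint from n >= 2 camera views via weighted DLT.

    points_2d:   (n, 2) pixel coordinates of the keypoint in each view
    confidences: (n,) pose-detector confidence scores in [0, 1]
    proj_mats:   list of n 3x4 camera projection matrices
    """
    rows = []
    for (u, v), w, P in zip(points_2d, confidences, proj_mats):
        # Each view contributes two linear equations; weighting the rows by
        # the confidence score down-weights uncertain detections.
        rows.append(w * (u * P[2] - P[0]))
        rows.append(w * (v * P[2] - P[1]))
    A = np.stack(rows)
    # Homogeneous least-squares solution: the right singular vector associated
    # with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenize to a 3D point
```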
d. Physics-based modeling and simulation
After calibration, OpenCap’s web application guides users to record the participant in a standing neutral pose. OpenCap uses the anatomical marker positions estimated from the neutral pose to scale a musculoskeletal model to the participant’s anthropometry using OpenSim’s Scale tool. OpenCap uses the musculoskeletal model from Lai et al.67,78 with modified hip abductor muscle paths according to Uhlrich et al.77. The musculoskeletal model comprises 33 degrees of freedom (pelvis in the ground frame [6], hips [2x3], knees [2x1], ankles [2x2], metatarsophalangeal joints [2x1], lumbar [3], shoulders [2x3], and elbows [2x2]). Note that since no markers are attached to the toes, no reliable estimates of metatarsophalangeal joint kinematics can be obtained. The metatarsophalangeal joint is nevertheless included when generating tracking simulations, since modeling that joint improves knee mechanics in muscle-driven simulations79. The musculoskeletal model is driven by 80 muscles actuating the lower-limb coordinates, 13 ideal torque motors actuating the lumbar, shoulder, and elbow coordinates, and six contact spheres per foot modeling foot-ground contacts80,81. Raasch’s model82,83 is used to describe muscle excitation-activation coupling, and a Hill-type muscle model84,85 is used to describe muscle-tendon dynamics and the dependence of muscle force on muscle fiber length and velocity. Skeletal motion is modeled with Newtonian rigid body dynamics and smooth approximations of compliant Hunt-Crossley foot-ground contacts86,87. The dynamics of the ideal torque motors are described using linear first-order approximations of a time delay81. To increase computational speed, muscle-tendon lengths, velocities, and moment arms are defined as polynomial functions of joint positions and velocities88. The polynomial coefficients are fit to the output of OpenSim’s Muscle Analysis tool applied to 5000 randomly varied lower-limb postures. Muscles are represented by ninth-order or lower polynomials, with RMSEs of muscle-tendon lengths and moment arms below 1.5 mm compared to the original model.
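A minimal sketch of this polynomial fitting step is shown below, assuming sampled joint positions Q and the corresponding muscle-tendon lengths L have been exported from the Muscle Analysis tool; the monomial basis construction and all names are illustrative. Moment arms, being partial derivatives of muscle-tendon length with respect to the joint coordinates, can be obtained analytically from the same coefficients.

```python
import numpy as np
from itertools import combinations_with_replacement

def monomial_basis(Q, order):
    """Evaluate all monomials of the joint coordinates in Q up to a given order."""
    n_samples, n_coords = Q.shape
    cols = [np.ones(n_samples)]                      # constant term
    for degree in range(1, order + 1):
        for idx in combinations_with_replacement(range(n_coords), degree):
            cols.append(np.prod(Q[:, idx], axis=1))  # e.g., q0*q1^2 for idx=(0,1,1)
    return np.stack(cols, axis=1)

# Q: (5000, n_dof) randomly varied lower-limb postures (rad)
# L: (5000,) muscle-tendon lengths from the Muscle Analysis tool (m)
A = monomial_basis(Q, order=9)                       # ninth-order or lower
coefs, *_ = np.linalg.lstsq(A, L, rcond=None)        # least-squares fit
rmse = np.sqrt(np.mean((A @ coefs - L) ** 2))        # accept if below 1.5 mm
```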
After scaling, users can record any movement through OpenCap’s web application. OpenCap then uses the anatomical marker positions estimated from the recorded videos and LSTM network to compute joint kinematics using OpenSim’s Inverse Kinematics tool and the scaled musculoskeletal model. Users can visualize the resulting 3D kinematics in the web application.
Finally, OpenCap can estimate kinetics using muscle-driven tracking simulations of joint kinematics. The tracking simulations are formulated as optimal control problems that aim to identify the muscle excitations that minimize a cost function subject to constraints describing muscle and skeleton dynamics. The cost function $J$ (Equation 1) includes squared terms for muscle activations ($a$) and excitations of the ideal torque motors at the lumbar, shoulder, and elbow joints ($e_{tm}$). It also includes tracking terms (squared differences between simulated and reference data), namely tracking of experimental joint positions ($\tilde{q}$), joint velocities ($\dot{\tilde{q}}$), and joint accelerations ($\ddot{\tilde{q}}$):

$$J = \int_{t_0}^{t_f} \left( w_1 \left\lVert a \right\rVert_2^2 + w_2 \left\lVert e_{tm} \right\rVert_2^2 + w_3 \left\lVert \tilde{q} - q \right\rVert_2^2 + w_4 \left\lVert \dot{\tilde{q}} - \dot{q} \right\rVert_2^2 + w_5 \left\lVert \ddot{\tilde{q}} - \ddot{q} \right\rVert_2^2 \right) dt, \quad (1)$$

where $t_0$ and $t_f$ are the initial and final times, $w_i$ with $i = 1,\dots,5$ are weights, $q$, $\dot{q}$, and $\ddot{q}$ are the simulated joint positions, velocities, and accelerations, and $t$ is time. Experimental joint positions, velocities, and accelerations are low-pass filtered using fourth-order, zero-lag Butterworth filters (default cutoff frequencies are 12 Hz for gait trials and 30 Hz for non-gait trials). Each cost term is scaled with empirically determined weights. To avoid singular arcs89, a penalty function on the remaining control variables is appended to the cost function81,90. Note that the optimal control problem formulation can be tailored to the activity of interest to incorporate activity-based knowledge by, for instance, adjusting the cost function, constraints, and filter settings (see Supplementary Information). The optimal control problems are formulated in Python with CasADi91 (v3.5), using direct collocation and implicit formulations of the muscle and skeleton dynamics81. Algorithmic differentiation is used to compute derivatives90, and IPOPT is used to solve the resulting nonlinear programming problems92 with a convergence tolerance of 1e-4 (all other settings are kept at their defaults).
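To make the structure of these problems concrete, here is a toy, single-degree-of-freedom, torque-driven analogue written with CasADi’s Opti stack: it minimizes a weighted squared effort term plus position, velocity, and acceleration tracking terms, mirroring Equation 1. The real formulation adds muscle activation and contact dynamics, implicit multibody dynamics, and direct collocation; all values below are illustrative.

```python
import casadi as ca
import numpy as np

N, dt = 100, 0.01                        # mesh intervals and time step
t = np.linspace(0.0, N * dt, N + 1)
q_ref = 0.5 * np.sin(2 * np.pi * t)      # reference joint position (rad)
qd_ref = np.gradient(q_ref, dt)          # reference joint velocity
qdd_ref = np.gradient(qd_ref, dt)        # reference joint acceleration
inertia = 1.0                            # illustrative segment inertia (kg*m^2)
w1, w3, w4, w5 = 1e-3, 10.0, 1.0, 1e-2   # illustrative weights (cf. Equation 1)

opti = ca.Opti()
q = opti.variable(N + 1)                 # joint positions
qd = opti.variable(N + 1)                # joint velocities
tau = opti.variable(N)                   # joint torque (stand-in for muscles)

J = 0
for k in range(N):
    qdd_k = tau[k] / inertia             # skeletal dynamics: inertia * qdd = tau
    # Explicit-Euler integration constraints linking states across the mesh
    # (the real pipeline uses direct collocation and implicit dynamics).
    opti.subject_to(q[k + 1] == q[k] + dt * qd[k])
    opti.subject_to(qd[k + 1] == qd[k] + dt * qdd_k)
    # Effort term plus position/velocity/acceleration tracking terms.
    J += dt * (w1 * tau[k] ** 2
               + w3 * (q[k] - q_ref[k]) ** 2
               + w4 * (qd[k] - qd_ref[k]) ** 2
               + w5 * (qdd_k - qdd_ref[k]) ** 2)

opti.minimize(J)
opti.solver("ipopt", {}, {"tol": 1e-4})  # IPOPT with the tolerance noted above
solution = opti.solve()
tau_opt = solution.value(tau)            # optimized torque trajectory
```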
2. Validation
a. Participants and experiment
To validate OpenCap against gold standard kinematic and kinetic measures, we measured ten healthy adults (6 female and 4 male; age = 27.7±3.6 [23–35] years; body mass = 69.2±11.0 [59.0–92.9] kg; height = 1.74±0.11 [1.60–1.96] m; mean ± standard deviation [range]) performing multiple activities in a motion capture laboratory. All participants provided written informed consent before participation. The study protocol was approved and overseen by the Institutional Review Board of Stanford University (IRB00000351). We conducted the experiment in accordance with this approved protocol and relevant guidelines and regulations.
Participants were instructed to perform four activities in a natural (i.e., self-selected) and modified way during data collection: i) walking naturally and with a trunk sway modification (trunk leaned laterally over stance leg), ii) performing five squats naturally and then asymmetrically (reduced force under the left foot), iii) performing five sit-to-stands naturally and then with increased trunk flexion (forward lean when rising), and iv) performing three drop jumps naturally and then asymmetrically (reduced force under the left foot when landing).
b. Experimental data
We measured ground truth kinematics, ground reaction forces, and muscle activity with optical motion capture, force plates, and electromyography. An eight-camera motion capture system (Motion Analysis Corp., Santa Rosa, CA, USA) tracked the positions (100 Hz) of 31 retroreflective markers placed bilaterally on the 2nd and 5th metatarsal heads, calcanei, medial and lateral malleoli, medial and lateral femoral epicondyles, anterior and posterior superior iliac spines, sternoclavicular joints, acromia, medial and lateral epicondyles of the humerus, radial and ulnar styloid processes, and the C7 vertebra. Twenty additional markers were used to aid in segment tracking. Ground reaction forces were synchronously measured (2000 Hz) using three in-ground force plates (Bertec Corp., Columbus, OH, USA). Wireless electromyography electrodes (Delsys Corp., Natick, MA, USA) measured muscle activity (2000 Hz) from the vastus lateralis and medialis (electromyography data from 14 other lower-extremity muscles are shared with the dataset but not analyzed here). We used OpenCap to record video from five smartphones (iPhone 12 Pro, Apple Inc., Cupertino, CA, USA). The phones were positioned 1.5 m off the ground, 3 m from the center of the force plates, and at ±70°, ±45°, and 0°, where 0° faces the participant. Unless otherwise noted, the validation results used only the two ±45° cameras. A precision-manufactured, 720x540 mm checkerboard was used for computing the extrinsic parameters during OpenCap’s camera calibration step.
Marker, force, and electromyography data were filtered using a fourth-order, zero-lag Butterworth filter. Marker and force data were low-pass filtered (walking: 6 Hz, squat: 4 Hz, sit-to-stand: 4 Hz, and drop jump: 30 Hz). These frequencies were selected as the frequency that retained 99.7% of the cumulative signal power of the Fourier-transformed marker trajectories93. Electromyography data were band-pass filtered (30–500 Hz), rectified, and low-pass filtered (6 Hz). Electromyography data were normalized to maximum activation trials, including maximum-height jumps, sprinting, and isometric and isokinetic ankle dorsiflexion, knee flexion, and hip abduction exercises94.
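The EMG conditioning described above can be sketched as follows; a second-order Butterworth passed forward and backward with filtfilt yields the zero-lag, effectively fourth-order response referenced in the text (variable names are illustrative).

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 2000.0  # EMG sampling rate (Hz)

def emg_envelope(raw, fs=FS):
    """Band-pass, rectify, and low-pass an EMG signal into a linear envelope."""
    nyq = fs / 2.0
    # 30-500 Hz band-pass; filtfilt doubles the order and removes phase lag.
    b, a = butter(2, [30.0 / nyq, 500.0 / nyq], btype="bandpass")
    banded = filtfilt(b, a, raw)
    rectified = np.abs(banded)
    # 6 Hz low-pass to extract the envelope.
    b, a = butter(2, 6.0 / nyq, btype="lowpass")
    return filtfilt(b, a, rectified)

# Normalization: divide by the peak envelope observed across the
# maximum-activation trials for the same muscle and participant.
```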
c. Kinematics and kinetics
Laboratory-based (later referred to as Mocap) kinematic and kinetic data were estimated from measured marker and force plate data using OpenSim 4.3. We used the same modeling and simulation pipeline as OpenCap to scale the musculoskeletal models and estimate joint kinematics from measured marker data (see Methods: Design: Physics-based modeling and simulation). Joint kinetics were then estimated from joint kinematics (filtered at the same frequencies as the force plate data) and force plate data using OpenSim’s Inverse Dynamics tool.
OpenCap kinematic and kinetic data were estimated using the two ±45° cameras and the HRNet pose detection algorithm. This setup combines simplicity, performance, and a permissive open-source software license. It was selected after conducting a sensitivity analysis studying the effect of using different camera configurations (two, three, and five cameras) and pose detection algorithms (OpenPose with default settings, OpenPose with high accuracy settings, and HRNet) on predicted anatomical marker positions and joint kinematics. See Supplementary Information and Methods: Validation: Validation Results for details about the sensitivity analysis and pose detection algorithm settings.
d. Error analysis
We evaluated the performance of OpenCap against Mocap by quantifying errors in anatomical marker positions, joint kinematics, ground reaction forces, and joint kinetics.
We quantified errors in 3D anatomical marker positions using mean per-marker error (Euclidean distance). We report errors for 17 anatomical markers (the C7 vertebra and the left and right acromia, anterior and posterior superior iliac spines, medial and lateral femoral epicondyles, medial and lateral malleoli, calcanei, and second and fifth metatarsal heads). Prior to error analysis, we synchronized and aligned Mocap and OpenCap position data by removing the time delay that minimized the mean difference between marker positions (averaged over all markers and time steps), then subtracting this average position offset from the OpenCap positions.
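A sketch of this synchronization and alignment follows, assuming mocap and opencap are (n_frames, n_markers, 3) arrays resampled to a common rate; the brute-force shift search and names are illustrative (a careful implementation would trim edge frames rather than wrap them).

```python
import numpy as np

def align_to_mocap(mocap, opencap, max_shift=60):
    """Remove the time delay and mean 3D offset between the two systems."""
    best_shift, best_err = 0, np.inf
    for shift in range(-max_shift, max_shift + 1):
        shifted = np.roll(opencap, shift, axis=0)
        # Mean Euclidean distance over all markers and time steps.
        err = np.linalg.norm(mocap - shifted, axis=2).mean()
        if err < best_err:
            best_shift, best_err = shift, err
    aligned = np.roll(opencap, best_shift, axis=0)
    # Subtract the average position offset before computing marker errors.
    aligned = aligned - (aligned - mocap).mean(axis=(0, 1))
    return aligned, best_shift
```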
We quantified errors in 3D joint kinematics using MAE. We report errors for 18 rotational degrees of freedom (pelvis rotations [3], hips [2x3], knees [2x1], ankles [2x2], and lumbar [3]) and three translational degrees of freedom (pelvis translations).
We quantified errors in 3D ground reaction forces using MAE normalized by bodyweight. We also expressed errors as a percentage of the range of the measured signal over each trial. Prior to quantifying errors, we filtered ground reaction forces from OpenCap using the same filters as for the measured ground reaction forces (see Methods: Validation: Experimental data).
We quantified errors in 3D joint kinetics using MAE normalized by bodyweight times body height. We report errors for 15 rotational degrees of freedom (hips [2x3], knees [2x1], ankles [2x2], and lumbar [3]). It is important to note that while joint moments estimated from inverse dynamics are considered the gold standard, they include non-physical pelvis residual forces and moments to compensate for the inconsistency between model-based kinematics and measured ground reaction forces. In contrast, muscle-driven simulations are dynamically consistent and do not include pelvis residuals. Thus, the differences between inverse-dynamics-based and OpenCap-estimated joint moments are not entirely attributable to error in the OpenCap pipeline.
e. Validation Results
The marker error, averaged across markers and activities, was 32 mm using the two-camera HRNet setup. Our sensitivity analysis demonstrated that OpenCap’s accuracy remained consistent across different pose detectors and additional cameras. Marker error was 31 and 35 mm when using OpenPose with high accuracy and default settings, respectively. Using three cameras did not improve accuracy, but using five cameras mildly reduced error (29 mm for HRNet). Marker error was larger for the upper extremity (39 mm) and pelvis (38 mm) than for the lower extremity (27 mm) using the two-camera HRNet setup. Detailed results of the sensitivity analyses are presented in Table S1 of Supplementary Information.
The kinematic MAE for the two-camera HRNet setup, averaged across degrees of freedom and activities, was 4.5° (range=1.7–10.3°) and 12.3 mm (range=5.0–20.3 mm) for the 18 rotational and three translational degrees of freedom, respectively (Table 1). Our sensitivity analysis showed that kinematic errors were similar when using the high accuracy and default OpenPose settings (4.3° and 4.7°, respectively), and when adding cameras (improvement of less than 0.3°). We also investigated the effect of using video keypoints instead of anatomical markers to estimate joint kinematics. Kinematic errors were 3.4° worse on average for the two-camera HRNet setup when using the video keypoints instead of the anatomical markers. This was primarily driven by 12.1–39.2° errors at the lumbar extension, pelvic tilt, and hip flexion degrees of freedom, owing to the limited information in the video keypoint set for distinguishing between rotations at these joints. Detailed results of the sensitivity analyses are presented in Table S2 of Supplementary Information. Average kinematic waveforms estimated using OpenCap and Mocap are presented in Figures S1–S4 of Supplementary Information.
The ground reaction force MAE, averaged across directions and activities, was 6.2% bodyweight. It was 11.4% bodyweight in the vertical direction, 3.5% bodyweight in the anterior-posterior direction, and 3.8% bodyweight in the medio-lateral direction (Table 1). The joint moment MAE, averaged across degrees of freedom and activities, was 1.2% bodyweight*height (Table 1). Detailed results are presented in Tables S3–S4 of Supplementary Information, and average ground reaction force and joint moment waveforms estimated using OpenCap and Mocap are presented in Figures S5–S12 of Supplementary Information.
3. Applications and Statistics
We assessed OpenCap’s ability to estimate kinetic measures related to musculoskeletal pathology in three applications that represent clinical use cases. Unless otherwise noted, these analyses were performed on the 10-subject dataset described in Methods: Validation. All statistical analyses were performed in Python (v3.7.10) using the scipy95 (v1.5.4), statsmodels96 (v0.13.2), and pingouin97 (v0.5.2) packages. We compared conditions within and between measurement modalities using r2, MAE, two-sided paired t tests (alpha=.05), and two-sided Wilcoxon signed rank tests. Prior to conducting a t test, we tested for normality using a Shapiro-Wilk test, and we used a Wilcoxon signed rank test to compare non-normally distributed data. To prevent inflated Type 1 error from multiple comparisons, we report corrected P-values after controlling for the false discovery rate using the Benjamini-Hochberg procedure98. We evaluated the post-hoc power of t tests and Wilcoxon signed rank tests using the sample size, alpha=.05, and the observed effect size. We evaluated performance on classification tasks using AUC and binary classification accuracy at the threshold that maximized the true positive rate minus the false positive rate. Unless otherwise noted, values are reported as mean ± standard deviation.
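A condensed sketch of these comparisons is shown below; scipy and statsmodels are named in the text, while scikit-learn’s ROC utilities are used here purely for illustration, and condition_pairs, labels, and scores are hypothetical inputs.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests
from sklearn.metrics import roc_auc_score, roc_curve

def paired_p_value(x, y, alpha=0.05):
    """Two-sided paired t test if the differences pass Shapiro-Wilk, else Wilcoxon."""
    diffs = np.asarray(x) - np.asarray(y)
    _, p_normal = stats.shapiro(diffs)
    if p_normal > alpha:
        return stats.ttest_rel(x, y).pvalue
    return stats.wilcoxon(x, y).pvalue

# Benjamini-Hochberg false discovery rate correction across comparisons.
raw_p = [paired_p_value(x, y) for x, y in condition_pairs]  # hypothetical pairs
corrected_p = multipletests(raw_p, alpha=0.05, method="fdr_bh")[1]

# Classification: AUC, and accuracy at the threshold maximizing TPR - FPR.
fpr, tpr, thresholds = roc_curve(labels, scores)            # hypothetical arrays
auc = roc_auc_score(labels, scores)
best_threshold = thresholds[np.argmax(tpr - fpr)]
accuracy = np.mean((scores >= best_threshold) == labels)
```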
In the first application, we assessed the peak knee adduction moment and peak medial knee contact force during walking. Participants walked naturally and with a trunk sway modification, which typically alters medial knee loading31. Participants walked with 15° more trunk sway on average during the trunk sway condition than during the natural condition, which is similar to the 10–13° of trunk sway reported in gait modification studies31,99. We computed peaks of both loading measures during the first half of the stance phase using the Joint Reaction Analysis tool in OpenSim (see Supplementary Information for details), which uses kinematics, ground reaction forces, and muscle forces as inputs. For OpenCap, we used the outputs of the muscle-driven dynamic simulation for this analysis, and for Mocap, we used the OpenSim Static Optimization tool to estimate muscle forces. We first determined how accurately OpenCap could estimate the peak knee adduction moment and how it varies among gait patterns and individuals. For each walking condition, we averaged the peak knee adduction moment across the three trials for each individual and compared between OpenCap and Mocap using r2 and MAE. We then determined whether OpenCap could detect group changes in both loading measures from a gait modification similarly to Mocap. For each measurement modality, we used either a two-sided paired t test or a Wilcoxon signed rank test to evaluate the changes from baseline, and we computed the post-hoc power of each test. Finally, we evaluated whether OpenCap correctly identified an increase or decrease in peak knee loading measures for each individual, using the Mocap estimate as ground truth.
In the second application, we evaluated lower-extremity joint moments while rising from a 40 cm chair. Participants stood naturally and with increased trunk flexion, which can shift the muscle force demand from the knee extensors to the hip extensors and ankle plantarflexors39. During the increased trunk flexion condition, participants stood with 42±8° of trunk flexion, which is similar to the 47° reported in a cohort of older adults with functional limitations100. For three repetitions per condition, we averaged the hip extension, knee extension, and ankle plantarflexion moments over the rising phase, then averaged these values across repetitions. To evaluate OpenCap’s ability to detect group changes between conditions, we compared the moment changes from the natural to the increased trunk flexion condition using two-sided paired t tests for both OpenCap and Mocap. We then conducted a post-hoc power analysis for each measurement modality. To determine OpenCap’s ability to identify individuals with low knee extensor moments during this motion, we compared each participant’s average knee extension moment for each condition between OpenCap and Mocap using r2 and MAE.
In the third application, we assessed the between-limb symmetry of knee extensor muscle activation while squatting. Participants squatted naturally and asymmetrically, which can elicit asymmetrical knee extensor force generation7. We first performed an in-lab experiment to compare peak vasti muscle (knee extensors) activation measured with electromyography to peak activation estimated with OpenCap and Mocap. Since there is no change in muscle strength between these conditions, a change in muscle activation between conditions is a more easily measured surrogate for a change in muscle force. For OpenCap, muscle activations were outputs of the muscle-driven tracking simulations, whereas for Mocap, muscle activations were estimated using OpenSim’s Static Optimization tool. We first averaged the activation of the vastus medialis and vastus lateralis, then extracted the peak value over a squat (standing to standing again). We calculated the peak vasti activation symmetry index between the left and right leg (Equation 2) and averaged it across three repetitions in each condition:

$$\text{symmetry index} = \frac{a_{uninvolved}}{a_{involved}}, \quad (2)$$

where $a_{involved}$ is the peak activation of the left vasti (reduced force under the left foot during the asymmetric condition) and $a_{uninvolved}$ is the peak activation of the right vasti. The symmetry index is larger than one when the left peak vasti activation is lower than the right peak vasti activation, which would be expected in the asymmetric condition. On average, during the asymmetric condition compared to the natural condition, our participants squatted with a 0.53±0.32 greater symmetry index measured by electromyography; this is similar to the 0.51 greater asymmetry in vasti strength reported in individuals one month after a total knee replacement101. We determined OpenCap’s ability to classify symmetric vs. asymmetric squats using AUC and classification accuracy, with ground truth symmetry labels determined from electromyography based on a symmetry index threshold (1.15) that predicts functional deficits following anterior cruciate ligament surgery41. We also computed the AUC and accuracy for simulated muscle activations from Mocap.
Finally, we performed a field study where a clinician used OpenCap to evaluate knee extension moment symmetry in 100 individuals outside of the laboratory (41 female and 59 male; age = 29.7±9.2 [18–67] years; body mass = 69.0±11.5 [50–109] kg; height = 1.74±0.09 [1.45–1.97] m; mean ± standard deviation [range]). We used a 210x175 mm checkerboard printed on A4 paper and mounted to plexiglass for camera calibration. Participants performed natural squats and asymmetric squats. All participants provided written informed consent before participation. The study protocol was approved and overseen by the Institutional Review Board of Stanford University (IRB00000351). We conducted the experiment in accordance with this approved protocol and relevant guidelines and regulations. First, we evaluated OpenCap’s ability to detect a squat with a between-limb asymmetry in the peak knee extension moment. For each participant, we computed the peak knee extension moment for three repetitions per condition, computed the peak knee extension moment symmetry index (Equation 2), and averaged across the repetitions in each condition. To determine the classification performance, we computed AUC and accuracy, with ground truth labels being the instructed condition (i.e., natural [assumed to be symmetric] vs. asymmetric squats). Second, we evaluated OpenCap’s ability to detect between-condition changes in knee extension moment symmetry, simulating the ability to detect improvements in symmetry that would be expected to occur over time. To simulate improved symmetry, we subtracted each participant’s symmetry index averaged over the repetitions of the natural condition from their symmetry index averaged over repetitions of the asymmetric condition (a positive value indicates an improvement in symmetry). To simulate unchanged symmetry, we averaged the difference in symmetry index between each combination of the asymmetric squat repetitions. We computed the AUC and accuracy of this change in symmetry measure using the known class (i.e., improved symmetry or unchanged symmetry) as ground truth.
Author contributions
S.D.U, A.F., Ł.K., J.L.H., and S.L.D. conceptualized the software and designed the study; S.D.U., A.F., and Ł.K. developed the software with input from M.K. and A.S.C.; S.D.U., A.F., and J.M. collected the data; S.D.U., A.F., and Ł.K. analyzed the data with input from A.C., J.L.H., and S.L.D; S.D.U. and A.F. wrote the manuscript with input from Ł.K., J.M., A.S.C., J.L.H., and S.L.D.; all authors revised the manuscript and approved of the final version.
Competing interests
Stanford University has filed for a patent related to the work, titled “OpenCap: open-source software for estimating the kinematics and kinetics of human movement from smartphone videos” on behalf of A.F., S.D.U., Ł.K., J.L.H., and S.L.D. These authors have no other competing interests. J.M., M.K., and A.S.C. have no competing interests.
Code availability
The source code will be made available at https://github.com/stanfordnmbl/opencap, and the software can be used through our web and mobile applications by visiting https://opencap.ai. These applications and the cloud computing that enables them will be freely available to the research community for the foreseeable future.
Data availability
The video and laboratory data are available at https://simtk.org/projects/opencap.
Acknowledgements
This work was supported by The Wu Tsai Human Performance Alliance, Philips Healthcare, and the National Institutes of Health (grants 1P41EB027060-01A1 and 1R01AR077604-01).