Abstract
OBJECTIVE: To develop a clinical prediction model for diagnosing mild stroke/transient ischemic attack (TIA) in first-contact patient settings. DESIGN: Retrospective study utilizing logistic regression modelling of patient clinical symptoms collected from patient chart histories and referral data. SETTING: Regional fast-track TIA clinic on Vancouver Island, Canada, accepting referrals from emergency departments (ED) and general practice (GP). PARTICIPANTS: Model development: 4187 ED- and GP-referred patients from 2008–2011 who were assessed at the TIA clinic. Temporal hold-out validation: 1953 ED- and GP-referred patients from 2012–2013 assessed at the same clinic. OUTCOMES: Diagnosis of mild stroke/TIA by clinic neurologists. RESULTS: 123 candidate predictors were assessed using univariate feature selection, culminating in the selection of 50 clinical features. Post-hoc investigation of the selected predictors revealed 12 clinically relevant interaction terms. On the temporal hold-out validation set the model achieved a sensitivity/specificity of 71.8%/72.8% using the ROC01 cutpoint (≥ 0.662), and an AUC of 79.9% (95% CI, 77.9%–81.9%). In comparison, the ABCD2 score (≥ 4) achieved a sensitivity/specificity of 70.4%/54.5% and an AUC of 67.5% (95% CI, 65.2%–69.9%). The logistic regression model demonstrated good calibration on the hold-out set (β0 = −0.257; βlinear = 1.047).
CONCLUSIONS: The developed diagnostic model performs better than the ABCD2 score at diagnosing mild stroke/TIA on the basis of clinical symptoms. The model has the potential to replace the prognostic ABCD2 score in the diagnostic medical contexts in which that score is currently used, such as patient triage.
Introduction
Stroke is one of the leading causes of death and long-term disability in North America.1,2 Many strokes are preceded by transient ischemic attacks (TIA), or “mini-strokes”.3 The continuum from TIA to ischemic stroke has been referred to as acute cerebrovascular syndrome.4 It is estimated that patients diagnosed with a TIA have a 10% risk of recurrent stroke within 90 days (5% within 2 days) if untreated, with 50% of recurrent strokes occurring within the first 48 hours.1 Early treatment is associated with better patient outcomes, and is the target of many stroke best practice guidelines.1,2
In response to the growing awareness of the importance of TIA management for stroke prevention, Canadian stroke best practices recommend that all patients suspected of a TIA be assessed by stroke specialists within 48 hours of symptom onset.1,2 TIA clinics are frequently overburdened by referral volumes, making the urgent triage of such high-priority patients difficult. Successful adoption of such recommendations, therefore, rests on the ability of TIA clinics to more accurately triage patients.
Diagnosis of ACVS can be challenging for TIA clinic staff, as ACVS phenotypes can vary widely between patients depending upon the brain region affected. Compounding this difficulty, many noncerebrovascular conditions, such as acute migraine, share many of the same features as ACVS. Such noncerebrovascular conditions, or mimic conditions,5 make it difficult for clinic staff to presumptively diagnose referrals in order to prioritize true ACVS referrals for clinic intake. Approximately 30–60% of referrals to specialized TIA/stroke units are ultimately diagnosed as mimic conditions.6–12 Mimic referrals overburden the limited capacity and resources of specialized TIA/stroke units, which increases patient wait times and delays the timely arrival of ACVS patients to these units. Assisting TIA clinic staff in differentiating ACVS from mimic conditions could serve as a first step toward improving TIA management.
Several studies have examined the viability of using the ABCD2 score to diagnose ACVS.8,9 Originally developed as a prognostic tool for predicting risk of recurrent stroke after a TIA, the ABCD2 score has been found to be moderately diagnostic of ACVS.8 This is due in large part to the presence of several clinical features (e.g., age, blood pressure, clinical presentation) within the ABCD2 score that are also diagnostic of ACVS. Previous studies9,13 have reported low sensitivities (60.3–67%, cutpoint ≥ 4) when the score is used to diagnose ACVS, which would appear to limit its clinical usefulness. This is unfortunate, as approximately one-third of TIA clinics triage referrals on the basis of the ABCD2 score.14
Our research group has been working on a clinical prediction rule (CPR) to differentiate ACVS from mimic patients on the basis of patients’ history of presenting symptoms. Such a CPR would have value in the triaging of patients referred to TIA clinics. A deficiency of the prognostic ABCD2 score as a diagnostic tool is that, by design, it contains no mimic predictors: in prognostic use, the diagnosis of ACVS has already been established. When used as a diagnostic tool, the ABCD2 score structurally provides only a presumptive diagnosis of ACVS; all other diagnoses are implicitly defined as not ACVS. In contrast, we approached the problem of ACVS recognition as a matter of differential diagnosis. We appreciated that adopting such a framework would necessarily result in a model containing a large number of clinical features. This design decision runs counter to the principles of simplicity and brevity underlying most clinical support tools currently used in clinical practice. Instead, our aim when constructing our model was to focus exclusively on the task of differentiating ACVS and mimic conditions. To the best of our knowledge, no studies currently exist in the literature that approach the issue of ACVS recognition as we have.
To develop our CPR, we assembled a dataset containing key clinical predictors previously identified in the literature, and informed by current best practice, as symptoms of ACVS and mimic conditions. From this dataset, we used logistic regression modelling guided by clinical insight to construct our CPR. Finally, we validated the model on a temporal hold-out dataset. The contribution of our study to the ACVS literature is a CPR specifically designed to differentiate ACVS from mimic conditions.
Methods
Settings and Participants
We utilized a retrospective study design, constructing our study dataset from patient clinical records of the Stroke Rapid Assessment Unit (SRAU), Victoria, B.C., from 2008–2013. The SRAU is a specialized outpatient stroke unit servicing most of Vancouver Island (pop. 759,366). The SRAU receives referrals from emergency departments, general practice, and specialists (e.g., ophthalmologists). Median wait time from referral to unit arrival was 4.397 days (IQR = 1.864–9.523). Only patients’ first assessments at the SRAU during this period were included in the dataset (N = 7823). The Health Research Ethics Board of Island Health (one of five provincial health authorities in British Columbia) approved this study. Funding for this study was provided by the Heart and Stroke Foundation, Genome British Columbia, Genome Canada, and the Canadian Institutes of Health Research.
Data Preparation and Missing Data
The dataset consisted of patients’ reported event histories, risk factors (age, sex, hypertension, diabetes, etc.), ABCD2 scores (derived by adding patient diabetes status to the ABCD score recorded in the patients’ charts), and final diagnoses. Event histories consisted of two free text fields (chief complaint and history of presenting illness). These two free text fields (henceforth, history fields) consist of descriptions of the patients’ events as recounted by the patients. Effort is made by SRAU stroke nurses to record the patients’ histories using the same language and words provided by the patient. Patient histories are recorded prior to a complete neurological assessment and final diagnosis by SRAU neurologists.
History fields were codified by a simple text-mining procedure that used pattern matching, in conjunction with NegEx15 negation detection, to map keywords and phrases onto clinical symptoms. Processing of the history fields is described in full in Sedghi et al., 2015.16 The clinical symptoms to be codified were selected in advance of statistical analyses. Symptoms were chosen for codification if they were either (a) well referenced in the ACVS literature (e.g., vertigo, unsteadiness), or (b) medical concepts that appeared in the patients’ history fields (i.e., data-driven symptom selection; e.g., Electric sensation, Emotionally labile). Absence of the relevant keywords and phrases in a record was coded as absence of the symptom.
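The general approach, keyword matching with a NegEx-style negation check, can be illustrated with a minimal Python sketch. The keyword map, negation triggers, and 20-character negation window below are hypothetical stand-ins, not the actual dictionaries and rules described in Sedghi et al., 2015, and the study's real pipeline was more sophisticated:

```python
import re

# Hypothetical keyword map (phrase -> symptom label) and negation triggers;
# the real lexicons used by the SRAU pipeline are not reproduced here.
KEYWORDS = {
    "vertigo": "Vertigo",
    "unsteady": "Unsteadiness",
    "electric sensation": "Electric sensation",
}
NEGATION_TRIGGERS = ["no ", "denies ", "without "]

def codify_history(text):
    """Map a free-text history field onto symptom indicators
    (1 = present, 0 = absent).

    A keyword preceded (within a short window) by a negation trigger
    is treated as absent, mimicking NegEx-style negation detection.
    Absence of a keyword is coded as absence of the symptom.
    """
    text = text.lower()
    symptoms = {label: 0 for label in KEYWORDS.values()}
    for phrase, label in KEYWORDS.items():
        for m in re.finditer(re.escape(phrase), text):
            window = text[max(0, m.start() - 20):m.start()]
            negated = any(trig in window for trig in NEGATION_TRIGGERS)
            if not negated:
                symptoms[label] = 1
    return symptoms
```

For example, "Patient denies vertigo but felt unsteady" codes Vertigo as absent (negated) and Unsteadiness as present.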
The complete initial dataset contained a total of 121 variables (diagnosis, risk factors, ABCD2 score, and codified history fields). The only continuous variables in the dataset were age (years), and systolic and diastolic blood pressure (mmHg). All other predictors were coded as: absent/no = 0, present/yes = 1; for patient sex, female = 0 and male = 1; for ABCD2 = 0, 1, 2, 3, 4, 5, 6, 7, or missing.
Patient diagnoses were made by unit neurologists during standard-of-care neurological assessment. The diagnosis field extracted from the patient records consisted of nine possible classifications: (a) Stroke, (b) Stroke–Probable, (c) Stroke–Possible, (d) TIA, (e) TIA–Probable, (f) TIA–Possible, (g) Mimic, (h) Other, and (i) Unknown. Mimic conditions5 have similar clinical presentations/symptoms as TIA/mild stroke, but are non-cerebrovascular in origin (e.g., migraine, seizure). The Other classification represents either non-ischemic strokes (e.g., hemorrhagic stroke) or non-cerebral ischemic events (e.g., cranial nerve ischemia). Unknown represents situations in which a diagnosis could not be determined (e.g., not yet diagnosed). Cases in the dataset were restricted to patients for whom either a clinical or radiological diagnosis (e.g., brain imaging) could be determined. Cases with classifications of Other or Unknown were omitted from analysis due to the ambiguity surrounding the diagnosis (see Figure 1). The remaining classifications were dichotomized as either ACVS (incl. Stroke, Stroke–Probable, Stroke–Possible, TIA, TIA–Probable, TIA–Possible) or Mimic for the purposes of these analyses.
Figure 1. Participant flow diagram detailing missing data by variable and data set.
Prior to analysis, the variables comprising the initial dataset were reviewed for possible clinical redundancies. After review, it was decided that several concepts should be grouped together as composite variables and added to the dataset. Syncope and LOC (Loss of Consciousness) were combined with a logical OR operator to create Syncope or Loss of consciousness (LOC)* (henceforth, an asterisk (*) denotes a composite variable); i.e., Syncope or Loss of consciousness (LOC)* is coded as present if the patient reported Syncope, Loss of Consciousness, or both. Similarly, the variables Anxiety and Stress were combined to create the single variable Anxiety or stress*. The variables Visual field, Left visual field, Left eye visual field, Right visual field, and Right eye visual field were combined to create Visual field deficit (either side)*. Finally, Involuntary movement, Shaking, and Tremor were combined to create a single composite variable, Involuntary movement, shaking, or tremor*.
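The composite-variable construction described above is a logical OR over the constituent indicators. As a minimal Python sketch (the record layout and exact field keys are assumptions for illustration; variable names follow the labels in the text):

```python
def add_composites(record):
    """Derive the four composite (*) variables from a coded patient record.

    Each composite is a logical OR of its constituent indicators
    (1 = present, 0 = absent); the constituents remain in the record.
    """
    rec = dict(record)
    rec["Syncope or Loss of consciousness (LOC)*"] = int(
        rec["Syncope"] or rec["LOC"])
    rec["Anxiety or stress*"] = int(rec["Anxiety"] or rec["Stress"])
    rec["Visual field deficit (either side)*"] = int(any(
        rec[v] for v in ["Visual field", "Left visual field",
                         "Left eye visual field", "Right visual field",
                         "Right eye visual field"]))
    rec["Involuntary movement, shaking, or tremor*"] = int(any(
        rec[v] for v in ["Involuntary movement", "Shaking", "Tremor"]))
    return rec
```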
Treatment of Missing Data
Prior to analysis, the data were examined for missing values and outliers (see Figure 1). Missing values were defined as any NULL values observed in the data extracted from the SRAU EMR database. The only outliers observed were two instances in which a diastolic blood pressure in excess of 700 mmHg was recorded. The only variables containing missing data were patient sex, systolic and diastolic blood pressure, and ABCD2 score. For these variables we assumed values were either not measured or not recorded, and thus were missing completely at random. With the exception of the ABCD2 score, listwise deletion was employed for these variables prior to data analysis. In the case of the ABCD2 score, listwise deletion was applied after feature selection was performed.
Training and Test Datasets
The full clinical dataset (N = 7823) was divided into training (2008–2011, N = 5096) and test (2012–2013, N = 2727) datasets. Figure 1 provides a summary of how patients from the training and test datasets were excluded due to missing data and diagnosis.
An inter-rater reliability analysis was conducted on the training dataset to compare the accuracy of the text-mining procedure with that of manual coding of the patient history fields. Due to the size of the training dataset (N = 5096), only 1% (N = 52) of the total cases were analyzed, with random case selection equally distributed across the four years (2008–2011, N = 13 cases/year). For each of the clinical variables coded from the history fields, Gwet’s AC1 was computed.17 Median Gwet’s AC1 was 0.98 (IQR = 0.957–1).
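For two raters and binary codes, Gwet's AC1 has a closed form: AC1 = (Pa − Pe)/(1 − Pe), where Pa is the observed agreement and chance agreement is Pe = 2q(1 − q), with q the mean of the two raters' positive-classification proportions. The study's analyses were run in R; the Python version below is an illustrative standalone implementation:

```python
def gwet_ac1(rater_a, rater_b):
    """Gwet's AC1 chance-corrected agreement for two raters, binary codes.

    rater_a, rater_b: equal-length sequences of 0/1 codes.
    """
    n = len(rater_a)
    # Observed agreement: proportion of cases where the raters match.
    pa = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Mean proportion of positive classifications across the two raters.
    q = (sum(rater_a) + sum(rater_b)) / (2 * n)
    # AC1 chance agreement.
    pe = 2 * q * (1 - q)
    return (pa - pe) / (1 - pe)
```

Unlike Cohen's Kappa, AC1 remains stable when codes are highly skewed, which matters here because many symptoms are rare.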
Table 1 displays the demographic characteristics of the final training (2008–2011, N = 4187) and test (2012–2013, N = 2038) datasets after missing data was addressed, with the exception of the ABCD2 score.
Table 1. Demographic characteristics of training and test data.
Feature Selection
The process of feature selection was conducted on the final training dataset (N = 4187). Each feature was considered as a univariate predictor. Feature selection was filter based18 and utilized statistical association in conjunction with clinical expert review.18–20 For each of the 123 candidate predictors (i.e., 121 variables in the dataset, less the diagnosis and ABCD2 score, plus the 4 composite variables; 121 − 2 + 4 = 123), the bivariate association with the diagnosis variable was computed. Odds ratios were computed by regressing the diagnosis on each variable independently, using a logistic regression model with the variable under consideration as the sole predictor. The Wald p-value was used as the statistical test of significance. Alpha was set at 0.05 for each candidate predictor. No effort was made to control for multiple comparisons;21–23 that is, the decision as to a predictor’s spuriousness was made on the basis of domain knowledge (i.e., support for the predictor in the ACVS literature).
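For a binary predictor, the univariate logistic regression odds ratio reduces to the 2×2-table cross-product ratio, and the Wald standard error of the log odds ratio is sqrt(1/a + 1/b + 1/c + 1/d). The sketch below illustrates the screening computation in Python (the study's analyses were performed in R; this closed form applies only to binary predictors with all four cells nonzero, not to age or blood pressure):

```python
import math

def univariate_screen(x, y, alpha=0.05):
    """Bivariate association between one binary predictor x and the
    dichotomized diagnosis y (1 = ACVS, 0 = Mimic).

    Returns the univariate logistic regression odds ratio (equal to the
    2x2 cross-product ratio for a binary predictor) and the two-sided
    Wald p-value. Requires all four table cells to be nonzero.
    """
    a = sum(1 for xi, yi in zip(x, y) if xi == 1 and yi == 1)
    b = sum(1 for xi, yi in zip(x, y) if xi == 1 and yi == 0)
    c = sum(1 for xi, yi in zip(x, y) if xi == 0 and yi == 1)
    d = sum(1 for xi, yi in zip(x, y) if xi == 0 and yi == 0)
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    z = math.log(or_) / se                          # Wald z statistic
    p = math.erfc(abs(z) / math.sqrt(2))            # two-sided normal p
    return {"OR": or_, "p": p, "significant": p < alpha}
```

This also shows why odds ratios could not be computed for very rare symptoms (see Results): a zero cell makes the cross-product ratio undefined.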
Feature selection followed two stages. In the first stage, variables with a significant p-value (α = 0.05) were preferentially (i.e., favourably) reviewed for selection on the basis of their clinical relevance. Two criteria disqualified significant variables from selection: (a) the variable was clinically ill-defined (i.e., non-descriptive), or (b) the variable clinically overlapped another significant variable that was better defined and clinically broader in application.
In the second stage, variables with non-significant associations with the diagnosis were examined. Only two criteria were used to qualify these variables for selection. First, non-significant variables could be selected to complete a related set of clinically relevant variables if other members of the set were statistically significant; this review resulted in the inclusion of only the variables associated with numbness and weakness. These variable sets cover multiple body regions (left and right side of the face, arm, and leg, along with an overall indication of numbness or weakness, respectively): if one body region was significant, the other regions were selected. The second criterion was that non-significant variables could be included if their clinical relevance was already well established in the literature, and the exclusion of the variable would be clinically questionable.
Model Construction
From the set of features chosen through this two-stage process, our next step was to include all selected features in a logistic regression model. Continuous predictors were standardized before model fitting using the following centers/standard deviations so as to allow for comparisons with the ABCD2 score: (a) age: 60 years of age / 10 years SD; (b) systolic blood pressure: 140 mmHg / 10 mmHg SD; and (c) diastolic blood pressure: 90 mmHg / 10 mmHg SD. Binary predictors were not standardized. Post-hoc interaction terms were then evaluated to explore how clinically meaningful interactions influenced model fit. Evaluation of interaction terms was on the basis of both statistical significance and clinical relevance. The full clinical model then comprised all selected variables plus a small number of clinically meaningful and significant interactions.
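The standardization step uses fixed, clinically chosen centers and scales rather than the sample means and standard deviations. A minimal Python sketch (the field keys are hypothetical; only the three continuous predictors are transformed, and binary predictors pass through unchanged):

```python
# Fixed centers and scales from the text: age 60 y / 10 y SD,
# systolic BP 140 mmHg / 10 mmHg SD, diastolic BP 90 mmHg / 10 mmHg SD.
SCALING = {
    "age": (60, 10),
    "systolic_bp": (140, 10),
    "diastolic_bp": (90, 10),
}

def standardize(record):
    """Standardize the continuous predictors of one patient record
    as (value - center) / scale; all other fields are left as-is."""
    out = dict(record)
    for name, (center, scale) in SCALING.items():
        out[name] = (record[name] - center) / scale
    return out
```

Using fixed centers keeps the coefficients interpretable in the same clinically familiar units (decades of age, 10 mmHg of blood pressure) on which the ABCD2 thresholds are defined.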
For the full clinical model, three different cutpoints were calculated on the training dataset (2008–2011): (a) maximum efficiency24—the cutpoint that maximizes accuracy; (b) maximum Kappa24,25—the cutpoint that maximizes inter-rater agreement (Cohen’s Kappa) between the model’s predictions and the true class labels; and (c) ROC0125—the cutpoint that minimizes the distance between the point (0,1) on the ROC plot and the ROC curve.
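All three cutpoints can be found by scanning the candidate thresholds. The study used the OptimalCutpoints R package; the Python sketch below is an illustrative re-implementation of the three criteria (ties are resolved in favour of the first threshold scanned, which may differ from the package's tie-breaking):

```python
import math

def pick_cutpoints(probs, labels):
    """Return the three training-set cutpoints: maximum efficiency
    (accuracy), maximum Cohen's Kappa, and ROC01 (closest point on the
    ROC curve to (0, 1)). labels are 0/1; probs are predicted P(ACVS)."""
    n = len(labels)
    pos = sum(labels)
    neg = n - pos
    best = {"efficiency": (None, -1.0), "kappa": (None, -2.0),
            "roc01": (None, 2.0)}
    for c in sorted(set(probs)):
        tp = sum(1 for p, y in zip(probs, labels) if p >= c and y == 1)
        fp = sum(1 for p, y in zip(probs, labels) if p >= c and y == 0)
        fn, tn = pos - tp, neg - fp
        acc = (tp + tn) / n
        # Cohen's Kappa: observed vs chance agreement of predictions/labels.
        pe = ((tp + fp) * pos + (fn + tn) * neg) / n ** 2
        kappa = (acc - pe) / (1 - pe) if pe < 1 else 0.0
        # Euclidean distance from (FPR, TPR) to the ideal corner (0, 1).
        sens, spec = tp / pos, tn / neg
        dist = math.hypot(1 - sens, 1 - spec)
        if acc > best["efficiency"][1]:
            best["efficiency"] = (c, acc)
        if kappa > best["kappa"][1]:
            best["kappa"] = (c, kappa)
        if dist < best["roc01"][1]:
            best["roc01"] = (c, dist)
    return {k: v[0] for k, v in best.items()}
```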
After fitting of the full clinical model was completed, we created an extreme phenotype variant of the model. In this variant, patients with clinical diagnoses of either TIA–Possible or Stroke–Possible were excluded from the training dataset, and the model was refitted (N = 3722). This model was created to examine the influence of these edge cases on the model’s performance. Cutpoints for this model were calculated as previously described.
Once model performance had been examined, final versions of the full clinical (N = 6225) and extreme phenotype (N = 5510) models were constructed by refitting them on the combined training and test datasets, and cutpoints were determined.
Although we did not conduct a formal power analysis, we considered the large number of records in the training set (N = 4187) sufficient to pursue a multivariable model. A commonly used guideline26 indicates that an effective sample size of 10 ‘events’ per parameter is adequate for a logistic regression model. Our training set contains 1486 ‘events’ (i.e., the smaller of the number of ACVS cases and the number of Mimic cases).
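The guideline can be checked directly. Assuming a parameter count of 62 (50 main effects plus 12 interaction terms, intercept excluded), which is our reading of the final model description, the events-per-variable ratio comfortably exceeds 10:

```python
def events_per_variable(n_class_a, n_class_b, n_parameters):
    """Events-per-variable (EPV) check for a binary-outcome model:
    'events' is the count of the smaller outcome class; the common
    guideline calls for EPV >= 10."""
    return min(n_class_a, n_class_b) / n_parameters
```

With 1486 events (and 2701 cases in the larger class, implied by N = 4187), 1486 / 62 gives an EPV of roughly 24.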
Performance Evaluation
Performance of the full clinical model in differentiating ACVS patients from mimics was assessed on both the training dataset (in-sample performance) and the test dataset (out-of-sample performance). The test dataset was not used in the feature selection process so as to ensure an unbiased estimate of the full clinical model’s performance. Performance of the ABCD2 score on each dataset was also calculated so as to better contextualize the performance of the full clinical model. To evaluate the apparent performance of the full clinical model on the training set, cases with missing ABCD2 scores were removed after model fitting was completed. This was done to permit a direct comparison of the model’s performance with that of the ABCD2 score on the same cases. For the test dataset, cases with missing ABCD2 scores were removed before analysis. For the full clinical model, predicted probabilities were used to assess model performance; for the ABCD2 score, raw scores (range 0–7) were used.
Decision curve analysis (DCA) was conducted to assess the clinical utility of the full clinical model and the ABCD2 score.27–29 Although traditional measures of discrimination (e.g., sensitivity, C-statistic, etc.) and calibration assess model performance, they cannot indicate whether a model is actually clinically useful.30,31 DCA attempts to quantify clinical utility using the concept of net benefit. Net benefit represents the number of true positives predicted by a model, adjusted by the number of false positives weighted by the relative consequence (harm) of an unnecessary treatment versus a missed treatment (i.e., p / (1 − p), where p is closely related to the concept of positive predictive value).27,30,32 Net benefit can also be understood as the gain in true positive cases detected by a model at a specific cutpoint, relative to a baseline “model” that assumes all cases to be negative (i.e., a “no treatment” or “none” model with no false positives).27
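In symbols, net benefit at a threshold probability p_t is TP/n − (FP/n) × p_t/(1 − p_t). A minimal Python sketch (the published DCA was computed in R; this is for illustration only):

```python
def net_benefit(probs, labels, threshold):
    """Net benefit of treating all cases with predicted probability
    >= threshold: TP/n - FP/n * (threshold / (1 - threshold)).

    False positives are debited at the odds of the threshold, encoding
    the relative harm of an unnecessary versus a missed treatment.
    The all-negative ('none') baseline has net benefit 0 by construction.
    """
    n = len(labels)
    tp = sum(1 for p, y in zip(probs, labels) if p >= threshold and y == 1)
    fp = sum(1 for p, y in zip(probs, labels) if p >= threshold and y == 0)
    return tp / n - fp / n * (threshold / (1 - threshold))
```

At a threshold of 0.5, for example, each false positive cancels one true positive, so a model that gains one of each nets zero benefit.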
Analyses were completed using the OptimalCutpoints (v1.1.3),25 PredictABEL (v1.2.2),33 ROCR (v1.0.7),34 pROC (v1.8),35 Hmisc (v3.17.4),36 rms (v4.5.0),37 immer (v0.5.0),38 dplyr (v0.5.0),39 and ggplot2 (v2.1.0)40 libraries in the R statistical language (v3.3.1).41
Results
Feature Selection
Feature selection results can be found in the Supplement. Table S1 displays the 123 candidate variables, sorted by logistic regression Wald p-value, along with their clinically related conditions, bivariate associations with the diagnosis (ACVS vs. Mimic), and final selection decisions. Odds ratios could not be calculated for two variables owing to low frequencies in the training set: Pallor (N = 3) and Collapse (N = 0).
During first-stage selection (p < 0.05), 74 candidate variables were identified for review. Of these variables, 44 were retained, 19 were deemed clinically non-descriptive, 7 were constituents of the composite variables, and 4 were redundant with broader clinical concepts. Variables that were constituents of the composite variables (N = 7) were automatically excluded if the composite variable was significant; this occurred for all four composite variables: (a) Involuntary movement, shaking, or tremor*, (b) Visual field deficit (either side)*, (c) Syncope or Loss of consciousness (LOC)*, and (d) Anxiety or stress*. The only clinically overlapping variables that were excluded were Face droop right and Face droop left.
During second-stage selection (p > 0.05), 47 candidate variables were identified for review. Of these variables, 35 were excluded, 4 were retained as clinically relevant, 6 were constituents of the composite variables, and 2 were retained to complete the numbness and weakness variable sets.
Of the retained variables, three related to the visual domain (Curtain, Diplopia, and Vision loss), with the others being Smoking and Eye droop (ptosis). The visual variables were retained because the visual domain has received relatively less attention in the ACVS literature compared to the motor and speech domains.42 Curtain represents a patient’s phenomenological (and indeed self-reported as such) experience of a “curtain” or “shade” descending over his or her field of view.43 This symptom is characteristic of transient monocular blindness (amaurosis fugax). Vision loss represents the condition of homonymous hemianopsia, a symptom of posterior circulation strokes. Diplopia, or double vision, is another symptom of posterior circulation stroke. Smoking is a known risk factor for stroke. Eye droop (ptosis) was retained due to its relation to Bell’s Palsy. Up to 75% of patients with Bell’s Palsy believe they have suffered a stroke,44 making Eye droop (ptosis) an important variable to include in the model.
To summarize, of the 123 candidate variables examined, a total of 50 variables were selected for inclusion in the final logistic regression model. Of these variables, 44 had a significant bivariate association (p < 0.05) with the diagnosis, and 6 were not significantly associated with the diagnosis but either were, (a) included to complete the numbness and weakness variable sets (N = 2), or (b) included on the basis of clinical relevance (N = 4).
In the logistic regression model fitting, twelve clinically relevant interaction terms based upon the selected predictors were included in the model. Table 2 displays the full clinical model and supporting references in the literature for the predictors in the model. Table 3 displays the extreme phenotype variant of the model.
Table 2. Full clinical logistic regression model (Age, Systolic BP, and Diastolic BP standardized). B = coefficient estimate; OR = odds ratio; CI = confidence interval.
Table 3. Extreme phenotype logistic regression model (Age, Systolic BP, and Diastolic BP standardized). B = coefficient estimate; OR = odds ratio; CI = confidence interval.
Performance Evaluation
The full clinical model was fitted to the training data (N = 4187) and cutpoints for the model were determined. The predicted probabilities corresponding to each cutpoint type were: (a) maximum efficiency: 0.516; (b) maximum Kappa: 0.59; and (c) ROC01: 0.662. For the ABCD2 score, a cutpoint of ≥ 4 was used, as this value has been suggested by a number of national stroke guidelines.45,46 These cutpoints were used to evaluate performance of the models on the test dataset.
The extreme phenotype variant of the full clinical model was fitted to the training data, less “Possible” diagnoses (N = 3722), and cutpoints determined. The predicted probabilities corresponding to each cutpoint type were: (a) maximum efficiency: 0.499; (b) maximum Kappa: 0.567; and (c) ROC01: 0.599.
Table 4 displays the performance measures of the full clinical model, extreme phenotype variant, and ABCD2 scores on both the training (re-substitution performance on training set) and test datasets, less patients with missing ABCD2 scores (N = 3962 and N = 1953, respectively).
Table 4. Performance measures (95% confidence interval) of models on training and test datasets.
The scaled Brier score (0 = uninformative model, 1 = perfect prediction)47 for the full clinical model on the training dataset was 0.254. DeLong’s test of ROC curves48 indicated a significant difference between the full clinical model and the ABCD2 score on both the training and test datasets (both p < 0.001).
On the test dataset the scaled Brier score was 0.226. However, measures of the calibration line49–51 suggest that the full clinical model is well specified (βlinear = 1.047), while ‘calibration-in-the-large’50 indicates that predicted probabilities are systematically too high (β0 = −0.257). In other words, the model does a good job predicting observed probabilities across a broad range of predicted probabilities, though it consistently overestimates the probability of ACVS (Figure 2, solid curve below the line of equality). Figure 2 displays validation plots47 (graphical representations of both model discrimination and calibration) for the full clinical model on the training and test datasets. The distribution of predicted outcomes along the x-axis depicts the discriminative performance of the model in differentiating ACVS from mimic cases.52
Figure 2. Validation plots of the full clinical model on (A) training data, 2008–2011 (N = 3962), and (B) test data, 2012–2013 (N = 1953). RMSD = root mean squared deviation from the line of equality.
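The scaled Brier score reported above compares the model's Brier score with that of a noninformative model predicting the outcome prevalence for every case. An illustrative Python sketch (the calibration intercept β0 and slope βlinear, by contrast, come from refitting a logistic model on the logit of the predicted probabilities, as in the rms package's validation tools, and are not reproduced here):

```python
def scaled_brier(probs, labels):
    """Scaled Brier score: 1 - Brier / Brier_null, where Brier_null is
    the Brier score of a noninformative model that predicts the outcome
    prevalence for every case. 0 = uninformative, 1 = perfect."""
    n = len(labels)
    brier = sum((p - y) ** 2 for p, y in zip(probs, labels)) / n
    prev = sum(labels) / n
    brier_null = sum((prev - y) ** 2 for y in labels) / n
    return 1 - brier / brier_null
```

Scaling by the prevalence-only baseline makes scores comparable across datasets with different ACVS proportions, which matters when contrasting the training and test cohorts.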
Figure 3 displays decision curve analysis (DCA) plots for the full clinical model and ABCD2 score on the training and test datasets (ABCD2 scores were transformed to probabilities by way of logistic regression). On the test dataset the full clinical model achieved a net benefit of 0.214 at the empirical cutpoint determined on the training set, while the ABCD2 score achieved a net benefit of 0.118 at the empirical cutpoint of ≥ 4.
Figure 3. Decision curve analysis plots of the full clinical model and ABCD2 scores on (A) training data, 2008–2011 (N = 3962), and (B) test data, 2012–2013 (N = 1953). Values in parentheses: (cutpoint, net benefit).
Final Model Fits
Tables S2 and S3 in the Supplement display the full clinical model and extreme phenotype variant model fit to the combined training and test datasets, respectively.
Discussion
The goal of this study was to derive, from patients’ reported event histories, a clinically informed model that can be used to optimally differentiate between ACVS and mimic conditions. Our full clinical model demonstrated better predictive performance than the ABCD2 score. This increase in performance can be directly attributed to the extensive number of variables included in the full clinical model. The validation plots of the full clinical model on both the training and test sets suggest that the model is well specified and encompasses the major predictor variables of both ACVS and mimic conditions. Moreover, the AUC of the model on the test dataset (79.9%) is in keeping with the theoretical upper limit for the AUC of a perfectly calibrated model.53 We interpret these results as evidence that the goals of the current study have been achieved.
The analysis of net benefit on the test dataset suggests that when performance is contextualized to a TIA clinic referral population (59.7% ACVS), our full clinical model achieves an increase in net benefit of 9.7% over the ABCD2 score. In the context of clinical practice (e.g., referral triage), the net benefit analysis suggests that our proposed clinical model may have greater clinical utility than the ABCD2 score as it is currently used with a cutpoint of ≥ 4. Prior research has shown that a majority of specialized TIA clinics triage patient referrals by ABCD2 scores.14 As our model was developed on an equivalent population of patients, it is likely that our model could be of value in improving upon existing TIA clinic triage practices.
A limitation of the current study is that the clinical practicality of the full clinical model is not immediately apparent. Given the goals of the study, the model is necessarily verbose with 50 main effects and 12 interaction terms. Moreover, logistic regression models are computationally complex and require the use of digital platforms to be usable as clinical prediction rules. In contrast, most clinical prediction rules tend to be terse and simple to complete, such as the ABCD2 score. Future work will need to determine if (a) such a large model can be made tractable for use by clinicians in real-world settings; or, conversely, (b) if a streamlined “bedside model” can be achieved by reducing the number of predictors and interactions in the model—in essence, simplifying the form of the model toward an additive point system like the ABCD2 score.
In conclusion, the results of the current study are encouraging: they suggest that a high degree of predictive ability to differentiate ACVS from mimic conditions can be attained on the basis of presenting clinical symptoms.
Footnotes
Maximilian.Bibok{at}viha.ca; ampenn{at}mac.com; mlespera{at}uvic.ca; Kristine.Votova{at}viha.ca; Robert.Balshaw{at}bccdc.ca
Research conducted within the Dept. of Research and Capacity Building, Vancouver Island Health Authority (Island Health).
Research presented (poster) at the Canadian Association of Emergency Physicians (CAEP), 2015 annual conference (Derivation of a clinical decision rule for acute cerebrovascular syndrome (ACVS) diagnosis in the emergency department).
Research contract funding provided by the Heart and Stroke Foundation, Genome British Columbia, and Genome Canada.