A

* 10% Condition: the sample size should be no more than 10% of the population (so that sampling without replacement is approximately independent).
* Absolute Standard Deviation: the distance between each value in the data set and that data set’s mean or median (see: Standard Deviation). A short computational sketch appears at the end of this section.
* Adjusted R2 (Adjusted R-Squared)
* Admissible Decision Rule: a rule for making a statistical decision that no other rule uniformly improves upon.
* α Alpha Level (Significance Level): the probability of rejecting the null hypothesis when it is actually true (a Type I error); see the sketch at the end of this section.
* Alternate Hypothesis: an alternative to the null hypothesis.
* ANOVA
* Arithmetic Mean
* ARMA Model
* Asch Paradigm
* Assumption of Independence
* Assumption of Normality
* Average Deviation
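
To make the Absolute Standard Deviation and Average Deviation entries above concrete, here is a minimal sketch of computing absolute deviations about the mean and the median; the sample data and the helper function are illustrative assumptions, not taken from any particular source.

```python
# Minimal sketch (illustrative data): absolute deviations about the mean
# and the median, and their average (the mean absolute deviation).
from statistics import mean, median

def absolute_deviations(values, center):
    """Distance of each value from a chosen center (mean or median)."""
    return [abs(x - center) for x in values]

data = [2, 4, 4, 4, 5, 5, 7, 9]

dev_mean = absolute_deviations(data, mean(data))      # distances from the mean (5.0)
dev_median = absolute_deviations(data, median(data))  # distances from the median (4.5)

print(sum(dev_mean) / len(dev_mean))      # mean absolute deviation about the mean: 1.5
print(sum(dev_median) / len(dev_median))  # mean absolute deviation about the median
```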
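
The α Alpha Level entry is easiest to read as a decision threshold. The sketch below shows that use with a made-up p-value; all numbers here are assumptions for illustration only.

```python
# Minimal sketch: an alpha level used as a decision threshold.
alpha = 0.05      # significance level: the tolerated Type I error rate
p_value = 0.031   # hypothetical p-value from some test (illustrative only)

if p_value <= alpha:
    print("Reject the null hypothesis at the 5% significance level.")
else:
    print("Fail to reject the null hypothesis.")
```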

B

* Bell’s Numbers
* Berkson’s Paradox
* Bessel’s Correction
* Beta Level
* Bias
* Bin
* Binomial Coefficient
* Binomial Distribution
* Bivariate Analysis
* Bogardus Scale
* Box and Whiskers Chart
* Bray Curtis Dissimilarity
* Business Statistics

C

* Cardinal Numbers
* Causation
* Cauchy-Schwarz Inequality
* Ceiling Effect
* Censoring
* Central Tendency
* Chauvenet’s Criterion
* Chebyshev’s Inequality
* Chi-Square Statistic
* Classical Probability
* Classical Test Theory
* Closed-Form Solution
* Cluster Sampling
* Clustered Standard Errors
* Clustering
* Cochran’s Q
* Coefficients
* Coefficient of Association: measures the strength of a relationship between two variables; see the sketch at the end of this section.
* Coefficient of Determination
* Cohen’s Kappa Statistic
* Cohort Study
* Collinear
* Combined Mean
* Concordant and Discordant Pairs
* Conditional Distribution
* Conditional Probability
* Condition Indices
* Confidence Level
* Conservative
* Consistent Estimator
* Contrast
* Construct Validity
* Content Validity
* Contingency Table
* Continuity Correction Factor
* Contour Plot
* Correlation Coefficient Formula
* Correlogram
* Covariance in Statistics
* Cramer-Rao Lower Bound
* Cronbach’s Alpha
* Cross Covariance
* Cross Validation
* Cumulant Generating Function
* Curvilinear
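
The Coefficient of Association entry above only says that such a coefficient measures the strength of a relationship; Pearson’s r is one common example. Below is a minimal sketch with illustrative paired data (the numbers and the helper name pearson_r are assumptions for this example, not from any source).

```python
# Minimal sketch: Pearson's r as one example of a coefficient of association.
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation: covariance of x and y scaled by their spreads."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

x = [1, 2, 3, 4, 5]        # illustrative paired observations
y = [2, 4, 5, 4, 5]

# Values near +1 or -1 indicate a strong linear relationship; near 0, a weak one.
print(pearson_r(x, y))     # ~0.775
```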

D

* Data Distribution
* Data Mining / Data Sets
* Decile
* Delphi Method
* The Delta Method
* Density Curve
* Design Effect
* Deterministic
* Difference in Differences
* Dimensionality
* Direction of Association
* Discrete Choice Models
* Discretization
* Dispersion
* Dot Plot
* Durbin Watson Test

E

* Effect Size
* Empirical Research
* Empirical Rule
* Equivalence Class
* Error Bar
* Error Term
* Estimator
* Eta Squared
* Euler’s Number
* Expected Value Formula
* Expected Monetary Value
* Experimental Design
* Explained Variance
* External Validity
* Extrapolation

F

* Factor Analysis
* Factorial!
* F Statistic (see: F-Test)
* False Alarm Ratio
* False Positive and Negative
* Finite Sets / Infinite Sets and Statistics
* Fisher’s Exact Test of Independence
* Fisher Information
* Fisher Z
* Five Number Summary
* Fixed Effects
* Floor Effect
* Fractile
* Free Parameter
* Friedman’s Test
* Frequency Distribution Table
* Frequency Domain
* Frequency Polygon
* Frequentist Statistics
* Fiducial Inference

G

* Gamma Function
* Gauss Markov Theorem and Assumptions
* Geodesy
* Geometric Distribution
* Geometric Mean
* Gini Coefficient
* Goldfeld Quandt Test
* Gold Standard Test
* Goodness of Fit Test
* Graph Theory
* Greatest Possible Error
* Greedy Matching Algorithm
* Guttman’s Lambda-2

H

* Harmonic Mean
* Heterogeneous
* Heteroscedasticity
* Hidden Markov Model
* Hierarchical Linear Modeling
* Hierarchical Model
* Homogeneity
* Hotelling’s T-Squared
* Hypergeometric Distribution

I

* IID
* Ill-Posed Problem
* Ill-Conditioning
* Illusory Association
* Imaginary Numbers
* Implicit Factors
* Implicitization
* Index Number
* Inductive Statistics
* Inferential Statistics
* Infinitely Divisible
* Integer
* Internal Consistency
* Internal Validity
* Interquartile Range
* Interval Estimate
* Interval Scale
* Inverse Distribution Function
* Inverse Normal
* Ipsative
* Item Response Theory

J

* Jaccard Index
* Jarque-Bera Test
* Jeffreys Prior

K

* Kelly’s Measure of Skewness
* KL Divergence
* Kruskal Wallis Test
* Kuder-Richardson
* Kurtosis

L

* L-Estimator
* Large Enough Sample Condition
* Latent Semantic Analysis
* Law of Large Numbers
* Least Squares Regression Line
* Levene Test
* Likelihood Function
* Likelihood Ratio
* Likert Scale
* Limits of Agreement
* Line Graph
* Linear Discriminant Analysis
* Location Parameter
* Logarithms
* Log-Rank Test
* Lowess Smoothing

M

* Mann Whitney U Test
* MAPE
* Marginal Effects
* Marginal Mean
* Market Basket Analysis
* Martingale
* Matched Samples
* Mathematical Statistics
* McNemar Test
* Mean
* Mean Error
* Mean Squared Error
* Median
* Median Absolute Deviation
* Method of Moments
* Middle Fifty
* Midpoint
* Midrange
* Minimal Detectable Difference
* Minimum Description Length
* Minimum Spanning Tree
* Mode
* Monotone Likelihood Ratio
* Monotonic Relationship
* Monty Hall Problem
* Monte Carlo Simulation
* Morisita Index
* Moving Average
* Moment
* Moment Generating Function
* Multicollinearity
* Multinomial Coefficient
* Multinomial Theorem
* Multiple Imputation
* Multiset

N

* Natural Number
* Nearest Neighbor Matching
* Negative Binomial Experiment
* Nested Model
* Nominal, Ordinal, Interval, Ratio
* Nonlinearity
* Non-Negative Integer
* Non-Parametric Data and Tests
* Noncentrality Parameter
* Normal Probability Plot
* Null Hypothesis

O

* Observation in Statistics
* Order Statistics
* Ordinal Numbers and Ordinal Data
* Orthogonality

P

* P-Value
* Paired Data
* Parameter
* Parameter Estimation
* Parametric Modeling
* Parametric Tests
* Parameterization
* Pareto Efficiency
* Pareto Principle
* Park Test
* Pascal’s Triangle
* Pearson Correlation
* Pearson Mode Skewness
* Pearson’s Coefficient of Skewness
* Percent Error
* Percentiles
* Permuted Block Randomization
* Point Estimate
* Population
* Population Mean
* Population Density
* Population Proportion
* Population Variance
* Post-hoc
* Power Law
* Power Mean
* Practical Significance
* Predictive Analytics
* Prevalence
* Prime Numbers
* Principal Component Analysis
* Probabilistic Models
* Process Capability Analysis
* Probability Density Function
* Probability Distribution
* Probability Distribution Table
* Probability Generating Function
* Propensity Score Matching
* Proportion of Variance
* Proportional Reduction in Error (PRE Test)
* Pygmalion Effect
* Purposive Sampling

Q

* Quantile
* Quartiles
* Quasi-Statistical
* Quintile
* Q-Value

R

* Random Seed
* Range
* Random Walk
* Rank Biserial
* Rank Histogram
* Rao Blackwell Theorem
* Rate Parameter
* Rate Ratio
* Ratios and Rates
* Ratio Estimator
* Ratio Scale
* Real Numbers
* Receiver Operating Characteristic Curve
* Regression Equation
* Relative Absolute Error
* Relative Dispersion
* Relative Error
* Relative Precision
* RFM (Customer Value)
* Relative Standard Deviation
* Relative Variance
* Reliability
* Representative Sample
* Rescaling Data
* Residual Values
* Residual Sum of Squares
* Resistant Statistics
* Rician Distribution
* Risk Function
* Robust Statistics

S

* Sample Mean
* Sample Statistic
* Sampling Frame
* Scale Factor
* Scale Invariance
* Scale Parameter
* Seasonal Kendall Test
* Segmented Bar Chart
* Semantic Differential Scale
* Semistructured Interview
* Sensitivity/Specificity
* Sensitivity Analysis
* Serial Correlation
* Sequence Effects
* Shape Parameter
* Shapiro-Wilk Test
* Shared Variance
* Shifting Data
* Simple Effects (see: Main Effects)
* Simple Linear Regression
* Simple Random Sample
* Simpson’s Diversity Index
* Simpson’s Paradox
* Slope
* Slope Stability Analysis
* Slutsky’s Theorem
* Snowball Sampling
* Spearman-Brown Formula
* Squared Deviations
* Standard Deviation
* Standard Error of Measurement
* Standard Error of a Sample
* Standardized Residuals
* Standardized Variables
* Stanine Score
* Stationarity
* Statistic
* Statistical Analysis
* Statistical Conclusion Validity
* Statistical Noise
* Statistical Power
* Statistical Process Control
* Statistical Relationship
* Statistical Significance
* Statistical Stability
* Statistical Treatment
* STEN Score
* Stratum
* Stratification
* Stress Strength Model
* Student’s T-Test
* Summary Statistics
* Summation Notation
* Survival Analysis

T

* T Score
* T Statistic
* Test of Association
* Test-Retest Reliability
* Test Statistic
* Tikhonov Regularization
* Thurstone Scale
* Tolerance Level
* Transformations
* Treatment-As-Usual
* Triangulation
* Trimean
* Trimmed Mean / Truncated Mean
* Turning Point Test
* Tweedie Distribution
* Two Way Table
* Type I and Type II Errors
* Type III and Type IV Errors

U

* Unbiased
* Uncertainty
* Uncertainty Coefficient
* Undercoverage
* Unidimensionality
* Univariate Analysis
* U Statistic
* Upper Tail and Lower Tail

V

* Variable
* Variability
* Variance
* Variance Inflation Factor
* Variance Sum Law
* Variate
* Varimax Rotation
* Volatility
* V Statistic

W

* Weighting Factor
* White Test
* Whole Number
* Wilcoxon Signed Rank Sum Test
* Within Mean Square
* Weighted Mean
* W Statistic

Y/Z

* Yates Correction
* Z Test
* Z-Score