Activities (RBM stage)
the second step involves the process that converts inputs to outputs (actions necessary to produce results - training, evaluating, developing)
Alternative hypothesis
The argument that a population parameter is not equal to, is greater than, or is less than the value stated in the null hypothesis
Analysis of Variance (ANOVA)
a technique used to determine whether there is sufficient evidence from sample data of three or more populations to conclude that the population means are not all equal
Analytics
The discovery, analysis, and communication of meaningful patterns in data.
Autocorrelation
A relationship in which a variable is correlated with past values of itself; in regression, it occurs when the residuals are correlated over time
Balanced Scorecard
An approach using multiple measures to evaluate performance, including financial measures, and the non-financial measures of customers, internal business processes, and learning and growth.
Bar chart
A graph that measures the distribution of data over discrete groups or categories.
Benchmarks
Standards or points of reference for an industry or sector that can be used for comparison and evaluation.
Big Data
very large amounts of data; an all-encompassing term for any collection of data sets so large and complex that it becomes difficult to process them using traditional data processing applications
Blind Study
A study performed where the participants are not told if they are in the treatment group or control group
body mass index (BMI)
A measure, based on a person's weight and height, that is used to classify people as underweight or overweight.
Business process
A sequence of logically related and time based work activities to provide a specific output for a customer.
Central Limit Theorem
A theorem that states that, the larger the sample size, the closer the sample mean is to the mean of the entire population, and the more the distribution of sample means will look like a normal distribution
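The theorem can be illustrated with a short simulation sketch; the sample size, trial count, and die-roll population below are arbitrary choices for illustration:

```python
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible

# Draw many sample means from a decidedly non-normal population:
# the uniform outcomes of a fair six-sided die.
population = range(1, 7)
sample_means = [
    statistics.mean(random.choices(population, k=30))  # one sample of size 30
    for _ in range(2000)
]

# The sample means cluster around the population mean (3.5),
# and their distribution is approximately normal.
print(round(statistics.mean(sample_means), 2))
```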
Cluster Analysis
The process of arranging terms or values based on different variables into "natural" groups
Cointegration
Occurs when two time series are moving with a common pattern due to a connection between the two time series
Combination
The number of different unordered possibilities for a certain situation.
Complement
The event that a given event does not occur; the opposite of that event
Confidence interval
An interval estimate used to indicate reliability
Continuous Data
Data that can lay along any point in a range of data
Control chart
A graphic display of process data over time and against established control limits, and that has a centerline that assists in detecting a trend of plotted values toward either control limit.
Control limits
The area composed of three standard deviations on either side of the centerline, or mean, of a normal distribution of data plotted on a control chart that reflects the expected variation in the data
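A minimal sketch of computing control limits as the centerline (mean) plus or minus three standard deviations, using made-up process measurements:

```python
import statistics

# Invented process measurements for illustration.
samples = [10.2, 9.8, 10.1, 9.9, 10.0, 10.3, 9.7]

center = statistics.mean(samples)    # the centerline
sigma = statistics.pstdev(samples)   # population standard deviation

# Control limits: centerline +/- three standard deviations.
ucl = center + 3 * sigma   # upper control limit
lcl = center - 3 * sigma   # lower control limit
print(round(lcl, 2), round(center, 2), round(ucl, 2))
```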
Criterion-referenced test
compare an individual to certain defined standards
Critical Success Factors
The important things an entity must do to be successful, such as quality measures, customer service, or efficiency.
Cumulative Average-Time Learning Model
A learning curve model in which the cumulative average time per unit declines by a constant percentage each time the cumulative quantity of units produced is doubled
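As an illustrative sketch, the model can be computed directly as y = a * x^b, where b = ln(learning rate) / ln 2; the 100-hour first unit and 80% rate below are assumed example values:

```python
import math

def cumulative_average_time(first_unit_hours, learning_rate, units):
    """Cumulative average time per unit under the cumulative
    average-time learning model: y = a * x**b, b = ln(rate)/ln(2)."""
    b = math.log(learning_rate) / math.log(2)
    return first_unit_hours * units ** b

# With an 80% curve, doubling cumulative output cuts the
# cumulative average time per unit by 20%.
print(cumulative_average_time(100, 0.8, 1))  # 100.0
print(cumulative_average_time(100, 0.8, 2))  # ~80.0
print(cumulative_average_time(100, 0.8, 4))  # ~64.0
```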
Cumulative distributions
The probability that a random variable will be found at a value less than or equal to a given number
Customer satisfaction
A measure of the extent to which customers are satisfied with the products and related services they received from a supplier.
Cycle time
The total elapsed time to move a unit of work from the beginning to the end of a physical process, as defined by the producer and the customer.
Cyclicality
Repetition of up (peaks) or down movements (troughs) that follow or counteract a business cycle that can last several years
Data Management
The management, including cleaning and storage, of collected data.
Data Mining
the process of discovering patterns in large data sets; performed on big data to decipher patterns from these large databases
Data Set
A collection of related data records on a storage device.
Davenport Kim Three Stage Model
A decision making model developed by Thomas Davenport and Jinho Kim that consists of three stages: framing the problem, solving the problem, and communicating results
Dependent Variable
The variable whose value depends on one or more variables in the equation; typically the cost or activity to be predicted
Detractor
A category of customer used in the calculation of the Net Promoter Score that indicates an unhappy customer.
Discrete Data
Data that can only take on whole values and has clear boundaries
Double Blind Study
A study performed where neither the treatment allocator nor the participant knows which group the participant is in
Economic Value Added (EVA)
Net income (after taxes) earned in excess of the amount of net income required to earn the company's cost of capital.
Epidemiology
study of the incidence, distribution and possible control of diseases and other factors relating to health
Event
An outcome that occurs
Experience Curve
A curve that shows the decline in cost per unit in various business functions of the value chain as the amount of these activities increases
Heteroscedasticity
A regression in which the variances in y for the values of x are not equal
Histogram
A graph that displays continuous data. This type of graph has vertical bars that show the counts or numbers in each range of data.
Homoscedasticity
A regression in which the variances in y for the values of x are equal or close to equal
Hypothesis
A proposed explanation used as a starting point for future examination
Impact (RBM stage)
last step when applying results-based management is to study the long-term effects that the output will have (economic, environmental, cultural, or political change)
Incidence
measures the number of new cases that arise in a population over the course of a designated time period
Incremental Unit-Time Learning Model
A learning curve model in which the incremental unit time (the time needed to produce the last unit) declines by a constant percentage each time the cumulative quantity of units produced is doubled
Independent Variable
The variable presumed to influence another variable (dependent variable); typically it is the level of activity or cost driver
Information Bias
A prejudice in the data that results when either the respondent or the interviewer has an agenda and is not presenting impartial questions or responding with truly honest responses, respectively
Input (RBM stage)
the first step of RBM is to define the resources, human or financial, used by the RBM system (people, funds, information)
Interquartile range
The difference between the values at the first quartile (25th percentile) and the third quartile (75th percentile) of the sample or population
Interval Data
Data that is ordered within a range and with each data point being an equal interval apart
Irregularity
One-time deviations from expectations caused by unforeseen circumstances such as war, natural disasters, poor weather, labor strikes, single-occurrence company- specific surprises or macroeconomic shocks
Item Response Theory (IRT)
a model for designing, analyzing, and scoring tests
Key Performance Indicator (KPI)
A performance measurement that organizations use to quantify their level of success.
Laspeyres Index
a comparison over time of the weighted total cost of the same basket of goods, using base-period quantities as weights
Line graph
A graph that illustrates relationships between two changing variables with a line or curve that connects a series of successive data points
Lower control limit
The minimum value on a control chart below which a process should not fall
Mean
An average, calculated by adding a series of elements in a data set together and dividing by the total number in the series
Measurement Bias
A prejudice in the data that results when the instrument or procedure used to collect the data systematically distorts the measurements
Median
The value or quantity lying at the midpoint of a frequency distribution
Multicollinearity
A flaw in a multiple regression equation that occurs when two variables thought to be independent are actually correlated with each other
Multiple Linear Regression
A statistical method used to model the relationship between one dependent (or response) variable and two or more independent (or explanatory) variables by fitting a linear equation to observed data
Multiplication Principle
When the probabilities of multiple events are multiplied together to determine the likelihood of all of those events occurring
Mutually exclusive events
When two or more events are not able to occur at the same time
Net Promoter Score
A management tool designed to collect data indicating the relative loyalty of customers and their willingness to recommend a company's products or services.
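The score is conventionally computed from 0-10 survey ratings as the percentage of promoters (ratings 9-10) minus the percentage of detractors (ratings 0-6); a minimal sketch with invented responses:

```python
def net_promoter_score(ratings):
    """NPS from 0-10 survey ratings: % promoters (9-10) minus
    % detractors (0-6); passives (7-8) count only in the total."""
    n = len(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / n

# Hypothetical survey responses: 3 promoters, 2 passives, 2 detractors.
print(round(net_promoter_score([10, 9, 9, 8, 7, 6, 3]), 1))
```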
Nominal Data
Sometimes called categorical data or qualitative data, this data type is used to label subjects or data by name
Non parametric test
A test that does not assume a particular structure (such as a normal distribution) for the population.
Norm-referenced test
compare an individual to other individuals
Normal distribution
data tending to occur around a central value with no bias right or left
Null hypothesis
The argument that there is no difference between two samples or that a sample has not changed over time
Omission Error
An error because something (for example, data or survey response) is missing.
Operating Income
Earnings before Interest and Taxes.
Ordinal Data
Data that places data objects into an order according to some quality with higher order indicating more of that quality
Outcome (RBM stage)
the short-term effect that the outputs will have (greater efficiency, more viability, better decision making, social action, or changed public opinion)
Outlier
An observation point that is significantly distant from the other observations in the dataset
Output (RBM stage)
third step when the outputs have been created by the RBM activities (goods and services, publications, systems, evaluations, skills changes)
Paasche Index
calculates the difference over time between the weighted totals of the quantities purchased in each period, using current-period quantities as weights
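A minimal sketch contrasting the Laspeyres index (base-period quantities as weights) with the Paasche index (current-period quantities as weights), using invented prices and quantities for two goods:

```python
# Invented prices and quantities for two goods in a base
# period (0) and a current period (1).
p0, q0 = [2.0, 5.0], [10, 4]   # base-period prices, quantities
p1, q1 = [3.0, 6.0], [8, 5]    # current-period prices, quantities

def price_index(prices_now, prices_base, weights):
    """Weighted price index, scaled so the base period = 100."""
    num = sum(p * w for p, w in zip(prices_now, weights))
    den = sum(p * w for p, w in zip(prices_base, weights))
    return 100 * num / den

laspeyres = price_index(p1, p0, q0)  # base-period quantities as weights
paasche = price_index(p1, p0, q1)    # current-period quantities as weights
print(round(laspeyres, 1), round(paasche, 1))
```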
Parametric test
A test that assumes a particular structure (such as a normal distribution) for the population, often used when the mean or standard deviation is important.
Passive
A category of customer used in the calculation of the Net Promoter Score that indicates a satisfied but unenthusiastic customer.
Percentile
the percent of the population that falls below a certain value
Permutation
The number of different ordered possibilities for a certain situation
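Both counts are available in Python's standard library (`math.comb` and `math.perm`, Python 3.8+); the committee and race scenarios below are invented illustrations:

```python
import math

# Combinations: unordered selections (choosing 3 committee
# members from 10 candidates; order does not matter).
print(math.comb(10, 3))   # 120

# Permutations: ordered selections (awarding gold, silver,
# and bronze among 10 runners; order matters).
print(math.perm(10, 3))   # 720
```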
Prevalence
measures the number of cases of a particular disease that exist in a population
Probability
The chance of an event occurring
Probability density function
Often used to represent probabilities of continuous data, a probability density function (pdf) gives a curve whose area over an interval equals the probability that the continuous random variable falls within that interval
Probability distributions
A set of probabilities that are attached to the different possible outcomes in a survey, experiment, or procedure
Probability mass function
Often used to represent probabilities of discrete data, a probability mass function (pmf) gives the probability that a discrete random variable is exactly equal to some value
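A minimal sketch of a pmf, using a fair six-sided die and exact fractions:

```python
from fractions import Fraction

# Probability mass function of a fair six-sided die:
# each outcome gets probability exactly 1/6.
pmf = {face: Fraction(1, 6) for face in range(1, 7)}

# A pmf gives P(X = x) for each exact value x,
# and a valid pmf sums to 1 over all outcomes.
print(pmf[3])              # 1/6
print(sum(pmf.values()))   # 1
```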
Promoter
A category of customer used in the calculation of the Net Promoter Score that indicates a loyal and enthusiastic customer.
Proportion
a ratio in which a part of a group is compared to the whole group
R-squared
The measure of the "goodness of fit" of the regression line and the percentage of variation in the dependent variable that is explained by the independent variable
Random Errors
Errors in measurement caused by unpredictable statistical fluctuations
Random Variation
The variability of a process which might be caused by irregular fluctuations due to chance that cannot be anticipated, detected, or eliminated
Range
The difference between the minimum and maximum value in a given measurable set
Rate
measure of an event occurring over a period of time
Ratio
measures one quantity in relation to another quantity
Ratio Data
Similar to interval data in that the data is ordered within a range with each data point an equal interval apart; ratio data also has a natural zero point, which indicates none of the given quality.
Regression Analysis
A statistical analysis tool that quantifies the relationship between a dependent variable and one or more independent variables
Relational Database
A database structured to recognize relations among stored items of information.
Reliable Data
Data that is consistent and repeatable
Results-based Management (RBM)
a management strategy that uses results as the central measurement of performance
Return on Investment (ROI)
The ratio of income earned on the investment to the investment made to earn that income.
Run chart
A line chart that shows performance measurements over time; run charts help to uncover trends or aberrations in processes
Sampling with replacement
When a piece of the population can be selected more than once
Sampling without replacement
When a piece of the population cannot be selected more than once
Scatter diagram
A graphic that uses dots to show relationships or correlations between variables
Significance level
The cutoff probability for deciding whether a result is statistically meaningful; if the probability of obtaining a result equal to or more extreme than the one observed is below this level, the result is considered significant
Simple Composite Index
created when a researcher gathers data from many different sources without weighting any data more heavily than any other data
Simple Index Number
shows the change in price or quantity of a single good or service over time
Simple Linear Regression
A form of regression analysis with only one independent variable
Specification limits
The area, on either side of the centerline, or mean, of data plotted on a control chart that meets the customer's requirements for a product or service. This area may be greater than or less than the area defined by the control limits
Standard deviation
The square root of the variance, a measure of how spread out the numbers are
Standard Error (SE) of Estimate
The "average" deviation of the data points from the regression line or curve
Standard score
Also called Z-scores; they measure how far a piece of data lies from the mean relative to the spread of the entire population, providing a method to compare two data sets with different scales.
Statistics
The science that deals with the interpretation of numerical facts or data through theories of probability. Also, the numerical facts or data themselves.
Systematic Errors
Errors in measurement that are constant within a data set, sometimes caused by faulty equipment or bias
test statistic
One value used to test the hypothesis; it is a numerical summary of the data set
The Result Chain
1) Resources: inputs and activities; 2) Results: outputs, then outcomes, then impact
Time Series Analysis
Regression analysis that uses time as the independent variable
Trend
In data analysis, a general slope upward or downward over a long period of time
Trial
An experiment, a test of the performance or qualities of something or someone
Triple Blind Study
A study performed where neither the treatment allocator nor the participant nor the response gatherer knows which group the participant is in
True Score Model
the average score an individual would achieve if he or she were to take the test an infinite number of times; the observed score is the true score plus random error
Upper control limit
The maximum value on a control chart that a process should not exceed
Valid Data
Data resulting from a test that accurately measures what it is intended to measure
Variance
The average of the squared differences from the mean of the related sample
Weighted Composite Index
created when a researcher applies more weight to certain goods or services than others as they are calculating the index number
Z-score
A statistical measure that indicates the number of standard deviations a data point is from its mean
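A minimal sketch computing z-scores for invented test scores, using the population standard deviation:

```python
import statistics

scores = [70, 74, 78, 82, 86]          # invented data
mu = statistics.mean(scores)            # 78
sigma = statistics.pstdev(scores)       # population standard deviation

# Z-score: how many standard deviations a value lies from the mean.
z_scores = [(s - mu) / sigma for s in scores]
print([round(z, 2) for z in z_scores])  # symmetric around 0
```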
Pareto chart
A bar chart that sorts data into categories, then prioritizes those categories to help project teams identify the most significant factors or the biggest contributors to problems.
80/20 rule
states that 80% of quality management problems are the result of a small number - about 20% - of causes
expected value
intuitively, the long-run average value of a random variable over many repetitions of the experiment it represents
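For a discrete random variable, the expected value is the probability-weighted average of its outcomes; a fair-die sketch with exact fractions:

```python
from fractions import Fraction

# Expected value of a fair six-sided die:
# E[X] = sum of x * P(X = x) over all outcomes x.
expected = sum(x * Fraction(1, 6) for x in range(1, 7))
print(expected)   # 7/2, i.e. 3.5
```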
cohort study
A study that observes and follows people moving forward in time from the time of their entry into the study.
linear programming
A mathematical tool used to optimize a function (the objective function) subject to various constraints, all of which are linear. Often used to find the combination of products that will maximize profits or minimize costs.
correlation
The extent or degree of statistical association among two or more variables.
response bias
This misuse occurs when the respondents to a survey say what they believe the questioner wants to hear. This bias can occur as a result of the wording of a question.
Association and causality
This statistical misuse occurs when a researcher notices a relationship between two variables and assumes that one variable is the cause of the other. In reality, these variables might both be caused by a separate variable. In this case, they would merely be correlated, which means they show up together. Or there might be no relationship at all.
operationalization
refers to the development of specific research procedures that allow for observation and measurement of abstract concepts
conscious bias
occurs when the surveyor is actively seeking a certain response to support his or her theory or cause
Bayes' Theorem
A formula that calculates conditional probabilities, important in understanding how new information affects the probabilities of different outcomes.
conditional probability
the probability of an event occurring given that another event has occurred
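A minimal sketch of Bayes' theorem applied to a hypothetical screening test (all probabilities invented for illustration):

```python
# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B).
p_disease = 0.01               # prior: assumed 1% prevalence
p_pos_given_disease = 0.95     # assumed sensitivity
p_pos_given_healthy = 0.05     # assumed false-positive rate

# Total probability of a positive result (law of total probability).
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Posterior: conditional probability of disease given a positive test.
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 3))
```

Even with an accurate test, the posterior stays low here because the prior (prevalence) is small, which is exactly the effect of new information that the theorem quantifies.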
Chi-square test
any statistical hypothesis test in which the sampling distribution of the test statistic is a chi-square distribution when the null hypothesis is true
Plan Do Check Act Cycle
A four-step method that practitioners use to create plans to solve a problem (Plan), run an experiment to see if the plan will work (Do), check the experiment results (Check), and implement changes to processes or policies (Act)
SIPOC diagram
A diagram that defines the boundaries of a process and shows how its Suppliers, Inputs, Processes, Outputs, and Customers affect process quality.
Ishikawa - 7 Basic Tools of Quality
1) Run Chart
2) Check sheet
3) Cause and effect diagram (fishbone diagram)
4) Histogram
5) Flow Chart
6) Scatter Diagram
7) Pareto chart
Six Sigma
A highly disciplined, data-driven approach that uses statistical analysis to measure and improve a company's operational performance by identifying and eliminating defects in manufacturing and service processes; the term itself is commonly defined as 3.4 defects per million opportunities.
lean operations
Popularized by Six Sigma, business practices that use as little time, inventory, supplies, and work as possible to create a dependable product or service. The less that is used, the less waste occurs and the more money the business saves. Accuracy is also very important, as in POS (Point of Sale) systems: the most accurate systems produce products and services without flaws, so nothing needs to be thrown away.
International Organization for Standardization (ISO)
Established a certification program that guarantees that an organization is dedicated to quality concepts and is continually working to ensure that it is producing the highest level of quality possible. The certification shows that an organization has a quality management system in place to monitor and control quality issues and is continuing to meet the needs of customers and stakeholders with high-quality products and services.