Baldrige Scoring Guidelines and other Scoring System Guides and Tools
The scoring of responses to Criteria Items (Items) and of Award applicant feedback is based on two evaluation dimensions: (1) Process and (2) Results. Criteria users need to furnish information relating to these dimensions; the specific factors for each are described below. Links to all versions of the Scoring Guidelines are provided on this page.
Process Scoring
Results Scoring
Process refers to the methods your organization uses and improves to address the Item requirements in Categories 1-6. The four factors used to evaluate process are Approach, Deployment, Learning, and Integration (ADLI).
Results refers to your organization's outputs and outcomes in achieving the requirements of Category 7. The factors used to evaluate results include Levels, Trends, Comparisons, and Integration (LeTCI).
Trends refers to
- the rate of your performance improvements or the sustainability of good performance
(i.e., the slope of trend data)
- the breadth (i.e., the extent of deployment) of your performance results
Comparisons refers to
- your performance relative to appropriate comparisons, such as competitors or
organizations similar to yours
- your performance relative to benchmarks or industry leaders
Integration refers to the extent to which
- your results measures (often through segmentation) address important customer,
product, market, process, and action plan performance requirements identified in
your Organizational Profile and in Process Items
- your results include valid indicators of future performance
- your results are harmonized across processes and work units to support
organization-wide goals
Previous Versions
Process and Results Scoring Guidelines for 2009-2010, 2008, 2007, 2006, and 2005 are available for each sector:
- Business: Process | Results
- Health Care: Process | Results
- Education: Process | Results
To put this in perspective, not one of the 20 million for-profit businesses in the United States applied for the Baldrige Award this year.
Even worse, the number of applicants has declined 73% for Health Care and 80% for Education since 2010.
Source: NIST Baldrige Website
The terms "Multiple Requirements," "Overall Requirements," and "Basic Requirements" are confusing to most users and contribute to assessment variation. Guidance that the "requirements" don't really mean "requirements" doesn't help either. Advice to take a holistic view and not hold applicants accountable to the "requirements" . . . well, you get the picture.
Results are quantitative by nature, so why use judgmental terms (e.g., important, poor, good, good relative, very good, good to excellent, excellent, or my personal long-time favorite, early good)? They are not needed, and they introduce variation into the assessment. Get rid of them.
How is "fully deployed without significant gaps" different from "fully deployed with significant gaps"? This is one of several examples where the Scoring Guidelines could be improved through more careful word choice, simplification, and word count reduction. "Sustained over time" is another.
Improve the coherence of the Results Scoring Guidelines language, including the use of 'few', 'little', 'little to no', 'limited', 'limited or no', 'some', 'many', 'many to most', 'most', 'majority', 'fully', or my personal favorite, 'mainly'. Examples: Is 'majority' closer to 'many' or to 'most'? Is 'majority' a simple majority? Is 'mainly' more or less than 'majority'? Does 'majority' fall between 'many' and 'many to most', or between 'many to most' and 'most'? How does 'many' relate to 'mainly'? . . . this act needs to be cleaned up, folks.
Why does the "accomplishment of Mission" verbiage switch from the Trends scoring dimension to the Integration scoring dimension in the middle scoring range?
Eliminate confounded terminology. For example, how should the terms important, high priority, and key be used in scoring results? Which of them is most important? Which should be given the highest priority? Are they all key? This variation in terminology is unnecessary, confusing, and contributes to scoring variation, if not error.
The Results Scoring Guidelines reference customers directly but not other key stakeholders such as workforce, suppliers, and
community.
Baldrige Results Scoring Guidelines Quiz
Q: Which American document has approximately 150 more words than the other: a) the Baldrige Results Scoring Guidelines or b) Abraham Lincoln's "Gettysburg Address"?
A: The Results Scoring Guidelines have 400 words; the Gettysburg Address has 256.
Q: Which of these three terms are not used to assess results? a) 'early good', b) 'on time good', or c) 'late good'
A: If you thought this was a trick question, unfortunately you're wrong. 'Early good' is used in the 0 to 5% scoring range. Silly me. I thought getting good results early would have
scored higher.
Q: Results are quantitative by nature. However, qualitative and/or judgmental guideline terms are used to assess them in all scoring bands. TRUE? or FALSE?
A: TRUE. The judgmental terms "poor", "good", "good relative", "very good", "good to excellent", "excellent", and my personal favorite "early good" are used to assess the
quantitative results. For 2011, the terms "good for nothing", "good enough", and "too good to be true" will be added . . . not true. Also not true is that because some people do
not understand what "early good" means, "on-time good", and "late good" will also be added.
Q: "World class" was once part of the Results Scoring Guidelines: TRUE? or FALSE?
A: TRUE. The early guidelines required winners to demonstrate 'world class' results to score in the highest scoring range.
Q: Which of the following terms are not used to assess the quantity of results? "no", "any", "few", "little", "little or no", "limited", "limited or no", "some", "some to many",
"mainly", "many", "many to most", "majority", "most to fully", or "fully"
A: Believe it or not, "some to many" and "most to fully" are not in the scoring guidelines. "Mainly"???
Q: Not one Examiner knows how to interpret the relative meaning of these results assessment terms: "important", "high priority", and "key". TRUE? or FALSE?
A: I don't know how many, but I do know there is at least one who has never been able to figure it out (LOL).