
Making the Case for Quality

A Uniform Approach to Inspection Standards


Reducing automotive maintenance costs at Fort Campbell, KY
At a Glance

Increased demand on U.S. soldiers means increased demand on their equipment. Maintenance management holds the key to keeping costs as low as possible.

A U.S. Army contractor was tasked with identifying the variance in an equipment inspection program and the cost it created, and with clarifying the inspection standard.

Using DMAIC and other quality tools, the quality management team reduced inspector variance by 97 percent while increasing accuracy by nearly 20 percent.
Equipment used by U.S. Army soldiers must be ready at all times. Army mechanics and U.S. Department of Defense (DoD) contractors must maintain the Army's automotive equipment to ensure it is mission capable when needed. The Army has a standard of maintenance for every vehicle in its inventory, but the standard left just enough wiggle room for each individual inspector to interpret the requirements in a manner that fit his or her own concept of maintenance. This caused great diversity in actual maintenance standards, leading to cases in which two or more inspectors examined a single piece of equipment and recorded different deficiencies, resulting in excessive parts and labor costs.
One quality management team at Fort Campbell, an Army base in Kentucky near the Tennessee border, sought to measure the cost of these varying interpretations and bring clarity to the application of the Army's standard. Using customer comments from the soldiers, real-time inspection and repair data, measurement system analysis (MSA), and design of experiments (DoE), the team identified the extent of inspector variance. Once the current state was established, they used the DMAIC methodology of Lean Six Sigma to reduce inspector variance, increase accuracy, and save money.
About Inspection by Design
Griffith "Griff" A. Watkins was the quality manager for a DoD contractor for five years, augmenting Fort Campbell's automotive maintenance program by placing skilled technicians in unit motor pools across the post. The primary mission for the contractor's maintenance teams was the Army's Left Behind Equipment (LBE) program. At the time, from 2007 to 2011, this program was designed to service and repair all equipment that the Army left behind when deploying, ensuring that soldiers would have mission-capable equipment to train with upon their return. The expected standard of maintenance was the Technical Manual (TM) 10 and TM 20 series standards. TM 10 series standards cover inspections conducted by the operator or crew of the equipment, while TM 20 series standards cover inspections conducted by the unit maintenance operation for the service and repair of the equipment.
The inspection program utilized maintenance leads as first-line inspectors to conduct initial, in-process, and final inspections of the automotive equipment, all at the TM 10/20 standard. At the same time, a smaller team of quality control (QC) inspectors surveyed random samples of the work completed by the maintenance shops and evaluated the shops' compliance with the quality management system (QMS) and Army standards and regulations. Ron Benson was the primary QC inspector and was instrumental in measuring the inspectors' current state and designing the future state.
by Griffith A. Watkins
June 2014
Why Quality
In 2010, the Fort Campbell maintenance facility experienced a change in government oversight. The new leadership was highly focused on production numbers and budget, so Watkins and Benson needed to find a way to continue providing Fort Campbell's soldiers with the same high-quality service, but faster and with less cost. To do this, they needed reliable data. For the most reliable data, they had to go right to the source: maintenance records for the equipment repaired at the facility.
From a population of more than 5,000 High Mobility Multipurpose Wheeled Vehicles (HMMWVs), the QC inspection team took a sample of 436 service and repair documents and entered the maintenance information into a detailed database. Items recorded were parts required, labor hours, parts cost, and labor cost per repair task. For example:
Repair Task | Corrective Action/Labor Time | Part Required, Quantity/Part Cost
Axle shaft loose | Replaced shaft/2.2 hours | Axle shaft, one each/$100
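In the same spirit, each sampled work order row can be modeled as a small record. Below is a minimal sketch of such a schema; the field names and the $50/hour labor rate are illustrative assumptions, not the team's actual MS Access design or billing rate.

```python
from dataclasses import dataclass

@dataclass
class RepairRecord:
    """One repair task from a sampled service and repair document (illustrative)."""
    repair_task: str        # deficiency as recorded on the work order
    corrective_action: str  # action taken by the mechanic
    labor_hours: float      # labor time expended on the task
    labor_rate: float       # assumed fully burdened $/hour (placeholder value)
    part: str               # part nomenclature
    quantity: int           # number of parts required
    part_cost: float        # unit cost of the part

    @property
    def total_cost(self) -> float:
        """Labor cost plus parts cost for this repair task."""
        return self.labor_hours * self.labor_rate + self.quantity * self.part_cost

# The axle shaft example from the article, with an assumed $50/hour labor rate:
record = RepairRecord("Axle shaft loose", "Replaced shaft", 2.2, 50.0,
                      "Axle shaft", 1, 100.0)
print(f"Total task cost: ${record.total_cost:.2f}")  # $210.00
```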
The data collected gave Watkins the answer to the customer's most frequent comment: "We love what you guys do, but it costs so much." In fact, the data revealed the following:
1. An inspection program that relied on quantity of inspections rather than quality
2. An alarming rate of defective or redundant work
3. Extreme variance between mechanics for individual maintenance task times
The data was shared with the senior management team, who unanimously agreed that correcting the inspection program should be the number-one priority project over addressing defective work and individual task times.
Buy-in for the project came from two directions. First, the project had to receive the approval and support of the contractor's senior management and staff. Then, the team needed the support of the U.S. government's management team overseeing the contract. Approval and support from the contractor's management was quickly won, since the project fully supported the company's stated quality objectives, which included on-time performance at or below cost and achieving the highest levels of customer satisfaction.
A Contractor's Quality Journey
Improvement of the inspection program began in June 2010 and ran until June 2011. The primary approach used throughout this project was the DMAIC methodology, consistent with the company's procedure for Lean Six Sigma (LSS) implementation. Project selection used a combination of the Pareto chart, database tools, multivoting, and DoE to define and validate the problem. The current state measurement, analysis, and future state development required the use of the tools shown in Table 1 (below).
Table 1 Quality tools used during this phase

Tool/Method | Why | Anticipated Data
Measurement System Analysis (MSA), Attribute Agreement | Measure inspector accuracy | The average accuracy of the inspectors' judgment
Interaction Plots | Measure inspector adherence to customer specification | The average percentage of faults identified outside of the published Army inspection standard
Pareto Charts | Measure/display inspector variance | The number of deficiencies ordered by the most common number of inspectors identifying them
Scatter Plot Charts | Measure/display inspector variance | The number of deficiencies identified and how many inspectors identified them
Cause and Effect Diagram | Root cause analysis | Root causes and final root causes for extreme variance between inspectors
MS Access Database | Production data from historical work order documentation | Parts and labor costs associated with faults identified outside of the published Army inspection standard
Historical Work Orders | A sample of production records large enough to give a 90% confidence level with 12% margin of error | Faults identified, inspection data, parts, and labor hours not recorded in the Army's information systems
Five Whys/Brainstorming | Root cause analysis | Root causes and final root causes for extreme variance between inspectors
The project team was selected by interviewing the management team and conducting a brainstorming session to create an inspection program suppliers, inputs, process, outputs, customers (SIPOC) chart. The interviews and SIPOC both served to identify team members who would have the greatest degree of subject matter expertise while also having the authority to enact change. The project team included:
Quality manager Griffith Watkins (Black Belt/Mentor)
QC inspector lead Ron Benson (Green Belt)
QC inspectors
Maintenance manager
Senior maintenance leads
Government quality assurance representative (QAR) (voice of the customer)
Define
After identifying the problem, Watkins and Benson conducted a small-scale MSA to validate selection of the project. The MSA was completed by having each of the three QC inspectors conduct a final QC inspection on the same HMMWV. The vehicle deficiencies recorded by each inspector were identified as either a.) "within standard" or b.) "above standard." These categories were defined as follows:
Inspection criteria: Inspect the rearview mirror for presence and serviceability.
Within standard: Those deficiencies identified by explicit reference in the inspection standard. These deficiencies can be clearly linked to the inspection criteria regardless of the inspector. This is a bottom-up interpretation of the standard (e.g., the mirror is missing).
Above standard: Those deficiencies identified by inferring what the inspection criteria should include. These deficiencies cannot be clearly linked to the inspection criteria; they rely on the interpretation of the individual inspector. This is a top-down interpretation of the standard (e.g., the paint on the mirror is faded). It must be noted that these may be actual deficiencies, just not within the scope of maintenance requested by the customer.
The results of the MSA revealed a great disparity between inspectors. While a cumulative total of 57 deficiencies were identified, none of the three inspectors had recorded the same items on their inspections. In addition, 84 percent of the deficiencies recorded were within standard, while 16 percent were above standard. This MSA validated the definition of the problem.
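Agreement checks like this small-scale MSA reduce to simple set arithmetic over each inspector's recorded deficiency list. Here is a minimal sketch of that comparison; the deficiency strings below are illustrative, not the actual MSA records.

```python
from itertools import combinations

# Illustrative deficiency lists for the three QC inspectors (not the actual MSA data).
inspections = {
    "Inspector A": {"mirror missing", "door handle broken", "paint faded on mirror"},
    "Inspector B": {"troop strap missing", "ignition switch inoperative"},
    "Inspector C": {"hood prop rod unserviceable", "battery box cover cracked"},
}

# Cumulative deficiencies recorded across all inspectors.
total_recorded = sum(len(faults) for faults in inspections.values())
print(f"Cumulative deficiencies recorded: {total_recorded}")

# Pairwise agreement: deficiencies that two inspectors both recorded.
for (name_a, faults_a), (name_b, faults_b) in combinations(inspections.items(), 2):
    common = faults_a & faults_b
    print(f"{name_a} vs. {name_b}: {len(common)} matching deficiencies")

# Deficiencies recorded by all three inspectors (the article's MSA found none).
shared_by_all = set.intersection(*inspections.values())
print(f"Recorded by all inspectors: {len(shared_by_all)}")
```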
Measure
The current state was measured in three areas:
1. Inspector variance and adherence to the inspection standard
2. Inspector accuracy regarding within standard repairs
3. The cost associated with above standard repairs
The definitions of "within standard" and "above standard" were critical, since both affected the cost passed on to the customer. If going above the standard came at no cost to the customer, it would be value added; but in this case, the cost of going above the requested standard was passed on to the customer. It would be akin to taking a car to a shop for an oil change, but the mechanic also installing four new tires. If the tires were included in the price of the oil change, it's value added. However, if the customer was charged for the tires and installation in addition to the oil change, it would likely become a customer complaint.
Inspector variance and accuracy were measured with a modified MSA based on attribute agreement. In a normal inspection system MSA or gage repeatability and reproducibility (R&R) study, the goal is to determine an inspector's ability to make the correct pass/fail decision for each of the given inspection points. In this case, the inspection points are those listed by the TM 10 and TM 20 preventive maintenance checks and services (PMCS) checklist.
In the MSA, the inspectors were given the PMCS checklist and told to conduct their inspection in the same manner they normally would. Since most of the inspectors saw the checklist as a guide rather than a standard, this provided Watkins and Benson exactly what they needed to see the current state in its entirety.
Again, the inspection results were identified as either "within standard" or "above standard." Only within standard faults were considered when determining inspector accuracy. Above standard faults could not be directly correlated with the inspection checklist, so they were removed as outliers. However, for inspector variance, both categories were considered to gain a precise understanding of how each inspector interpreted the standard and the cost associated with that variance. Of course, every inspector believed their results to be well within the Army TM 10 and TM 20 standard, but that was not the case.
Thirty-three inspectors were assigned to survey three HMMWVs (11 inspectors per vehicle). The scatterplot in Figure 1 (below), showing one set of results, is fairly representative of all three vehicles and of the level of variation present in the inspection team.
On average, a mere 43 percent of the faults found by any inspector were within the standard of inspection. An average of 57 percent of faults found were either above the standard or even unverifiable (also classified as above standard). At the same time, only five percent of faults were identified by all of the inspectors, while 34 percent were found by only one inspector (see Figure 2).
The natural question was: How did they produce such high-quality goods? The answer is that the process relied on putting as many eyes on the product as possible. Rather than a one-time premier initial inspection, they had counted on multiple sub-par inspections to ensure quality. The current state actually depended on the variance between inspectors rather than requiring a single standard: classic quality control vs. quality assurance. The problem? It was very costly to the customer.
Figure 1 Faults present vs. faults found scatterplot (number of faults per inspector, broken out as 10/20 faults present, 10/20 faults found, individual excess or unverifiable, and collective excess or unverifiable)
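The distribution behind Figure 2 can be derived directly from the survey data: for each fault, count how many of the 11 inspectors recorded it, then express each count as a share of all faults. A minimal sketch follows; the fault IDs and findings are illustrative, not the team's actual survey records.

```python
from collections import Counter

# Map of fault -> set of inspectors (of the 11) who recorded it; illustrative data.
fault_findings = {
    "mirror missing":   {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11},
    "door handle (M)":  {2, 5},
    "troop strap torn": {7},
    "seal strip torn":  {1, 4, 9},
}

# How many inspectors found each fault, e.g. Counter({11: 1, 3: 1, 2: 1, 1: 1}).
spread = Counter(len(found_by) for found_by in fault_findings.values())

total_faults = len(fault_findings)
for n_inspectors in sorted(spread, reverse=True):
    share = spread[n_inspectors] / total_faults
    print(f"Found by {n_inspectors:2d} inspector(s): {share:.0%} of faults")
```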
The cost of the inspector variance was measured by collecting the parts and labor data associated with repairing those items classified as above standard. The team used the database created in the project selection phase, cross-referenced with the parts cost data for each of the 436 sampled HMMWVs. The result, shown in Table 2 (below), was an average parts and labor cost for each fault recorded, both within standard and above standard. It showed that above standard repairs accounted for 9.1 percent of total labor costs and 21.5 percent of parts costs.
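A minimal sketch of that cross-referencing, using a few rows in the spirit of Table 2. The rows and the assumed $50/hour labor rate are illustrative; with the team's full 436-vehicle sample, this calculation yielded the 9.1 and 21.5 percent figures above.

```python
# Illustrative work order rows: (fault, standard classification, labor_hours, part_cost).
faults = [
    ("Tailgate seal strip torn", "Over-inspect", 0.5, 2.15),
    ("Cab canvas bow bent", "10/20 Standard", 1.0, 19.11),
    ("Ignition switch inop", "10/20 Standard", 0.4, 35.75),
    ("Battery box cover cracked", "Over-inspect", 0.5, 112.80),
]

LABOR_RATE = 50.0  # assumed fully burdened $/hour, purely a placeholder

total_labor = sum(hours * LABOR_RATE for _, _, hours, _ in faults)
total_parts = sum(cost for _, _, _, cost in faults)
excess_labor = sum(hours * LABOR_RATE for _, std, hours, _ in faults if std == "Over-inspect")
excess_parts = sum(cost for _, std, _, cost in faults if std == "Over-inspect")

print(f"Above standard share of labor cost: {excess_labor / total_labor:.1%}")
print(f"Above standard share of parts cost: {excess_parts / total_parts:.1%}")
```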
Rounding out the current-state picture was the accuracy of the inspection team. The inspection results were compared to the PMCS checklist, and the accept/reject decision was based on whether or not a deficiency was recorded by the inspector. Based on the within standard faults recorded, the average accuracy of the inspection team was determined to be 74.5 percent. Combined with the variance and cost measurements, the improvement team faced quite a challenge, but they were committed to success.
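A minimal sketch of that accuracy calculation, assuming accuracy is the share of within standard faults present on a vehicle that a given inspector actually recorded; the fault counts are illustrative, not the team's data.

```python
# (faults_present, faults_found) pairs of within standard counts per inspector.
# Illustrative values only.
inspectors = [(40, 31), (40, 28), (40, 30)]

accuracies = [found / present for present, found in inspectors]
average_accuracy = sum(accuracies) / len(accuracies)
print(f"Average inspection team accuracy: {average_accuracy:.1%}")
```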
Figure 2 Inspector variance before improvement (percentage of faults vs. the number of inspectors who identified them; 34 percent of faults were identified by only one inspector, while just 5 percent were identified by all 11)
Table 2 Parts and labor costs

ID | Faults Recorded | Fault Standard | Man-Hours Expended | Nomenclature | Qty Required | Cost Incurred ($) | Excess
Before | Tailgate seal strip torn | Over-inspect | 0.5 | BUMPER | 1 | 2.15 | Yes
Before | Cab canvas bow bent | 10/20 Standard | 1.0 | BOW, VEHICULAR TOP | 1 | 19.11 | No
Before | Engine idles high | 10/20 Standard | 4.0 | GASKET | 1 | 0.15 | No
Before | Shackles spring washers (M) | Over-inspect | 0.1 | WASHER, SPRING TENSION | 4 | 0.11 | Yes
Before | Ignition switch inop | 10/20 Standard | 0.4 | SWITCH, ROTARY | 1 | 35.75 | No
Before | Mirror not adjusted | Over-inspect | 0.3 | COIN, WASH RACK | 3 | 1.00 | Yes
Before | Door handle (M) | 10/20 Standard | 0.2 | DOOR, VEHICULAR | 1 | 66.18 | No
Before | Mirror not adjusted | Over-inspect | 0.3 | RETAINER, OIL SEAL | 1 | 1.08 | Yes
Before | Battery box cover cracked | Over-inspect | 0.5 | COVER, BATTERY BOX | 1 | 112.80 | Yes
Before | Door handle (M) | 10/20 Standard | 0.5 | DOOR, VEHICULAR | 1 | 66.18 | No
Before | Mirror not adjusted | Over-inspect | 0.3 | GASKET SET | 1 | 6.99 | Yes
Before | Troop strap missing | 10/20 Standard | 0.1 | STRAP, WEBBING | 1 | 23.09 | No
Before | Cargo canvas zippers torn | 10/20 Standard | 2.0 | COVER, FITTED, VEHICULAR | 1 | 180.00 | No
Before | Hood prop rod unserviceable | 10/20 Standard | 0.5 | ROD, HOOD, VEHICULAR | 1 | 7.32 | No
Before | Wait to start light inop | 10/20 Standard | 1.0 | LABEL | 1 | 0.27 | Yes
Analyze
After the data was collected and processed, the team analyzed for root causes, which are highlighted in Figure 3 (below). They started by completing a cause and effect diagram by answering the five whys. This brought forward a total of 39 possible root causes (12 related to interpretation of standards, 15 to personnel, and 12 to management), which were further studied using:
Correlation analysis to identify recurring themes within the possible root causes
Affinity diagrams to visually document the root cause correlations
Multivoting to gain input from multiple business areas (represented by team members) regarding the classification of possible root causes for final root cause selection
Historical data and audit data to verify and validate the final root causes

Figure 3 Root cause analysis. The effect: extreme variance between inspectors in the IMD-C, where multiple interpretations of the Army standard bring confusion and waste. Major cause branches: standard interpretation (approaches to interpretation vary; some inspectors interpret bottom-up, following the precise wording of the standard, while others interpret top-down, implying deficiencies not stated in the standard; the DA Pam 750-8 reference to the judgment of the inspector is misused to support the preference of the inspector), QARs causing confusion (between two and 10 QARs over time, none with the same interpretation of the standard; QARs believed they determined the standard rather than simply monitoring compliance, and directed inspectors to exceed it), personnel (inspectors inspect according to experience, adding the PMCS checks as an afterthought; most are retired motor sergeants or officers who feel they know the right way), and management (facility management practices reduce inspector performance; the facility was originally a refurbishment facility).

The final root causes were determined to be:
No clear voice of the customer (VoC): The customer QARs conducting acceptance inspections were subject to the same standard interpretation errors as the contractor, which increased the confusion as to what the customer wanted. This was validated by reviewing
customer acceptance inspections on completed work orders
where an average of 40 percent of customer-identified faults
were above standard.
Lack of inspector training and accountability: This was
validated by internal audit of the inspection process.
Poor management communication and work loading: This
was validated by interviews, internal audit documentation,
and review of historical records of management meetings.
Improve
After identifying the final root causes, the team translated them into opportunities for improvement.
No clear VoC: Obtain a clear and concise definition of the standard from the customer.
Lack of training/accountability: Train inspectors correctly and create a system for performance accountability.
Poor management communication and work loading: Deemed beyond the scope and authority of the project team, and not addressed.
Improvement began with the simple question: "How?" Five hows, to be more precise. Taking the two chosen improvements of a.) acquiring a clear VoC and b.) training the inspectors accordingly, the team set about finding the best way to accomplish them. The initial round of five hows and brainstorming brought 22 possible solutions to the table.
To narrow the list, the team turned to more stakeholder interviews. Each stakeholder was the subject matter expert for his or her area, and any solution that did not pass that expert's review was removed from the list. For instance, management communication and work-loading solutions were removed after interviewing the program manager. He noted that this area went beyond the team's scope and authority to address.
Then the team analyzed every process to ensure all possible solutions met business, contractual, and regulatory requirements. For example, the solution of tasking the government QARs to conduct all initial inspections was removed because that task is not within the scope of their duties, nor was it within the quality team's authority to make it so. The team also built scenarios with the possible solutions to analyze their potential effects if implemented. As with the other tools, the solutions that projected a negative impact on production or customer satisfaction were removed from the list. The team was left with 10 possible solutions.
Analysis of the remaining 10 solutions was conducted using the U.S. Army model for conducting a cost benefit analysis (CBA). The focus was on using tools that would take into account contract and regulatory compliance, criticality of the improvement, range of effect, and cost. Tools such as multivoting, a PICK chart, stakeholder interviews, and a criteria matrix were used by the team to analyze each of the 10 remaining solutions, as seen in Table 3 (below).
Table 3 Quality tools used to analyze 10 solutions

Tool/Method | Why | Anticipated Result
Review of requirements | Assess solution compliance with QMS, contract, and Army regulations | Disqualify any solutions that do not meet business and regulatory requirements
Multivote | Rank the possible solutions by criticality | Identification of the easiest solution with the highest payoff
PICK chart | Visually display the multivote for easier determination of solution implementation readiness | Identification of the easiest solution with the highest payoff
Stakeholder interviews | SME collaboration to support the second- and third-effects analysis | Information to support the second and third effects of possible solutions
Second- and third-effects analysis | Determine the cost and range of effect for each possible solution | Identification of the least costly solution with the greatest impact
Criteria matrix | Rank solution suitability by describing attribute data in continuous terms | Rank possible solutions by degree of suitability
Based on the selection criteria, the team collected production data, master resource lists, contractual and regulatory documents, and stakeholder interviews to grade the impact of each possible solution according to the CBA model. The team used the analysis data to populate a criteria matrix and rate the impact of the solutions in accordance with the criteria set forth in the Army CBA model (compliance, criticality, range of effect, and cost/funding) to achieve an overall impact rating for each solution, as seen in Table 4 (below).
Table 4 Criteria matrix to determine impact ratings
Type of impact: + = positive, - = negative. Degree of impact: 0 = none, 1 = low, 2 = medium, 3 = high.

Root Cause | Solution | Time Cost | Monetary Cost | Production Effect | Criticality | Inspection Effect | Compliance | Score
VoC Standard Interpretation | Inspection check sheet | 0 | 0 | +2 | +3 | +3 | +3 | 11
VoC Standard Interpretation | Clarify the VoC into specifics | 0 | 0 | +3 | +3 | +3 | +3 | 12
VoC Standard Interpretation | Retrain all inspectors | -1 | 0 | +3 | +2 | +3 | +3 | 10
VoC Standard Interpretation | Retrain managers in disciplinary actions | -2 | 0 | 0 | +1 | +3 | +3 | 5
Insp. Accountability | One inspection site | -1 | -2 | -1 | +1 | +2 | +3 | 2
Insp. Accountability | Mobile inspection team | -2 | -2 | +1 | +3 | +2 | +3 | 5
Insp. Accountability | Regionally assigned inspectors | 0 | 0 | +1 | +3 | +2 | +3 | 9
Insp. Accountability | Create inspector to approve all inspections | -3 | -3 | -1 | +1 | +3 | +3 | 0
Insp. Accountability | QMS rep reviews all inspections | -3 | 0 | -1 | +1 | +3 | +3 | 3
Insp. Accountability | Reduce the number of inspectors | 0 | 0 | -1 | +3 | +2 | +3 | 7
ADDED: Insp. Accountability | Initial inspection team (site) | 0 | 0 | +3 | +3 | +2 | +3 | 11
ADDED: Insp. Accountability | Mobile final inspection team | 0 | 0 | +3 | +3 | +2 | +3 | 11
The final tally of scores in the right-hand column indicated suitability, and the team agreed that any final score of nine or below would not be accepted. There were three qualifying solutions for VoC standard interpretation, but none addressing inspector accountability, so the team brainstormed one more time. The result was two additional high-scoring final solutions:
Create an inspection site for an initial inspection team
Create a mobile final inspection team
By the nature of these solutions, the intent of the original solution of reducing the number of inspectors would also be met.
There were five final solutions to both clarify the VoC in determining the correct interpretation of the standard and make sure that inspectors were trained and held accountable for maintaining that standard. The implementation plan was to:
Gain a clear VoC on interpretation of inspection standards
Designate initial and final inspection teams
Select and train the inspection teams
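The scoring behind Table 4 is simple arithmetic: each solution's score is the sum of its signed impact ratings, and anything scoring nine or below was rejected. A minimal sketch, using a subset of the Table 4 ratings:

```python
# Impact ratings from Table 4: [time cost, monetary cost, production effect,
# criticality, inspection effect, compliance]; a subset of solutions shown.
ratings = {
    "Clarify the VoC into specifics": [0, 0, 3, 3, 3, 3],
    "Inspection check sheet":         [0, 0, 2, 3, 3, 3],
    "Retrain all inspectors":         [-1, 0, 3, 2, 3, 3],
    "Regionally assigned inspectors": [0, 0, 1, 3, 2, 3],
    "One inspection site":            [-1, -2, -1, 1, 2, 3],
    "Initial inspection team (site)": [0, 0, 3, 3, 2, 3],
    "Mobile final inspection team":   [0, 0, 3, 3, 2, 3],
}

CUTOFF = 9  # scores of nine or below were not accepted

# Rank by score, highest first, and apply the cutoff.
for solution, impacts in sorted(ratings.items(), key=lambda kv: -sum(kv[1])):
    score = sum(impacts)
    verdict = "accept" if score > CUTOFF else "reject"
    print(f"{solution:32s} score={score:2d} -> {verdict}")
```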
Stakeholder buy-in for solution implementation was not hard to find. Communication with the stakeholders was ongoing throughout the project, and all internal stakeholders were actively engaged from project selection to project completion. Perhaps the most significant buy-in came from the project management when they directly implemented the No. 1 solution. They gave a well-defined VoC statement on interpretation of inspection standards, which clearly described a bottom-up approach to interpretation: inspect only what the PMCS checklist directs and no more. The judgment of the inspector was not removed from the inspection process, but it was confined within the limits of the inspection standard.
Inspection teams were created for initial and final inspections, reducing the number of inspectors from 33 to 12. At the same time, inspection check sheets were created based on the Army TM 10 and TM 20 PMCS checklists. These check sheets gave very concise guidance for inspections within the confines of the standard, and were approved for use by the U.S. government (USG) project managers.
Results and Control
The results of the improvements were tremendous. The team
monitored active work orders for the service and repair of
LBE equipment from one Army brigade. Every HMMWV
work order was inspected by the initial inspection team. All
in-process and final inspections were conducted by the final
inspection team.
The impact of the improvements was measured by analyzing the work order documentation and conducting another inspector MSA six months after the improvement. Similar to the initial MSA, Watkins and Benson measured inspector variance and its cost based on "within standard" and "above standard" faults, and they measured inspector accuracy using within standard faults only. Finally, the project's impact on parts and labor costs was calculated from the analysis of work order documentation as it was completed.
The team's efforts decreased inspector variance by 97 percent and increased inspector accuracy by almost 20 percent. At the same time, they reduced the overall cost of service by 15 percent. The savings came from the following:
Above standard related labor hours: reduced from 9.1 percent to 1.8 percent of total labor cost
Above standard related parts cost: reduced from 21.5 percent to 1.9 percent of total parts cost
Inspection labor hours: inspection time reduced by 45 percent
This translated to an estimated $228,000 in savings for the test brigade, which projected the potential for $1.6 million in annual savings across the whole post for contractor-led automotive maintenance. The project's final results can be seen in Table 5 (below).
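As a quick cross-check, the brigade-level cost figures in Table 5 reconcile with the $228,000 estimate; the sketch below simply subtracts the post-improvement cost from the baseline cost in each row and sums the three categories.

```python
# Brigade-level cost figures from Table 5, in thousands of dollars.
labor_savings = 89.0 - 17.5        # above standard labor cost: before vs. after
parts_savings = 85.0 - 7.5         # above standard parts cost: before vs. after
inspection_savings = 175.0 - 96.0  # overall inspection labor: before vs. after

total = labor_savings + parts_savings + inspection_savings
print(f"Estimated savings for the test brigade: ${total:.1f}K")  # -> $228.0K
```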
Watkins and Benson were very pleased with the team's accomplishments. Most of the inspectors cared about the product so much that they felt they were lowering their standards to meet production requirements, but after seeing the end result they understood that it was about meeting the customer's request. In the end, the team succeeded in proving that it is possible to have low cost, high quality, and fast service at the same time.
Table 5 Final results

Event Metrics | Baseline | Baseline Cost | After Improvement | Cost After Improvement | Improvement/Savings
Inspector Accuracy (based on number of TM xx-10/xx-20 faults actually present on the vehicle inspected) | 74.5% | - | 94.3% | - | 19.8% increase
Inspector Variance (based on number of TM xx-10/xx-20 faults actually present on the vehicle inspected) | 14.1% | - | 0.3% | - | 97.9% reduction
Over-inspection of Equipment (percentage of faults identified outside of inspection criteria) | 33% of faults identified | - | 12% | - | 64% reduction
Labor Cost (labor costs associated with the faults identified by over-inspection) | Average 9.1% of total labor cost | ~$89K/Brigade | Average 1.8% of total labor cost | ~$17.5K/Brigade | 7.3% reduction, ~$71.5K/Brigade
Parts Cost (cost of repair parts associated with the faults identified by over-inspection) | Average 21.5% of total parts cost | ~$85K/Brigade | Average 1.9% of total parts cost | ~$7.5K/Brigade | 19.6% reduction, ~$77.5K/Brigade
Overall Inspection Time (sum of all inspection time throughout the life of the work order) | Average 13.5 man-hours/work order | ~$175K/Brigade | Average 7.4 man-hours/work order | ~$96K/Brigade | 45% reduction, ~$79K/Brigade
For More Information
To contact the author of this case study, email Griffith
Watkins at griffwatkins@gmail.com.
To read more examples of quality success, visit the ASQ Knowledge Center Case Studies landing page at asq.org/knowledge-center/case-studies.
About the Author
Griffith Watkins is the senior quality manager for Delfasco, LLC. He has worked in DoD contracting from the perspectives of both the government and the contractor. An ASQ Certified Quality Engineer (CQE) and former DCMA quality assurance specialist (QAS), Watkins has achieved Defense Acquisition Workforce (DAWIA) Level II certification in production, quality, and manufacturing. He is also a DCMA-endorsed Certified Six Sigma Black Belt projected to complete an MS in quality assurance at Southern Polytechnic State University by spring 2015.