Statistical Process Control in a Modern Production Environment



    Citation (APA): Windfeldt, G. B. (2010). Statistical Process Control in a Modern Production Environment. Technical University of Denmark. IMM-PHD-2010-225

    https://orbit.dtu.dk/en/publications/71983c99-d87f-4e97-92d4-70ac083a5628

    Statistical Process Control in a Modern Production Environment

    Gitte Bjørg Windfeldt

    Statistics, Novo Nordisk A/S

    Denmark

    &

    Department of Informatics and Mathematical Modelling, Technical University of Denmark

    Denmark

    IMM-PHD-2010-225


    Preface

    This thesis is the result of research I have carried out during my time as a PhD student at Novo Nordisk A/S and the Department of Informatics and Mathematical Modelling at the Technical University of Denmark.

    The origin of my research is a modern production environment in which a medical device is assembled. With its high-frequency measuring, close together in space and time, and its advanced measuring systems, this production environment introduces new challenges for the practitioners working with statistical process control (SPC). Being an industrial PhD[1], the purpose of my research has been to address some of these challenges. I have therefore put an extensive amount of work into understanding the production in detail and understanding the problems the engineers are facing in relation to the SPC system. I have chosen to focus on three different topics which I believe are important both to the production at hand and in general. For reasons of confidentiality, little is disclosed regarding the actual production, and data has been transformed when relevant.

    The thesis is organized as a survey followed by four papers that are grouped into the three chosen topics:

    Survey

    We begin with an introduction to SPC describing the background for the papers in the thesis. Then we elaborate on the papers, describing the motivation behind them and their statistical interest, and elaborating on relevant issues. Finally we give directions for future work.

    [1] An industrial PhD is a business-focused PhD project where the student is employed at a company and enrolled at the university at the same time, working full time on the project.



    Extending Phase I

    The first topic is about extending the control chart setup phase known as Phase I. In the classical control chart setting, Phase I is primarily used to determine the limits of the control chart based on a sample of the process. With the proliferation of computers and statistical software, much more can be done in this phase to gain a better understanding of the process at hand. With the increasing complexity of modern production environments this is not only helpful, but also necessary – not least to verify the assumptions of the control method chosen for monitoring.

    Paper 1 is "Testing for Sphericity in Phase I Control Chart Applications", which is joint work with Søren Bisgaard (see Windfeldt and Bisgaard (2009)). The paper is published in Quality and Reliability Engineering International, 25, pp. 839-849, 2009. The paper is aimed at practitioners, in order to help them test the assumption of independence and variance homogeneity within subgroups, an assumption that is essential when using classical Shewhart charts.

    Process Monitoring

    With frequent measuring of a process, all small variations in the process are registered. These small variations will be detected very quickly by the usual control methods. For a highly capable process this results in a lot of alarms even though the quality is not threatened. Therefore a more flexible monitoring method – hiding any complexity from the end user – is required.

    Paper 2 is "Using Predictive Risk in Process Control", which is joint work with Jean-Francois Plante (see Plante and Windfeldt (2009)). The paper is to be submitted for publication. The paper describes a new method for process monitoring. The method uses a statistical model of the process and a sliding window of data to estimate the probability that the next item will be outside specifications. The method is explored numerically and a case study is provided.


    Paper 3 is "Monitoring a Bivariate Process using Predictive Risk", which is work in progress. The manuscript explores the new method presented in Plante and Windfeldt (2009) in a bivariate setting.

    Missing Values

    The last topic deals with a problem regarding missing values which I discovered when exploring data from the production. While discussing this with Søren Bisgaard it was brought to my attention that this was not an isolated case – it was seen from time to time when working with data from industry. I also discussed it with the production engineer and we decided that it would be relevant to explore this issue further.

    Paper 4 is "Assessing the Impact of Missing Values on Quality Measures of an Industrial Process", which is joint work with Niels Væver Hartvig (see Windfeldt and Hartvig (2010)). The paper is submitted for publication. The paper is a case study on a problem of missing values in which we assess the impact of the missing values on the quality measures of the process. We also provide guidelines along with software to handle similar issues.

    Søren Bisgaard deserves a special thanks for many inspirational discussions on industrial statistics and for supervising my entry into the area of research – it has meant a lot to me both professionally and personally. I also wish to thank him and his wife Sue Ellen for their generous hospitality when my husband and I stayed in Amherst. It has been a pleasure working with Jean-Francois Plante, and I thank him for that, and for being understanding of my, at times, long response time. I have also enjoyed working with my supervisor and colleague Niels Væver Hartvig, and I thank him for many interesting discussions and for taking the time to be a catalyst for my work when this was needed. I also wish to thank my university supervisor Helle Rootzén for accepting me as her PhD student, and for letting me pursue my own statistical interests.

    I thank Per Vase and Birger Stjernholm Madsen, who originally suggested this project. Furthermore, I thank Per for sharing his knowledge and thoughts on process control with me. I would also like to thank my colleagues at the production for sharing their knowledge and answering my many questions, especially Jacob Mosesson, Jørgen Toft, Lasse Langkjær and Søren Bøgvad Petersen; their input has been extremely helpful.


    I wish to thank my colleagues at Statistics and my manager Sille Esbjerg for supporting me both professionally and personally through these past years.

    Last but not least I wish to thank my husband Troels and daughter Emma for their continuous love and support.

    Allerød, June 29, 2010, Gitte Bjørg Windfeldt.


    Contents

    I Survey

    1 Introduction to Statistical Process Control
    1.1 Basic Concepts
    1.2 Shewhart Control Charts
    1.3 Process Performance
    1.4 Monitoring High Performance Processes

    2 Elaboration on the Papers
    2.1 Testing for Sphericity in Phase I Control Chart Applications
    2.2 Using Predictive Risk for Process Control
    2.3 Assessing the Impact of Missing Values on Quality Measures of an Industrial Process

    3 Conclusion and Future Work

    II Extending Phase I

    4 Testing for Sphericity in Phase I Control Chart Applications
    4.1 Introduction
    4.2 Background, Definitions and Notation
    4.3 Test for Sphericity
    4.4 Examples
    4.5 Conclusion

    III Process Monitoring

    5 Using Predictive Risk for Process Control
    5.1 Introduction
    5.2 Method
    5.3 Properties of the Method in a Simplified Setting
    5.4 Illustration of the Method with Continuous Evaluation
    5.5 A Case Study
    5.6 Conclusion

    6 Monitoring a Bivariate Process using Predictive Risk
    6.1 Introduction
    6.2 Properties of the Method in a Simplified Setting
    6.3 Illustration of the Method with Continuous Evaluations
    6.4 A Case Study

    IV Missing Values

    7 Assessing the Impact of Missing Values on Quality Measures of an Industrial Process
    7.1 Process Description
    7.2 Problem Description
    7.3 Data Collection
    7.4 Analysis and Interpretation
    7.5 Conclusion and Recommendations

    A Implementation of the EM Algorithm
    A.1 Implementation in R
    A.2 Implementation in Visual Basic for Excel

    B Data and Preimages

    Bibliography

    Summary in English

    Summary in Danish


    I

    Survey

    Chapter 1

    Introduction to Statistical Process Control

    Statistical process control (SPC) consists of methods for understanding, monitoring and improving process performance over time. In this section we will give an introduction to statistical process control. The focus will be on presenting the relevant background for the papers in the thesis. For a broader introduction to SPC the reader is referred to e.g. Montgomery (2005).

    1.1 Basic Concepts

    In this section we present some basic concepts in SPC. For a further introduction to these concepts the reader is referred to Woodall (2000) and the subsequent discussion papers in the same issue.

    1.1.1 Causes of Variability and Statistical Control

    In any process there will be some variability, which consists of contributions from various sources. Two main concepts in SPC are the notions of chance and assignable causes. In the literature these are also denoted common cause and special cause, respectively. Common cause variability is variability that is predictable from a statistical perspective. Assignable cause variability is variability that is not explained by common causes, i.e. shocks, disruptions, trends, or increased variability that – at least in theory – can be traced to a specific cause.

    A process is in a state of statistical control when it is in a state where the variation can be attributed to a system of chance causes that does not appear to change over time. If a change occurs, the process is said to be out of statistical control.

    Traditionally this has been interpreted as the process having an underlying (normal) distribution which does not change over time. This interpretation has been generalized over the years to include, for instance, time series models (see e.g. Bisgaard and Kulahci (2005)) and variance components models (see e.g. Woodall and Thomas (1995) and Roes and Does (1995)).

    1.1.2 Control Chart

    A control chart is a visualization of a quality characteristic of a process over time. The quality characteristic is calculated based on a sample of the process and depicted versus time or sample number. Besides the values of the quality characteristic, a control chart also consists of a horizontal line which represents the average level of the characteristic, denoted the center line, and two horizontal lines, denoted the upper control limit (UCL) and the lower control limit (LCL). The control limits are chosen such that the probability of plotting inside the control limits is high when the process is in statistical control. An example of a control chart is given in Figure 1.1.

    As mentioned in Woodall (2000), a main objective of the control chart is to "distinguish between common cause variation and special causes variation to prevent overreaction and underreaction and thereby reducing variability and maintaining stability."


    [Figure omitted: a charting statistic plotted against time/sample number, with center line (CL), upper control limit (UCL) and lower control limit (LCL).]

    Figure 1.1: An example of a control chart.

    1.1.3 Phases

    Traditionally two phases are considered when working with SPC and control charts:

    Phase I is a retrospective analysis on a set of historical data. The purpose of this phase is to assess whether the process was in a state of statistical control and to estimate the parameters of the underlying probability distribution. In this phase control charts are used retrospectively to determine whether the process was in statistical control.

    Phase II is the monitoring phase where samples are gathered sequentially over time to maintain the process in a state of statistical control. In this phase the control chart is used prospectively to monitor the process.

    The assumptions about the quality characteristics are very different in the two phases. In Phase I usually very little is assumed about the process, whereas in Phase II the process is assumed to be in statistical control and the parameters of the process are assumed to be known or well estimated. Phase I therefore resembles exploratory and confirmatory data analysis, whereas Phase II monitoring in many ways resembles hypothesis testing[1].

    Control charts are used in both phases, but the measure of performance of the chart is not the same. In Phase I applications, measures such as the false alarm rate[2] are used to measure control chart performance. In Phase II the average run length is a common measure of control chart performance, even though the run length distribution is skewed to the right.

    The majority of the SPC literature focuses on control charts for Phase II monitoring of processes. But as noted in Woodall (2000), it usually takes a lot of work and process understanding to get from Phase I to Phase II. This view on the importance of Phase I is shared by Thyregod and Iwersen (2000), who state: "It is our experience that the so called Phase 1 analysis is by far the most important part of SPC. It is in this phase insight in the process and transmission of variation is obtained using the whole battery of tools in the statistical toolbox to explore the data." In recent years there has been more focus on the pre-monitoring phase, and it has been argued that the traditional view on Phase I needs to be widened. In Palm (2000) it is suggested to use three phases instead of two:

    A. Chart Setup is a retrospective analysis on a set of historical data. The purpose of this stage is to get trial control limits to begin real-time monitoring. Based on historical data[3], the appropriate chart statistics are plotted together with trial control limits. If there are any signals we might consider revising the control limits with these points removed. At this point in time no judgment is made of whether or not the process was in statistical control. The stage could be iterative if it turns out that the data is not suitable for the purpose.

    B. Process Improvement is a prospective phase where the process is improved. As samples come in from the process, the control chart statistic is plotted on the chart with trial limits. When signals occur they are investigated, and if the cause is found it is sought removed through some form of process improvement. The control limits are revised as needed to reflect the improvements made. The phase ends when signals become rare and the process is considered to be stable.[4]

    [1] There is some disagreement about these issues in the literature, see Woodall (2000).
    [2] The probability of at least one signal from the chart given that the process is in statistical control.
    [3] The data could have been gathered previously, not necessarily for the purpose of a control chart setup.
    [4] Palm notes that this stage could take years.



    C. Process Monitoring is a monitoring phase where the process is monitored to detect assignable causes and maintain stability. In this phase the control chart is used prospectively to monitor the process.

    This view emphasizes more clearly than the traditional approach that there is a lot of work to be done on getting to know one's process before the process is in statistical control and the actual monitoring starts. A similar approach is suggested in the technical advice for practitioners given in Vining (2009).

    The increased attention on the pre-monitoring phase has resulted in more focus in the literature on control charts specifically designed for use in this phase, see e.g. Chakraborti et al. (2009) and Jones-Farmer et al. (2009). A main part of Phase I is to estimate the parameters used for the subsequent monitoring. How to estimate these parameters, and the influence of estimation on the performance of the control chart in the monitoring phase, are considered in e.g. Braun and Park (2008) and Jensen et al. (2006). Also, different suggestions and guidelines for analyses to be performed in the pre-monitoring phase have appeared, see e.g. Mast and Trip (2009), Anderson and Whitcomb (2007), Bisgaard and Kulahci (2005) and Windfeldt and Bisgaard (2009).

    1.2 Shewhart Control Charts

    The Shewhart control charts for monitoring processes were developed by Shewhart in the 1920s. These charts are by far the most well-known and widely used control charts in industry. In this section we introduce the Shewhart control charts.

    The Shewhart charts are based on the assumption that the quality characteristic X we wish to monitor is normally distributed with mean µ and variance σ². As long as the process operates according to this assumption it is said to be in statistical control, i.e. the common cause variability is described by σ². At time t we take a sample from the process of size n. It is assumed that the observations in the sample are independent and normally distributed. We further assume that the sample is a rational subgroup, meaning that the only variation within the sample is caused by common causes. For the case n = 1 the reader is referred to Montgomery (2005).




    The x̄ chart is a control chart that monitors the mean of the process, and its purpose is to catch unusual variability between samples. The charting statistic is the mean value of the sample, which is normally distributed with mean µ and variance σ²/n. Assuming that µ and σ are known, the center line of the x̄ chart is equal to µ. The control limits are of the form

    LCL = µ − kσ/√n  and  UCL = µ + kσ/√n,

    where k is either chosen to be the (1 − α/2) quantile of the standard normal distribution, yielding what are denoted probability limits, or as an integer, yielding what are denoted kσ-limits. With probability limits, the probability of plotting outside the limits, if the process is in statistical control, is α. When using kσ-limits, k is usually chosen to be 3, resulting in a probability of 0.0027 of being outside the control limits when in control.
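    As a small illustration, these limits are straightforward to compute; a minimal R sketch, assuming µ, σ and n are known (the function name is ours):

      # x-bar chart limits for known mu and sigma.
      # type = "probability" uses the (1 - alpha/2) normal quantile,
      # type = "ksigma" uses an integer k (classically k = 3).
      xbar_limits <- function(mu, sigma, n, alpha = 0.0027, k = 3,
                              type = c("ksigma", "probability")) {
        type <- match.arg(type)
        kk <- if (type == "probability") qnorm(1 - alpha / 2) else k
        c(LCL = mu - kk * sigma / sqrt(n),
          CL  = mu,
          UCL = mu + kk * sigma / sqrt(n))
      }

      xbar_limits(mu = 10, sigma = 0.5, n = 5)  # 3-sigma limits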

    The R chart is a control chart that monitors the range of the observations within a sample, and its purpose is to catch unusual variability within subgroups. The charting statistic is the range of the sample,

    R = max(X1, . . . , Xn) − min(X1, . . . , Xn).

    The mean and standard deviation of R are usually expressed in terms of the mean and standard deviation of the relative range R/σ. The mean and standard deviation of this distribution are usually denoted d2 and d3[5], and values of d2 and d3 for sample sizes 2 ≤ n ≤ 25 can be found in Appendix Table VI in Montgomery (2005). The center line for the R chart is therefore d2σ, and the 3σ-limits are

    LCL = d2σ − 3d3σ  and  UCL = d2σ + 3d3σ.

    The R chart has traditionally been used because it is easy to calculate compared to the standard deviation, but today this is usually not an issue.

    [5] Suppressing that they depend on the number of observations in the sample.


    The charting statistic of the s chart is the sample standard deviation,

    s = √( (1/(n − 1)) ∑_{i=1}^{n} (Xi − X̄)² ).

    The distribution of √(n − 1) s/σ follows a χ distribution with n − 1 degrees of freedom, and the mean and standard deviation of s are therefore c4σ and √(1 − c4²) σ, where c4 = √(2/(n − 1)) Γ(n/2)/Γ((n − 1)/2). Values of c4 for 2 ≤ n ≤ 25 can be found in Appendix Table VI in Montgomery (2005). The center line for the s chart is therefore c4σ, and the 3σ-limits are

    LCL = c4σ − 3√(1 − c4²) σ  and  UCL = c4σ + 3√(1 − c4²) σ.

    In practice µ and σ are rarely known and have to be estimated from the data in the pre-monitoring phase. Traditionally this is done based on at least m = 25 samples. An estimate of µ is the grand mean, i.e. the mean of the sample means, µ̂ = (1/m) ∑_{i=1}^{m} x̄i. An estimate of the range is the mean value of the ranges of the samples, i.e. R̄ = (1/m) ∑_{i=1}^{m} Ri. The standard deviation σ can be estimated based on the range or the standard deviation of the samples. Traditionally the range has been used because of its computational simplicity; today this simplicity is usually less relevant. An unbiased estimator of σ based on the relative range is R̄/d2. As mentioned in Montgomery (2005), the range works well for small sample sizes, n ≤ 6. For larger values of n, say n ≥ 10, the range loses efficiency compared to the standard deviation. An unbiased estimator of σ based on the sample standard deviations is σ̃ = (1/(m c4)) ∑_{i=1}^{m} si.
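    As an illustration, a small R sketch of these estimators, assuming the subgroups are the rows of an m × n matrix x (function names are ours; c4 is computed from its definition, while d2 is obtained by a quick Monte Carlo stand-in for the tabulated constant):

      # c4 = E(s)/sigma for a sample of n normal observations.
      c4 <- function(n) sqrt(2 / (n - 1)) * gamma(n / 2) / gamma((n - 1) / 2)

      # d2 = E(range of n standard normals), simulated instead of tabulated.
      d2 <- function(n, reps = 1e5) mean(replicate(reps, diff(range(rnorm(n)))))

      spc_estimates <- function(x) {
        n <- ncol(x)
        R <- apply(x, 1, function(r) diff(range(r)))  # subgroup ranges
        s <- apply(x, 1, sd)                          # subgroup standard deviations
        list(mu_hat       = mean(rowMeans(x)),
             sigma_from_R = mean(R) / d2(n),
             sigma_from_s = mean(s) / c4(n))
      }

      set.seed(1)
      x <- matrix(rnorm(25 * 5, mean = 10, sd = 0.5), nrow = 25)  # m = 25, n = 5
      spc_estimates(x)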

    The properties of the chart are usually derived under the assumption that the parameters are known. As mentioned in Section 1.1.3, the properties of a control chart in the monitoring phase are normally described by the average run length (ARL). In general, the average run length for a Shewhart chart when the process is in statistical control is ARL0 = α⁻¹. The x̄ chart with 3σ-limits therefore has an average in-control run length of ARL0 = 1/0.0027 = 370.4. The ARL for the x̄ chart when the process experiences a shift in the mean can be determined by considering the operating characteristic curve (OC-curve), which describes the probability of not detecting a shift in the first sample after the shift happened. This is also called the β-risk or type II error. Let µ be the level that the control chart is based on and let µ1 be the new level. We assume that the variance remains constant. The β-risk is then given by

    β = P(LCL ≤ X̄ ≤ UCL | µ1 = µ + kσ).

    OC-curves for the x̄ chart with known standard deviation are given in Figure 5.13 in Montgomery (2005). Based on these, the out-of-control average run length can be calculated as ARL = 1/(1 − β), see Figure 5.15 in Montgomery (2005).
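    For the x̄ chart with kσ-limits and a shift of size δσ in the mean, the β-risk reduces to β = Φ(k − δ√n) − Φ(−k − δ√n), so the out-of-control ARL is easy to compute; a minimal R sketch (function names are ours):

      # beta-risk and ARL of an x-bar chart with k-sigma limits when the
      # mean shifts by delta process standard deviations.
      xbar_beta <- function(delta, n, k = 3) {
        pnorm(k - delta * sqrt(n)) - pnorm(-k - delta * sqrt(n))
      }
      xbar_arl <- function(delta, n, k = 3) 1 / (1 - xbar_beta(delta, n, k))

      xbar_arl(delta = 0, n = 5)  # in-control ARL, approximately 370.4
      xbar_arl(delta = 1, n = 5)  # ARL after a 1-sigma shift in the mean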

    1.3 Process Performance

    A quality characteristic usually has to meet some prescribed specifications. For a univariate quality characteristic these are usually given as an upper specification limit (USL) and a lower specification limit (LSL), which are the highest and lowest acceptable values of the characteristic, respectively. In some cases only a one-sided specification is prescribed. If the quality characteristic does not meet the specifications, we say that the resulting product is nonconforming. There might also be prescribed a target value (T) of the quality characteristic, which is the desired value of the characteristic.

    The control charts ignore the specification limits and therefore do not say anything about the process' ability to meet the specifications. The fraction of nonconformities is a natural measure of process performance, but this value has traditionally been difficult to calculate. An alternative way of quantifying process performance is to use capability indices. Below we briefly describe the most widely used capability indices in industry and their relation to the fraction of nonconformities. For an introduction to capability indices see e.g. Montgomery (2005), Kotz and Lovelace (1998), Kotz and Johnson (2002), and Spiring et al. (2003).

    The most well-known and widely used capability indices in industry are Cp and Cpk. They are both based on the assumption that the process is normally distributed with mean µ and variance σ², and are given by

    Cp = (USL − LSL)/(6σ)  and  Cpk = min(µ − LSL, USL − µ)/(3σ).

    We can see that the difference between Cp and Cpk is that Cpk not only considers the process variation, but also the location of the process. Assuming that the process is perfectly centered at the midpoint of the specification interval, i.e. µ = (USL + LSL)/2, the relationship between Cp and the fraction of nonconformities p is p = 2Φ(−3Cp), where Φ is the cumulative distribution function of the standard normal distribution. In general we have p ≥ 2Φ(−3Cp) for all µ. The relationship between Cpk and the fraction of nonconformities is not one-to-one, but Cpk provides upper and lower bounds on the fraction of nonconformities given by

    Φ(−3Cpk) ≤ p ≤ 2Φ(−3Cpk).

    Using both Cp and Cpk, the relationship to the fraction of nonconformities is given by p = Φ(−3(2Cp − Cpk)) + Φ(−3Cpk).
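    These relations are easy to evaluate numerically; a small R sketch under the normality assumption (function names are ours):

      # Fraction nonconforming implied by the capability indices.
      p_from_cp     <- function(cp) 2 * pnorm(-3 * cp)  # perfectly centered process
      p_bounds_cpk  <- function(cpk) c(lower = pnorm(-3 * cpk),
                                       upper = 2 * pnorm(-3 * cpk))
      p_from_cp_cpk <- function(cp, cpk) pnorm(-3 * (2 * cp - cpk)) + pnorm(-3 * cpk)

      p_from_cp(1.33)           # about 6.6e-5
      p_from_cp_cpk(1.33, 1.0)  # off-center process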

    Another index that is also used in practice, and which is also based on the assumption of normality, is

    Cpm = (USL − LSL) / (6√(σ² + (µ − T)²)),

    where T is the target value. The relation between the index Cpm and the fraction of nonconformities is described in Section 3.3 in Kotz and Lovelace (1998), and some of the results are mentioned below. We assume for convenience that USL + LSL = 0 and let d = USL = −LSL. If the target value is equal to the midpoint of the specification interval, the expected fraction of nonconformities is

    p = Φ( (−d − µ)/√(λ² − µ²) ) + 1 − Φ( (d − µ)/√(λ² − µ²) ),

    where λ = d/(3Cpm). Analytic studies of this expression as a function of µ have shown that:

    1. The function is symmetric about 0.

    2. If Cpm > 1/√3 there is a local maximum at µ = 0.

    3. If Cpm < 1/√3 there is a local minimum at µ = 0.

    4. If Cpm < 1/3 then p increases with |µ|.

    5. If 1/3 < Cpm < 1/√3 then p has local maxima at µ = ±µ0 for some µ0 ≠ 0.


    If the target value is not equal to the midpoint of the specification interval, it is worth noting that the value of Cpm can increase even though the fraction of nonconformities increases.
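    To see the behaviour described in the list above, the expected fraction of nonconformities can be traced as a function of µ for a fixed Cpm; a short R sketch, assuming symmetric limits USL = −LSL = d and the target at the midpoint as above:

      # Fraction nonconforming as a function of mu for fixed Cpm, with
      # USL = -LSL = d and T = 0. Feasible for |mu| < lambda, since
      # sigma^2 = lambda^2 - mu^2 must be positive.
      p_given_cpm <- function(mu, d = 1, cpm = 1) {
        lambda <- d / (3 * cpm)
        sigma  <- sqrt(lambda^2 - mu^2)
        pnorm((-d - mu) / sigma) + 1 - pnorm((d - mu) / sigma)
      }

      mu <- seq(-0.6, 0.6, by = 0.005)
      plot(mu, p_given_cpm(mu, d = 1, cpm = 0.5), type = "l",
           ylab = "fraction nonconforming")  # local maxima away from 0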

    1.4 Monitoring High Performance Processes

    In this section we review some of the methods suggested in the literature for monitoring high performance processes, based on the fraction of nonconformities or on capability indices.

    1.4.1 The Acceptance Chart

    The acceptance chart, which monitors the fraction of nonconformities using an x̄ chart, was introduced by Freund (1957). The acceptance chart is intended to be used when a process has a very high capability, meaning that the variability of the process is very low compared to the size of the specification interval. Assuming that the quality characteristic is normally distributed and the variance is known and constant, the mean is allowed to vary as long as the fraction of nonconformities is considered acceptable. The acceptance chart takes into account both the risk of type I error (rejecting a process that is at an acceptable level) and type II error (accepting a process at an unacceptable level), and it is closely related to acceptance sampling of variables with known variance. In acceptance sampling, type I and type II errors are also called the producer's risk and consumer's risk, respectively. To understand the design of an acceptance chart we first review some important concepts from acceptance sampling.

    The acceptable quality level (AQL), denoted δ, is the highest fraction of nonconformities that we are willing to accept. The acceptable process level (APL) is the process level, denoted µδ, which corresponds to the AQL. Let LSL denote the lower specification limit and USL denote the upper specification limit. Under the assumption of a normal distribution with known variance σ² we have

    µδ,low = LSL + Zδ·σ  and  µδ,up = USL − Zδ·σ,

    where Zδ is the (1 − δ) quantile of the standard normal distribution. We note that only one specification limit is "active" at a time because of the high capability of the process.


    Equivalently, the rejectable quality level (RQL), denoted γ, is the lowest fraction of nonconformities we would like to reject. The rejectable process level (RPL) is the process level µγ which corresponds to the RQL. Equivalently to the above we have

    µγ,low = LSL + Zγ·σ  and  µγ,up = USL − Zγ·σ,

    under the assumption of a normal distribution with known variance σ².

    When sampling from the process we have a risk, denoted α, of rejecting a process running at the acceptable quality level δ (type I error), and a risk, denoted β, of accepting a process running at the rejectable quality level γ (type II error).

    There are essentially three ways to design an acceptance chart:

    1. Specifying δ, the corresponding probability α, and the sample size n.

    2. Specifying γ, the corresponding probability β, and the sample size n.

    3. Specifying δ, γ, and the corresponding probabilities α and β.

    The control limits of the x̄ chart based on the first design are

    LCL = LSL + (Zδ − Zα/√n)σ  and  UCL = USL − (Zδ − Zα/√n)σ.

    Equivalently, for the second design we get

    LCL = LSL + (Zγ + Zβ/√n)σ  and  UCL = USL − (Zγ + Zβ/√n)σ.

    When using the third design we choose the sample size so that the control limits from the first and second design are equal. This gives us a sample size of

    n = ( (Zα + Zβ) / (Zδ − Zγ) )².

    Especially when using design 3 it is important to consider whether a sample of size n can be considered to be a rational subgroup. Control limits based on design 1 are also called modified control limits (see Hill (1956) and Montgomery (2005)).
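    A minimal R sketch of these design equations (function names are ours; the design 2 limits and the design 3 sample size follow the formulas as reconstructed above):

      # Acceptance chart limits for design 1 (AQL delta, alpha, n) and
      # design 2 (RQL gamma, beta, n); design 3 solves for the sample size.
      accept_limits1 <- function(LSL, USL, sigma, delta, alpha, n) {
        z <- qnorm(1 - delta) - qnorm(1 - alpha) / sqrt(n)
        c(LCL = LSL + z * sigma, UCL = USL - z * sigma)
      }
      accept_limits2 <- function(LSL, USL, sigma, gamma, beta, n) {
        z <- qnorm(1 - gamma) + qnorm(1 - beta) / sqrt(n)
        c(LCL = LSL + z * sigma, UCL = USL - z * sigma)
      }
      accept_n <- function(delta, gamma, alpha, beta) {
        ceiling(((qnorm(1 - alpha) + qnorm(1 - beta)) /
                 (qnorm(1 - delta) - qnorm(1 - gamma)))^2)
      }

      accept_n(delta = 0.001, gamma = 0.01, alpha = 0.05, beta = 0.10)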


    Note that when using the acceptance chart it is necessary to know the variance σ², or at least to have a good estimate of it. It is usually recommended to ensure that the variance remains constant by using an R chart or an s chart.

    Different optimizations and generalizations of the acceptance chart have been considered over the years. Wu (1998) considers an adaptive acceptance chart for the tool wear problem, where the sample size is adjusted depending on how close the mean is to the specification limits. Holmes and Mergen (2000) proposed to use an exponentially weighted moving average (EWMA) chart for monitoring instead of the x̄ chart. The EWMA chart takes individual observations, allowing a stop/go decision after each observation. The acceptance chart for non-normal processes which can be approximated by a Burr distribution was introduced by Chou et al. (2005). The multivariate normal case is considered in Wesolowsky (1990), Wesolowsky (1992) and Steiner and Wesolowsky (1994). Their perspective covers both acceptance sampling and acceptance charts. They consider a design based on specifying the acceptable quality level as well as the corresponding probabilities α and β. They determine the necessary sample size and control limits by minimizing a sampling cost function.

    1.4.2 Monitoring Capability Indices

    Instead of monitoring the fraction of nonconformities, Spiring (1991) suggested to monitor a capability index in the presence of a systematic assignable cause.

    The assumptions behind the long term variation of the process considered are not defined mathematically. Spiring states ". . . assume only the existence of a systematic assignable cause possessing a reasonably predictable recurring pattern with known upper specification limit, lower specification limit and target value." This is illustrated with the figure depicted in Figure 1.2. On the short term – within a sample – the observations from the process are assumed to be independent normally distributed, possibly with the mean following a linear trend.

    This dynamic use of the capability index is different from the traditional use. Traditionally the capability index is calculated when the process is considered to be stable, and as long as it remains stable, the determined value of the index is a measure of the process performance.


    [Figure omitted: reproduction of Figure 1 from Spiring (1991), showing a quality characteristic drifting between LSL and USL towards the target over repeated tool wear cycles, together with Spiring's Table 1 of tool wear data from Grant and Leavenworth (1974).]

    Figure 1.2: Figure 1 from Spiring (1991) illustrating a tool wear process. The values t0, . . . , t3 are change points (time or item number) where the process is reset.

    To be able to determine the instantaneous capability of a process that is influenced by an assignable cause, it is suggested to consider small time periods, e.g. a sample size n of between 5 and 25 observations, where the capability can be reasonably estimated. Further, it is suggested that the process capability should reflect the proximity to a target value and the variability due to random causes only[6]. This leads to determining the capability of the process at time t by a capability index of the form

    Cpm = (USL − LSL) / (6√(σ²rt + (µt − T)²)),

    or

    C*pm = min(USL − T, T − LSL) / (3√(σ²rt + (µt − T)²)),

    where µt is the mean of the process at time t and σ²rt is the variance at time t due to random causes only. As noted in Kotz and Lovelace (1998), C*pm was introduced in Chan et al. (1988) as a generalization of Cpm to account for asymmetric specification limits. Note that C*pm = Cpm if the target value is equal to the midpoint between the upper and lower specification limits.

    [6] Note that the focus on the target is different from the acceptance chart, where any level is good enough as long as the fraction of nonconformities is acceptable.


    In Spiring (1991) both indices are denoted Cpm.

    If the effect of the assignable cause is linear over the sample window, Spiring suggests to use the variance estimate s²t from a linear regression over the sample window as a measure of the random cause variation. This leads to an estimator of the capability index C*pm given by

    Ĉ*pm = min(USL − T, T − LSL) / ( 3√( ((n − 2)/(n − 1)) s²t + n(x̄t − T)²/(n − 1) ) ).
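    A small R sketch of this estimator, taking s²t as the residual variance from a straight-line fit over the window, in line with the linear trend assumption (the function name and example data are ours):

      # Cpm*-hat for one sample window x (in time order); a linear trend is
      # removed so that the residual variance reflects random causes only.
      cpm_star_hat <- function(x, USL, LSL, T) {
        n     <- length(x)
        fit   <- lm(x ~ seq_len(n))
        s2_t  <- summary(fit)$sigma^2  # residual variance around the trend
        min(USL - T, T - LSL) /
          (3 * sqrt((n - 2) / (n - 1) * s2_t + n * (mean(x) - T)^2 / (n - 1)))
      }

      set.seed(2)
      x <- 0.644 + 0.0002 * (1:10) + rnorm(10, sd = 0.001)  # drifting window
      cpm_star_hat(x, USL = 0.648, LSL = 0.640, T = 0.644)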

    Assuming that the quality characteristic is normally distributed with mean µ and variance σ², the distribution of Ĉ*pm is derived from a non-central χ² distribution with n − 1 degrees of freedom and non-centrality parameter λ = n(µ − T)²/σ², with a density of the form

    f(x) = (2/x) exp( −(1/2)(w(x) + λ) ) ∑_{j=0}^{∞} λ^j w(x)^{(n−1)/2+j} / ( 2^{2j+(n−1)/2} j! Γ((n − 1)/2 + j) ),  where w(x) = (n − 1)C*pm²(1 + λ/n)/x²,

    see Spiring (1991). We note that the distribution of Ĉ*pm depends on λ and C*pm (or alternatively on µ and σ).

    As with the acceptance chart, the process is allowed to continue as long as the capability index is above a prespecified value with a desired level of confidence. The reaction limit is based on a test of the hypothesis that C*pm = C*pm0 and λ = λ0. The value of C*pm0 is directly specified as the minimum acceptable value for the capability index, and λ0 is based on a specification of the sample size n and a value of |µ0 − T|/σ0. The reaction limit LCL for a desired level of confidence α is then determined by the expression

    P_{λ0,C*pm0}(Ĉ*pm < LCL) = 1 − α.

    The procedure at time t is then to calculate Ĉ*pm based on a sample of size n, and if Ĉ*pm > LCL it is concluded that C*pm > C*pm0 with confidence 1 − α.
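    Rather than evaluating the density above, the reaction limit can be approximated by simulating Ĉ*pm under the hypothesized values; a rough R sketch under our stated assumptions (no trend within the window, cpm_star_hat as sketched above):

      # Simulate the null distribution of Cpm*-hat at the hypothesized
      # (mu0, sigma0), i.e. at (lambda0, Cpm0), and take its (1 - alpha)
      # quantile as the reaction limit LCL.
      reaction_limit <- function(n, USL, LSL, T, mu0, sigma0,
                                 alpha = 0.05, reps = 1e4) {
        sim <- replicate(reps, cpm_star_hat(rnorm(n, mu0, sigma0),
                                            USL = USL, LSL = LSL, T = T))
        unname(quantile(sim, probs = 1 - alpha))
      }

      reaction_limit(n = 10, USL = 0.648, LSL = 0.640, T = 0.644,
                     mu0 = 0.645, sigma0 = 0.001)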


    Note that since the distribution of Ĉ*pm depends on both λ and C*pm (or alternatively on µ and σ), the confidence level at time t will depend on the mean and variance of the process at time t. This is not elaborated on in Spiring (1991).

    The hypothesis testing approach suggested in Spiring (1991) can also be used for the indices Cp and Cpk. The control limits can be determined from the density of the estimator and a specified minimum acceptable quality. The densities of the estimators, derived under the assumption of a normal distribution, can be found in Kotz and Lovelace (1998). In Chou et al. (1990) the minimal value of the estimated index (the control limit in Spiring's approach) for a confidence level of 95% is calculated for various values of the minimal acceptable quality from 0.7 to 3.0 and sample sizes from 10 to 400. The hypothesis testing procedure used in Spiring (1991) was originally suggested for Cpm in Chan et al. (1988).

    In Castagliola and Vännman (2007) a method is suggested for monitoring an unstable process by monitoring the family of capability indices Cp(u, v) with an EWMA approach. The family of capability indices Cp(u, v) includes the well-known indices Cp, Cpk and Cpm, see Vännman (1995). It is defined by

    Cp(u, v) = (d − u|µ − T|) / (3√(σ² + v(µ − T)²)),

    where d = (USL − LSL)/2 and u, v are non-negative. Note that Cp = Cp(0, 0), Cpk = Cp(1, 0), and Cpm = Cp(0, 1).
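    In R this family is a one-liner (the function name is ours):

      # Vannman's Cp(u, v) family: Cp = cp_uv(0, 0, ...), Cpk = cp_uv(1, 0, ...),
      # Cpm = cp_uv(0, 1, ...).
      cp_uv <- function(u, v, mu, sigma, USL, LSL, T) {
        d <- (USL - LSL) / 2
        (d - u * abs(mu - T)) / (3 * sqrt(sigma^2 + v * (mu - T)^2))
      }

      cp_uv(0, 1, mu = 0.1, sigma = 1 / 3, USL = 1, LSL = -1, T = 0)  # Cpm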

    The method is a generalization of the approach for monitoring Cp suggested in Castagliola (2001). The quality characteristic is assumed to follow a normal distribution, but the process need not be stable as long as the capability of the process is constant. The assumption of a constant capability separates this method from the methods considered above.

    At fixed time intervals a sample of n observations is taken, and it is assumed that the observations are independent and identically normally distributed. The estimated index Ĉp(u, v) is calculated based on the maximum likelihood estimates of the mean and variance of the sample, for given values of u and v.

    The distribution of Ĉp(u, v) is skewed and depends on the mean µ and variance σ² of the process. To account for this, the index is transformed to obtain a variable that is approximately standard normally distributed, using a two-parameter logarithmic transformation of the form

    Yi = ai + bi log(Ĉp(u, v)i),

    where

    ai = −bi log( E(Ĉp(u, v)i) ( V(Ĉp(u, v)i)/E²(Ĉp(u, v)i) + 1 )^(−1/2) ),
    bi = ( log( V(Ĉp(u, v)i)/E²(Ĉp(u, v)i) + 1 ) )^(−1/2).

    Under the assumption of a constant capability index Cp(u, v) = k0, a fixed set of parameters a and b is determined based on calculation of E(Ĉp(u, v)) and V(Ĉp(u, v)) using µ = 0 and σ = 1/(3k0). The transformed variable is then monitored with an EWMA chart as described in e.g. Montgomery (2005). Assuming a known expected value and standard deviation of Yi, the control limits are given by

    LCL = E(Yi) − K (λ/(2 − λ))^(1/2) √V(Yi),
    UCL = E(Yi) + K (λ/(2 − λ))^(1/2) √V(Yi),

    where λ and K are constants, 0 ≤ λ ≤ 1 and K is usually around 3.
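    A rough R sketch of the transformation and the resulting limits, assuming the moments E(Ĉp(u, v)) and V(Ĉp(u, v)) have already been obtained, e.g. by simulation at µ = 0 and σ = 1/(3k0) (function names and the example values are ours):

      # Two-parameter log transformation of Cp(u, v)-hat and EWMA limits.
      log_transform_ab <- function(E, V) {
        b <- 1 / sqrt(log(V / E^2 + 1))
        a <- -b * log(E / sqrt(V / E^2 + 1))
        c(a = a, b = b)
      }

      ewma_limits <- function(EY, VY, lambda = 0.1, K = 3) {
        half <- K * sqrt(lambda / (2 - lambda) * VY)
        c(LCL = EY - half, UCL = EY + half)
      }

      ab <- log_transform_ab(E = 1.35, V = 0.02)  # placeholder moments
      Y  <- ab["a"] + ab["b"] * log(1.30)         # one transformed index value
      ewma_limits(EY = 0, VY = 1)                 # Y is approximately N(0, 1)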

    The properties of the method are studied in Castagliola and Vännman (2008). They consider different indices and sample sizes in the range 7 to 60, under the assumption of changes in the capability index from 4/3 to 1 and from 4/3 to 5/6. It is assumed that the process remains at the new capability level.


    Chapter 2

    Elaboration on the Papers

    2.1 Testing for Sphericity in Phase I Control Chart Applications

    The paper Windfeldt and Bisgaard (2009) is written for practitioners as a tool to test the assumptions behind the x̄ and R charts of variance homogeneity and independence of observations within subgroups. In the paper it is suggested to use the test for distributional sphericity, but due to the target audience being practitioners, the theory behind this test is not described in detail. Below we give a more detailed derivation of the test for sphericity. But first we describe the motivation and statistical interest for the paper.

    2.1.1 Motivation and Statistical Interest

    Classical control charts like the x̄ and R charts are widely known and conceptually simple, and are therefore the natural first choice in a practical setting. Methods for setting up these charts based on a number of samples from the process have traditionally been kept simple, and consist mainly of calculating the control limits and plotting the points to see if they are inside these limits. But today computers and statistical software are an integrated part of the production environment, and it is therefore possible to introduce more advanced methods during Phase I. These methods can help to investigate the necessary assumptions when using control charts and help to gain a better process understanding.

    The effect of non-normality has been investigated by several authors, see Montgomery (2005) for an overview, and the x̄ chart is reasonably robust to departures from normality. Tools for checking the assumption of normality, like the normal quantile plot, are readily available. The assumption of temporal independence has also received a fair amount of attention in recent years, and tools like the autocorrelation function are available for checking this assumption, see Montgomery (2005).

    The assumption of independence and variance homogeneity within samples has received much less attention, and it is this kind of violation of the assumptions we are concerned with in Windfeldt and Bisgaard (2009). A necessary condition for using the suggested method is that the observations within the sample have a consistent order that is the same across samples. It is the purpose of the x̄ chart to catch excess variability between samples and the purpose of the R chart to catch excess variability within samples. Neither of these charts is designed to detect correlation and variance inhomogeneity within the samples. If the assumption of independence and variance homogeneity is violated, the control chart may well perform poorly in the subsequent monitoring phase, either causing excessive false alarms, not sounding valid alarms, or reacting slowly to out-of-control situations. In other words, the assumed properties of the control chart may be misleading, since they are derived from the independence, equal variance and normal distribution assumptions.

    The classical Shewhart charts are widely used in industry, and Novo Nordisk A/S is no exception. Knowledge of whether the processes violate the assumption of independence and variance homogeneity within samples is therefore of interest to Novo Nordisk A/S, to be able to use the right monitoring scheme for their processes. The production at Novo Nordisk A/S ranges over fundamentally different types of processes. In the production involving devices, the quality characteristics are mainly geometric dimensions. Here the knowledge that the assumption of independence and variance homogeneity within samples is violated would probably lead to an investigation and a corrective action. In the production of pharmaceuticals the quality characteristics have a different nature, and the presence of correlation within the samples could very well be an inherent part of the process; in this case another monitoring scheme than the traditional Shewhart chart might therefore be relevant.

    2.1.2 The Test for Sphericity

    Let X1, X2, . . . , Xm be independent multivariate normally distributed with mean µ and covariance Σ, i.e. Xi ∼ Nn(µ, Σ). We wish to test the hypothesis

    H0 : Σ = λIn  against  H1 : Σ ≠ λIn,

    where λ > 0. As described in Windfeldt and Bisgaard (2009), the likelihood ratio test, also denoted the test for sphericity, rejects H0 at a significance level of α if

    W = det S / ( (1/n) tr S )ⁿ ≤ kα,

    where S is the sample covariance matrix and kα is chosen so that the significance is α.
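    Computing W in R takes a few lines (the function name is ours; x holds the m subgroups as rows, each in its consistent within-subgroup order):

      # Likelihood ratio statistic for sphericity:
      # W = det(S) / (tr(S)/n)^n, with S the sample covariance matrix.
      sphericity_W <- function(x) {
        S <- cov(x)
        n <- ncol(x)
        det(S) / (sum(diag(S)) / n)^n
      }

      set.seed(3)
      x <- matrix(rnorm(50 * 4), nrow = 50)  # m = 50 subgroups of size n = 4
      sphericity_W(x)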

    To derive the likelihood ratio statistic we consider the likelihood function

    L(µ, Σ) = (2π)^(−nm/2) (det Σ)^(−m/2) exp( −(1/2) ∑_{i=1}^{m} (Xi − µ)ᵗ Σ⁻¹ (Xi − µ) ).   (2.1.1)


    We have that

    ∑_{i=1}^{m} (Xi − µ)ᵗ Σ⁻¹ (Xi − µ)
      = ∑_{i=1}^{m} (Xi − X̄)ᵗ Σ⁻¹ (Xi − X̄) + ∑_{i=1}^{m} (X̄ − µ)ᵗ Σ⁻¹ (X̄ − µ)
          + 2 ∑_{i=1}^{m} (X̄ − µ)ᵗ Σ⁻¹ (Xi − X̄)
      = ∑_{i=1}^{m} (Xi − X̄)ᵗ Σ⁻¹ (Xi − X̄) + m (X̄ − µ)ᵗ Σ⁻¹ (X̄ − µ)
      = ∑_{i=1}^{m} tr( (Xi − X̄)ᵗ Σ⁻¹ (Xi − X̄) ) + m (X̄ − µ)ᵗ Σ⁻¹ (X̄ − µ)
      = ∑_{i=1}^{m} tr( Σ⁻¹ (Xi − X̄)(Xi − X̄)ᵗ ) + m (X̄ − µ)ᵗ Σ⁻¹ (X̄ − µ)
      = tr( Σ⁻¹ A ) + m (X̄ − µ)ᵗ Σ⁻¹ (X̄ − µ),

    where A = (m − 1)S, and the cross term vanishes since ∑_{i=1}^{m} (Xi − X̄) = 0. We can therefore rewrite (2.1.1) as

    L(µ, Σ) = (2π)^(−nm/2) (det Σ)^(−m/2) exp( tr( −(1/2) Σ⁻¹ A ) − (m/2) (X̄ − µ)ᵗ Σ⁻¹ (X̄ − µ) ).

    The likelihood ratio statistic is given by

    Λ = sup_{µ∈Rⁿ, λ>0} L(µ, λIn) / sup_{µ∈Rⁿ, Σ>0} L(µ, Σ).

    The maximum of the denominator is obtained when the parameters equal the maximum likelihood estimates, i.e.

    µ̂ = (1/m) ∑_{i=1}^{m} Xi  and  Σ̂ = (1/m) ∑_{i=1}^{m} (Xi − µ̂)(Xi − µ̂)ᵗ = (1/m) A.

    By inserting this in the denominator we get

    sup_{µ∈Rⁿ, Σ>0} L(µ, Σ) = (2π)^(−nm/2) m^(nm/2) exp(−nm/2) (det A)^(−m/2).


    For the numerator we get

    sup_{µ∈Rⁿ, λ>0} L(µ, λIn)
      = (2π)^(−nm/2) sup_{µ∈Rⁿ, λ>0} { λ^(−nm/2) exp( tr( −(1/(2λ)) A ) − (m/(2λ)) (X̄ − µ)ᵗ(X̄ − µ) ) }
      = (2π)^(−nm/2) sup_{λ>0} { λ^(−nm/2) exp( tr( −(1/(2λ)) A ) ) }
      = (2π)^(−nm/2) ( (1/(nm)) tr A )^(−nm/2) exp(−nm/2).

    To see the last equality, note that the derivative of

    g(λ) = λ^(−nm/2) exp( tr( −(1/(2λ)) A ) )

    is

    g′(λ) = −(1/2) exp( −(1/2) tr A / λ ) λ^(−nm/2 − 2) (λnm − tr A),

    so g is maximized at λ = tr A/(nm). The likelihood ratio statistic is therefore

    Λ = ( (1/(nm)) tr A )^(−nm/2) exp(−nm/2) / ( m^(nm/2) exp(−nm/2) (det A)^(−m/2) )
      = ( det A / ( (1/n) tr A )ⁿ )^(m/2).

    The likelihood ratio test rejects the hypothesis H0 if the likelihood ratio statistic is small, or equivalently if

    W = Λ^(2/m) = det A / ( (1/n) tr A )ⁿ = det S / ( (1/n) tr S )ⁿ

    is small.

    To determine kα we need to know the distribution of W under H0. As noted in Muirhead (1982), the exact distribution of W is extremely complicated. An expression for the exact distribution under H0 can be found in Nagarsenker and Pillai (1973), who also provide exact 1% and 5% quantiles for W.


    According to general likelihood theory, the asymptotic distribution of −2 log Λ is χ²_f, where the degrees of freedom f is the number of independent parameters in the full parameter space minus the number of independent parameters under the null hypothesis, hence f = n(n + 1)/2 − 1. As described in Muirhead (1982), a better approximation can be found by using the work of Box (1949) and is given by

    P( −2 ((m − 1)/m) ρ log Λ ≤ X ) = P( χ²_f ≤ X ) + ω2 ( P( χ²_{f+4} ≤ X ) − P( χ²_f ≤ X ) ),

    where

    ω2 = (n − 1)(n − 2)(n + 2)(2n³ + 6n² + 3n + 2) / ( 288 n² (m − 1)² ρ² ),
    ρ = 1 − (2n² + n + 2) / ( 6n(m − 1) ).

    The exact 5% quantiles for W and the corresponding values from the general χ² approximation and the Box approximation are compared for n = 4, 5, and 6 in Figures 2.1 to 2.3. We can see that the Box approximation works well for even moderate sample sizes; the differences are of the order 10⁻⁴. The general approximation is of limited use with the small sample sizes, m ≥ 25, in the application suggested in Windfeldt and Bisgaard (2009). As can be seen from the tabulated values in Nagarsenker and Pillai (1973), the Box approximation is best for small values of n.
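    A sketch of the Box approximation in R (function names are ours; note that with Λ = W^(m/2), the statistic −2((m − 1)/m)ρ log Λ equals −(m − 1)ρ log W, and the critical value kα is found by numerically inverting the approximate distribution function):

      # Box approximation to the null distribution of W (Muirhead 1982).
      box_pvalue <- function(W, n, m) {
        f   <- n * (n + 1) / 2 - 1
        rho <- 1 - (2 * n^2 + n + 2) / (6 * n * (m - 1))
        om2 <- (n - 1) * (n - 2) * (n + 2) * (2 * n^3 + 6 * n^2 + 3 * n + 2) /
               (288 * n^2 * (m - 1)^2 * rho^2)
        X   <- -(m - 1) * rho * log(W)
        Fx  <- pchisq(X, f) + om2 * (pchisq(X, f + 4) - pchisq(X, f))
        1 - Fx  # small W gives large X, so this is the p-value of the test
      }

      # Critical value k_alpha for W, by root finding on the p-value:
      box_crit <- function(n, m, alpha = 0.05) {
        uniroot(function(w) box_pvalue(w, n, m) - alpha,
                c(1e-8, 1 - 1e-8))$root
      }

      box_pvalue(sphericity_W(x), n = 4, m = 50)  # x from the sketch above
      box_crit(n = 4, m = 50)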


    [Figure omitted: lower 5% percentage points for W against the number of subgroups, subgroup size 4.]

    Figure 2.1: Graph of the 5% quantiles for W for n = 4. The solid line shows the exact values while the dotted line is the general χ² approximation. The Box approximation is so close to the exact values that it is not visible.

    [Figure omitted: lower 5% percentage points for W against the number of subgroups, subgroup size 5.]

    Figure 2.2: Graph of the 5% quantiles for W for n = 5. The solid line shows the exact values while the dotted line is the general χ² approximation. The Box approximation is so close to the exact values that it is not visible.


    [Figure omitted: lower 5% percentage points for W against the number of subgroups, subgroup size 6.]

    Figure 2.3: Graph of the 5% quantiles for W for n = 6. The solid line shows the exact values while the dotted line is the general χ² approximation. The Box approximation is so close to the exact values that it is not visible.

    2.2 Using Predictive Risk for Process Control

    In this section we elaborate on different issues regarding the method for process monitoring presented in Plante and Windfeldt (2009). We begin by briefly describing the method; the reader is referred to the paper for further details. We then describe the motivation behind the method and its statistical interest, and suggest ways to address the uncertainty of the estimate. A practical approach for setting up the chart is considered in Section 2.2.4. Issues regarding the sliding window approach used in the method are described in Section 2.2.5, and finally, in Section 2.2.6, we relate the method to alternative methods.

    2.2.1 The Method

    Let X1, X2, . . . be a possibly multivariate characteristic that we wish to monitor for quality, with specifications prescribing that the values of X must be in the set S. As described in the paper Plante and Windfeldt (2009), we suggest to use a moving window of data to fit a parametric statistical model that estimates the probability that the next produced item fails to meet the specifications. If the estimated probability exceeds a predetermined threshold, the chart should signal. More specifically, we suggest that before the tth observation is collected, we use a window of n data points Xt−n, . . . , Xt−1 to infer the distribution F̂t for the next item to be produced. Using the inferred distribution we estimate the probability that the next item will be outside the specifications, that is, P(Xt ∉ S). In the paper we suggest to use the maximum likelihood estimate θ̂t based on the data in the window to estimate Ft. We monitor the process at time t by evaluating

    1 − ∫_S dFt(x | θ̂t)

    and comparing it to a predetermined threshold α.

    Further, we suggest to plot the estimates of the parameters from the statistical model to gain valuable insight into the process and to aid in failure investigations.
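    As a concrete illustration, a minimal R sketch of this monitoring scheme for a univariate normal model with S = [LSL, USL] (the function name and data are ours; the method in the paper is more general):

      # Predictive risk monitoring: slide a window of n points over the
      # series, fit a normal model by maximum likelihood, and estimate
      # P(next item outside [LSL, USL]); signal when it exceeds alpha.
      predictive_risk <- function(x, n, LSL, USL) {
        sapply((n + 1):length(x), function(t) {
          w  <- x[(t - n):(t - 1)]
          mu <- mean(w)
          s  <- sqrt(mean((w - mu)^2))  # ML estimate of sigma
          pnorm(LSL, mu, s) + 1 - pnorm(USL, mu, s)
        })
      }

      set.seed(4)
      x     <- c(rnorm(200, 0, 1 / 6), rnorm(50, 0.5, 1 / 6))  # late mean shift
      risk  <- predictive_risk(x, n = 50, LSL = -1, USL = 1)
      alarm <- which(risk > 0.001)  # indices where the chart would signal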

    2.2.2 Motivation and Statistical Interest

    The motivation behind the method is found in the production environment that was the origin of my research. For a majority of the processes in this production, the sampling scheme is 100% inspection. This high-frequency measuring has helped the engineers to get a very good understanding of the processes and the factors that influence them. So far the majority of the processes have been monitored on-line by monitoring the mean and variance using mainly the Shewhart x̄ and R charts. Some modifications to the standard limits of the x̄ chart have been made in some cases to make them operational. Various improvements have been conducted, and the majority of the processes now produce very few items that do not meet the specifications. From the process engineers' perspective many of the processes are now stable, in the sense that the causes of the remaining variation have been determined and deemed either impossible to remove or too resource demanding to remove from an economic and quality perspective. The processes, however, are not in statistical control. Using the control charts applied so far results in a lot of false alarms, which is only natural since the processes do not live up to the stability assumptions behind the charts. Furthermore, given the high number of processes monitored, the process engineers would like to relax the surveillance on the processes with high performance and focus more on the ones that need improvement. The engineers have a high degree of process understanding and would like to "transfer" some of this understanding into the charts, and at the same time keep the charts simple for the operator. Since all items produced are measured, there is no risk of accepting an item that does not meet the specifications or rejecting an item that does meet specifications.

The majority of the literature on control charts for monitoring processes is about detecting special causes and maintaining stability. As described by Box and Paniagua-Quinones (2007), detection of a special cause can have three desirable results:

• the cause can be identified and eliminated,

• it may be possible to improve the process by finding the best level for the identified cause,

• the variance of the process can be reduced.

It is well recognized that using control charts can help improve a process and gain process knowledge. It is also generally accepted that when the process has been improved so that it has a high performance, it can be useful to relax the surveillance by using an acceptance chart, thereby allowing the process to move. As described in Section 1.4, this will allow the mean of the process to vary as long as the fraction of nonconformities is acceptable.

To use the acceptance chart, the process has to be normally distributed with constant variance, and a sample is assumed to consist of independent, identically (normally)¹ distributed observations. Similar assumptions apply when monitoring the most well-known capability indices, except that the assumption of constant variance is relaxed. Traditionally, sampling from a process consisted of small samples collected sparsely in time, making the assumptions behind the acceptance chart reasonable in many applications. Today many production environments have changed to high-frequency measurements close together in space and time. Not that the traditional type of sampling does not still exist – for example in chemical laboratories, where measuring can be very time consuming – but this new type of sampling is becoming more and more widespread. This does not only challenge the assumptions of the above-mentioned methods but also offers an opportunity to gain and utilize valuable insight into the process and to react to different things than before. The method presented in Plante and Windfeldt (2009) is a flexible method for monitoring a wide range of processes. It is designed for a high-frequency measurement environment and allows small process instabilities with no practical importance. Further, it allows the knowledge gained by the process engineer, through exploratory analysis and the use of regular control charts, to be utilized in a statistical model of the process. The complexity is hidden from the operator, who only has to consider one type of chart no matter the nature of the process, which can be an advantage in a production environment with many processes. The process engineer can at the same time gain valuable insight into the process by utilizing the diagnostic plots.

¹An exception to the normality assumption is considered in Chou et al. (2005).

    2.2.3 Uncertainty

The method in its current form does not take into consideration the uncertainty of the estimated parameters. One way to account for this is to use the bootstrap to provide a confidence interval for $\hat{P}(X_t \notin S)$ rather than a point estimate. Another way would be to compensate for the uncertainty by an appropriate choice of threshold. Since the sample size is constant, the sampling error would be more or less constant.
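As a sketch of the first idea, a parametric bootstrap under a fitted normal model could look as follows. The function name, the percentile interval, and the normal model are illustrative assumptions; the thesis does not prescribe a particular bootstrap scheme.

```python
import numpy as np
from scipy import stats

def bootstrap_ci(window, lsl, usl, n_boot=2000, level=0.95, seed=None):
    """Percentile parametric-bootstrap confidence interval for the
    estimated probability that the next item falls outside [lsl, usl]."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.mean(window), np.std(window)   # MLEs on the window
    n = len(window)
    p_hat = np.empty(n_boot)
    for b in range(n_boot):
        x = rng.normal(mu, sigma, size=n)         # simulate from fitted model
        m, s = np.mean(x), np.std(x)
        p_hat[b] = stats.norm.cdf(lsl, m, s) + stats.norm.sf(usl, m, s)
    lo, hi = np.quantile(p_hat, [(1 - level) / 2, (1 + level) / 2])
    return lo, hi
```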

The uncertainty under a given model can be found by simulation. As an example, we have used simulation to determine the uncertainty in the simple case of a normal distribution for different sample sizes. The specification limits are given by LSL = −1 and USL = 1, the mean value µ is equal to 0, and the standard deviation σ is equal to 1/6. The result of the simulation is depicted in Figure 2.4.
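A minimal sketch of such a simulation is given below; the number of replications used for Figure 2.4 is not stated in the text, so the value here is an assumption.

```python
import numpy as np
from scipy import stats

mu, sigma, lsl, usl = 0.0, 1/6, -1.0, 1.0
true_p = stats.norm.cdf(lsl, mu, sigma) + stats.norm.sf(usl, mu, sigma)
rng = np.random.default_rng(0)
for n in (20, 50, 100, 250, 500):       # window sizes as in Figure 2.4
    est = np.empty(1000)                # 1000 replications (assumed)
    for b in range(1000):
        x = rng.normal(mu, sigma, size=n)
        m, s = np.mean(x), np.std(x)    # MLEs for each simulated window
        est[b] = stats.norm.cdf(lsl, m, s) + stats.norm.sf(usl, m, s)
    print(n, true_p, np.quantile(est, [0.025, 0.975]))
```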

    2.2.4 The Setup Phase

The setup phase of the chart is not described in much detail in Plante and Windfeldt (2009). The case study section in the paper is meant to give an understanding of how such a setup could be done. The setup phase will naturally require good cooperation between a statistician and the process engineers. A practical approach, based on a control chart setup approach suggested by Roes and Does (1995), is given below.

    1. Monitor the process intensively for a period of time.

[Figure 2.4: five panels, one for each window size n = 20, 50, 100, 250, and 500, showing the spread of the estimates of $\hat{P}(X_t \notin S)$ on a logarithmic scale.]

Figure 2.4: Uncertainty of $\hat{P}(X_t \notin S)$ with a sample size of n = 20, 50, 100, 250, and 500. The model is a normal distribution with mean µ = 0 and standard deviation σ = 1/6. The specification limits are LSL = −1 and USL = 1, which results in $P(X_t \notin S) = 1.97 \times 10^{-9}$.

2. Construct an appropriate model based on the nature of the process and supported by the data.

3. Determine which sources of variation are removable and which are not, based on practical, quality, and economic considerations. Adjust the model to account for any optimizations made to the process.

4. Determine which parameters it is necessary to estimate and which can be assumed to be fixed. Determine a good estimate for the parameters that should be fixed. Consider whether there should be a diagnostic chart for the fixed parameters, or another type of check to assure that they remain constant.

5. Determine an appropriate threshold for the probability of the next item not meeting specifications. This could be based on the historical performance of the process or other requirements.

6. Determine an appropriate window size in consideration of the uncertainty of the estimated probability under the model. Consider whether the threshold should be adjusted to account for the uncertainty.

7. Set up the chart for risk and the diagnostic plots. It is recommended to use the diagnostic charts to gain further insight into the process and assist in failure investigations.

A high-performance process has often been monitored and improved using regular control charts before it is decided to relax the surveillance, and the process will therefore be well known. This will significantly reduce the work required in the setup phase.

    2.2.5 Issues Regarding the Sliding Window Approach

In Plante and Windfeldt (2009) it is suggested to use a sliding window approach. This means that when an abrupt change occurs, the data window will at some point include observations from both before and after the change. Even if one decides to use disjoint data windows, the large window size would make this situation likely. When the window contains observations from both before and after the change, the estimated parameters will not necessarily reflect the process after the change. There can be two possible problems with this. The first is that an acceptable change causes a signal because the parameter estimates give a falsely high probability of the next item being outside specifications. In the process used in the case study, this situation occurs after periods of standstill, because the machine cools down. A way to handle this situation is to flush the window and restart after longer stops.

The second problem is that an unacceptable change can fail to cause a signal because the parameter estimates give a falsely low probability of the next item being outside specifications. Depending on the process, it can in the worst case take up to the size of the window before the chart signals. If the process is so bad that the measured items are outside specifications, it would be natural to have a mechanism that stops the process immediately. Even if the chart does not signal, the change will be visible both as an increase in $\hat{P}(X_t \notin S)$ and as a change in the diagnostic plots, as the data window is filled with observations from after the change. Further, the high-frequency measuring that is required with this method would probably also mean that a window size of 50 items is only a small percentage of a day's production. In the process in the case study, observations come in every three seconds, and a normal day's production is around 10000 items. So, in the worst-case scenario, it would take approximately two and a half minutes to detect the change, and 50 items correspond to approximately 0.5% of a day's production.

    2.2.6 Alternative Methods

Alternative methods for monitoring high-performance processes are described in Section 1.4. The method suggested in Plante and Windfeldt (2009) is only applicable in a high-frequency measurement environment, so we will restrict our attention to such an environment.

Whether to use capability indices or the fraction of nonconformities as a measure of process performance is a decision that depends on the given situation. The most well-known indices rely on the assumption of normality and independence between measurements, but even when these assumptions are met, monitoring capability indices is generally not equivalent to monitoring the fraction of nonconformities. In many industrial settings, the process performance of interest is the fraction of nonconformities. Traditionally this measure has been difficult and time consuming to calculate compared to a capability index. Today the widespread use of computers and statistical software has removed this difficulty.

The values of the capability indices are, though, still more intuitive to many than the value of the fraction of nonconformities. Yet even though the scale of capability indices seems more intuitive and easier to communicate, there is still an issue of ambiguity of the index values, caused by the confounding of different characteristics of a process, i.e. mean, standard deviation, closeness to target, etc. Using the capability index Cpm, as suggested by Spiring (1991), has the advantage of being able to consider a target value. But the properties of the method in this situation seem to need elaboration.
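The confounding can be made concrete with a small example: under a normal model, two processes with identical Cp can have very different fractions of nonconformities when one of them is off-center. The helper names below are illustrative.

```python
from scipy import stats

def cp(lsl, usl, sigma):
    """Capability index Cp = (USL - LSL) / (6 sigma)."""
    return (usl - lsl) / (6 * sigma)

def fraction_nonconforming(lsl, usl, mu, sigma):
    """P(X outside [LSL, USL]) under a normal model."""
    return stats.norm.cdf(lsl, mu, sigma) + stats.norm.sf(usl, mu, sigma)

# Same Cp = 2 in both cases, but shifting the mean from 0 to 0.5 moves the
# fraction of nonconformities from about 2e-9 to about 1.3e-3:
for mu in (0.0, 0.5):
    print(cp(-1, 1, 1/6), fraction_nonconforming(-1, 1, mu, 1/6))
```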

The acceptance chart introduced by Freund (1957) is designed to monitor the mean of a univariate process, assuming that the process variance remains constant. It is further assumed that the observations within and between subgroups are independent. In this setting the theory behind the acceptance chart is simple, and the properties of the method, in terms of type I and type II error, are well known. Generalizations have appeared for some types of univariate non-normal processes and for bivariate normal processes, still under the assumption of constant process variance and independence of observations. The method considered in Spiring (1991) for monitoring capability indices depends on the same assumptions as the acceptance chart, except that the assumption of constant variance is relaxed. No generalization of this approach to non-standard processes seems to have appeared, and generalizing it to processes other than the univariate normal with independent observations does not appear straightforward. Applying the method in Plante and Windfeldt (2009) to non-standard and/or multivariate processes is straightforward. So, in terms of generalization, the method suggested in Plante and Windfeldt (2009) has an advantage. But there is still work to be done on exploring the properties of the method, including the issue of uncertainty of the estimates.

2.3 Assessing the Impact of Missing Values on Quality Measures of an Industrial Process

The paper Windfeldt and Hartvig (2010) addresses a problem of missing values introduced by an advanced measuring system and the subsequent data handling. The primary tool in our solution to the problem is the EM algorithm. Below we formulate the EM algorithm in general terms and show an essential monotonicity property of the algorithm. We then consider the EM algorithm under the model of grouped normal data, which is the model used in the paper. But we begin by describing the motivation behind the paper and its statistical interest.

    2.3.1 Motivation and Statistical Interest

The motivation for the paper is a problem of missing values discovered during an investigation of how to improve the process control system of the production. To ensure the quality of an assembly process, the height of the items after assembly is measured. The resolution of the measurement is three digits, but from a visual display of the data it was discovered that not all values were represented. This was discussed among the production engineers, and it was concluded that the measurements were transformed during the measuring process, leading to an unintentional grouping of the data. An investigation into the problem was initiated with the purpose of finding the likely source of the problem and determining whether the quality measures of the process were seriously affected.

The paper Windfeldt and Hartvig (2010) is a case study of the problem of assessing the impact of the missing values on the quality measures of the process. It describes the problem, explains how we solved it, and what the conclusion was. It does so in a way that others with a similar problem can see how we addressed the issue and use the software we developed. Since the problem we encountered is a general problem related to advanced measuring and the subsequent data handling, it is relevant in many other industrial settings.

2.3.2 The EM Algorithm

The EM algorithm is an iterative approach for obtaining the maximum likelihood estimates in incomplete data problems. The algorithm utilizes the reduced complexity of the maximum likelihood estimation for the complete data.

Let $Y$ be a random variable with density function $g_\theta(y)$, where $\theta \in \Theta$, representing the observed incomplete data $y \in \mathcal{Y}$. Further, let $X$ be a random variable with density function $f_\theta(x)$, representing the unobservable complete data $x \in \mathcal{X}$. The incomplete and complete data are connected by a one-to-many function $t : \mathcal{X} \to \mathcal{Y}$. Therefore
$$g_\theta(y) = \int_{t^{-1}(y)} f_\theta(x) \, dx.$$

The EM algorithm utilizes the simplicity of the likelihood of the complete observation $X$ to find the maximum likelihood estimate based on the incomplete observation $Y$. Since the complete observation is unobservable, the expected complete log-likelihood function given the observed value $y$, taken with respect to the current value of $\theta$, is maximized instead. More specifically, let $\theta_0$ be an initial value of the parameter $\theta \in \Theta$. The $(k+1)$th iteration of the EM algorithm is then:

E-step. Calculate $E_{\theta_k}(\log L_x(\theta) \mid y)$.

M-step. Find $\theta_{k+1} \in \Theta$ such that
$$E_{\theta_k}(\log L_x(\theta_{k+1}) \mid y) \geq E_{\theta_k}(\log L_x(\theta) \mid y) \quad \text{for all } \theta \in \Theta.$$

In case of convergence of the likelihood values $\{L_y(\theta_k)\}_{k \in \mathbb{N}}$, the steps are continued until the difference $L_y(\theta_{k+1}) - L_y(\theta_k)$ is small.
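The iteration can be expressed as a small generic driver. This skeleton is illustrative (the callback-based interface is an assumption, not the software accompanying the paper), but it makes the stopping rule above explicit.

```python
from typing import Callable, TypeVar

Theta = TypeVar("Theta")

def em(theta0: Theta,
       e_step: Callable[[Theta], Callable[[Theta], float]],
       m_step: Callable[[Callable[[Theta], float]], Theta],
       incomplete_loglik: Callable[[Theta], float],
       tol: float = 1e-10, max_iter: int = 1000) -> Theta:
    """Generic EM driver. e_step(theta_k) returns the function
    Q(theta) = E_{theta_k}(log L_x(theta) | y); m_step(Q) returns a
    maximizer of Q. Iterate until the incomplete log-likelihood, which
    the monotonicity argument below shows is non-decreasing, stabilizes."""
    theta, ll = theta0, incomplete_loglik(theta0)
    for _ in range(max_iter):
        theta = m_step(e_step(theta))     # one E-step followed by one M-step
        ll_new = incomplete_loglik(theta)
        if ll_new - ll < tol:             # difference is >= 0 by monotonicity
            break
        ll = ll_new
    return theta
```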

An essential property of the EM algorithm is that the incomplete log-likelihood function does not decrease after an iteration of the EM algorithm, i.e. $L_y(\theta_{k+1}) \geq L_y(\theta_k)$. To see this, we look at the density of the incomplete observation
$$g_\theta(y) = \frac{f_\theta(x)}{h_\theta(x \mid y)},$$
where $h_\theta(x \mid y)$ is the conditional density of $X$ given $Y = y$. It follows that
$$\log L_y(\theta) = \log L_x(\theta) - \log h_\theta(x \mid y). \qquad (2.3.1)$$

Taking expectations with respect to the conditional distribution given $y$, we get
$$\log L_y(\theta) = E_{\theta_k}(\log L_x(\theta) \mid y) - E_{\theta_k}(\log h_\theta(x \mid y) \mid y).$$

Hence,
$$\log L_y(\theta_{k+1}) - \log L_y(\theta_k) = \big( E_{\theta_k}(\log L_x(\theta_{k+1}) \mid y) - E_{\theta_k}(\log L_x(\theta_k) \mid y) \big) - \big( E_{\theta_k}(\log h_{\theta_{k+1}}(x \mid y) \mid y) - E_{\theta_k}(\log h_{\theta_k}(x \mid y) \mid y) \big).$$

The first difference is non-negative because of how $\theta_{k+1}$ is chosen in the M-step of the algorithm. For the second difference we have, for any $\theta \in \Theta$, that
$$E_{\theta_k}(\log h_\theta(x \mid y) \mid y) - E_{\theta_k}(\log h_{\theta_k}(x \mid y) \mid y) = E_{\theta_k}\!\left( \log \frac{h_\theta(x \mid y)}{h_{\theta_k}(x \mid y)} \,\Big|\, y \right) \leq \log E_{\theta_k}\!\left( \frac{h_\theta(x \mid y)}{h_{\theta_k}(x \mid y)} \,\Big|\, y \right) = \log \int h_\theta(x \mid y) \, dx = 0,$$
where the inequality is a consequence of Jensen's inequality and the concavity of the logarithm.

2.3.3 The EM Algorithm for Grouped Observations

In this section we show how to use the EM algorithm on grouped normal observations.

Let $X$ be a normally distributed random variable with mean $\mu$ and variance $\sigma^2$, and assume that the sampling space is divided into disjoint intervals $I_i$, where $i = 1, \ldots, r$. Assume that $X_1, \ldots, X_n$ are $n$ independent observations of $X$, but that only the number of observations $n_i$ in $I_i$ is known. The incomplete observation $y = (n_1, \ldots, n_r)$ will therefore follow a multinomial distribution with $n$ draws from $r$ categories, where the probability of being in category $i$ is
$$P(X \in I_i) = \int_{I_i} \frac{1}{\sqrt{2\pi\sigma^2}} \, e^{-\frac{1}{2\sigma^2}(x - \mu)^2} \, dx.$$

The incomplete likelihood function is therefore
$$L_Y(\mu, \sigma^2) \propto \prod_{i=1}^{r} P(X \in I_i)^{n_i},$$
and the log-likelihood function for the incomplete observations is therefore
$$l_Y(\mu, \sigma^2) \propto \sum_{i=1}^{r} n_i \log P(X \in I_i).$$

The complete observation is $x = (x_1, \ldots, x_n)$, and the likelihood function for the complete observation is
$$L_X(\mu, \sigma^2) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma^2}} \, e^{-\frac{1}{2\sigma^2}(x_i - \mu)^2}.$$
The log-likelihood function for the complete observation is therefore
$$l_X(\mu, \sigma^2) = -\frac{n}{2}\log(2\pi) - \frac{n}{2}\log\sigma^2 - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i - \mu)^2.$$

The E-step of the $(k+1)$th iteration consists of finding the expected value of the complete log-likelihood function given $y$, i.e.
$$E_{\mu_k, \sigma_k^2}(l_X(\mu, \sigma^2) \mid y) = -\frac{n}{2}\log(2\pi) - \frac{n}{2}\log(\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{r} n_i \, E_{\mu_k, \sigma_k^2}\big((X - \mu)^2 \mid X \in I_i\big).$$
We have
$$E_{\mu_k, \sigma_k^2}\big((X - \mu)^2 \mid X \in I_i\big) = E_{\mu_k, \sigma_k^2}(X - \mu \mid X \in I_i)^2 + V_{\mu_k, \sigma_k^2}(X - \mu \mid X \in I_i) = ([X_i]_k - \mu)^2 + [X_i^2]_k - [X_i]_k^2,$$
where
$$[X_i]_k = E_{\mu_k, \sigma_k^2}(X \mid X \in I_i), \qquad [X_i^2]_k = E_{\mu_k, \sigma_k^2}(X^2 \mid X \in I_i).$$

The expected value of the complete log-likelihood function given $y$ is therefore
$$E_{\mu_k, \sigma_k^2}(l_X(\mu, \sigma^2) \mid y) = -\frac{n}{2}\log(2\pi) - \frac{n}{2}\log(\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{r} n_i \big( ([X_i]_k - \mu)^2 + [X_i^2]_k - [X_i]_k^2 \big). \qquad (2.3.2)$$

In the M-step of the $(k+1)$th iteration we find $\mu_{k+1}$ and $\sigma_{k+1}^2$ by maximizing (2.3.2) with respect to $\mu$ and $\sigma^2$. This is equivalent to maximizing the log-likelihood function for $X$, except for the extra term $[X_i^2]_k - [X_i]_k^2$. This term is invariant when maximizing with respect to $\mu$ but must be taken into account when maximizing with respect to $\sigma^2$. We therefore get
$$\mu_{k+1} = \frac{1}{n}\sum_{i=1}^{r} n_i [X_i]_k,$$
$$\sigma_{k+1}^2 = \frac{1}{n}\sum_{i=1}^{r} n_i \big( ([X_i]_k - \mu_{k+1})^2 + [X_i^2]_k - [X_i]_k^2 \big).$$


Chapter 3

Conclusion and Future Work

It has been inspiring to work closely together with process engineers and to see their enthusiasm and dedication to process improvement. I have learned that the interest in applying statistical methods for improving process performance is readily present. But I have also learned that the modern production environment, with its high-frequency measurements and advanced measuring techniques, is a challenge for the people working with SPC in practice. With the proliferation of computers and statistical software in production, I believe that the same things that seem a challenge have given an opportunity to gain valuable insight into our processes like never before. This makes it an important task to provide the necessary tools for practitioners to investigate and understand the observations they obtain.

More specific directions for future work in relation to the work presented in the thesis are the following. In connection with the processes with grouped observations, it could be relevant to consider using control charts specific to grouped observations; see Steiner et al. (1996) and Steiner (1998). The theoretical properties of the method presented in Plante and Windfeldt (2009) depend on the method of prediction and the model of the process. Further exploration of these properties is needed. We have considered using the MLE for prediction of the density of the next observation, mainly because of the simplicity of the approach. As mentioned earlier, this method does not take the uncertainty of the parameter estimates into consideration. Further work on the ideas to handle uncertainty described in Section 2.2.3 is needed. It could also be relevant to consider other ways to predict. Furthermore, it could be interesting to explore the possibilities of doing a continuous check of the underlying model assumptions by considering the residuals of the fitted model.

II

Extending Phase I

Chapter 4

Testing for Sphericity in Phase I Control Chart Applications

When using x̄−R charts, it is a crucial assumption that the observations within samples are independent and have common variance. However, this assumption is almost never checked. We propose to use the samples gathered during the phase I study and the test for distributional sphericity to check this assumption. We supply a graph of the exact percentage points for the distribution of the test statistic. The test statistic can be computed with standard statistical software; together with the graph of the exact percentage points, the test can easily be performed during a phase I study. We illustrate the method with examples.

4.1 Introduction

During phase I control chart studies – the retrospective analysis of (variables) control chart data in preparation for the prospective use of the control chart – a number of samples from the process under study are collected. Typically, the data used in phase I for x̄−R charts consist of m ≥ 25 samples collected over a representative period of time, and each sub-sample usually consists of around 5 individual observations. Based on these m sub-samples, methods have been developed for setting up x̄−R charts. These methods consist mainly of computing the control limits; see e.g. Montgomery (2005). Because these methods were developed long ago, before the proliferation of modern computers and statistical software, an overriding concern was to keep them simple. However, in today's computing and software environment, what is considered simple has fundamentally changed. With current statistical software widely available in industry, it is possible to introduce more advanced methods during phase I. Such methods can help quality engineers gain information about the processes and allow for much more thorough investigations of the necessary assumptions that are made when using control charts.

Over the past few decades, the assumption of temporal independence in control chart data has received a fair amount of attention; see e.g. Montgomery (2005) for an overview. Simple tools such as the autocorrelation function and other time series methods are readily available and frequently used for checking the assumption of temporal independence. However, the assumption of independence within sub-samples has received much less attention. A notable exception is Roes and Does (1995); see also the discussion in Sullivan et al. (1995). Jensen et al. (2006) provide a thorough review of the literature on the estimation of the parameters for control charts. It discusses the effect of autocorrelation but does not provide information on the consequences of correlation within sub-samples. Bischak and Trietsch (2007) also consider the estimation of parameters for control charts, using the false alarm rate. This measure is used to illustrate the danger of using too few subgroups for estimating the control limits. Neither autocorrelation nor correlation within samples is considered.

In this article we demonstrate a relatively simple tool for testing the important assumptions of independence and common variance within subgroups. Independence and common variance are important to assure that the control charts have appropriate valid alarm and false alarm rates, and hence the required average run length properties. With the increasing use of automatic sensor technology in industry, which allows measuring quality characteristics closer together in time and space, it is more important than ever to assure that these assumptions are approximately valid.

This paper is organized as follows: We begin by providing the necessary background and defining the notation. Next we describe the test for sphericity and present a graph of the exact percentage points for the distribution of the test statistic. We then illustrate the method with two examples. We finish with some concluding remarks.

    4.2 Background, Definitions and Notation

The standard procedure for setting up and using x̄−R charts is described, for example, in Chapter 5 of Montgomery (2005). We adopt the notation and refer to his book for details of this methodology. We will briefly describe the necessary assumptions when using x̄−R charts. The x̄−R charts are based on the assumption that the quality characteristic one wishes to monitor is normally distributed with mean µ and variance σ². It is assumed that the observations within and between subgroups are independent. A subgroup will typically consist of around n = 5 observations. The x̄−R charts are based on the assumption that this subgroup is a rational subgroup, meaning that within this subgroup of observations the only cause of variation is chance. In practice, µ and σ² are unknown and have to be estimated based on the samples available in phase I. Typically there are m ≥ 25 subgroups available. These samples are tentatively assumed to be in statistical control, meaning that we proceed entertaining the tentative assumption that the data are independent and identically normally distributed. The usual way of checking this assumption is to plot the values of x̄ and R for each subgroup on a chart with the initial trial control limits. If all points lie within the trial limits and there seems to be no systematic behavior, then we cautiously proceed, provisionally assuming the samples are in statistical control.
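As a reminder of the computation involved, a minimal sketch of the trial limits for subgroups of size n = 5 follows. The tabulated constants A2, D3, and D4 below are the standard ones for n = 5 (see e.g. Montgomery (2005)); the function name is illustrative.

```python
import numpy as np

A2, D3, D4 = 0.577, 0.0, 2.114   # control chart constants for n = 5

def trial_limits(X):
    """Phase I trial control limits for x-bar and R charts.
    X: m-by-n data matrix with one subgroup per row."""
    xbar = X.mean(axis=1)                  # subgroup means
    R = X.max(axis=1) - X.min(axis=1)      # subgroup ranges
    xbarbar, Rbar = xbar.mean(), R.mean()
    return {"xbar": (xbarbar - A2 * Rbar, xbarbar + A2 * Rbar),
            "R": (D3 * Rbar, D4 * Rbar)}
```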

It is the purpose of the x̄ chart to catch unusual variability between the subgroups, and the purpose of the R chart to catch excess variability within the subgroups. However, neither of these control charts is designed to discover correlation or variance inhomogeneity among the observations within subgroups. It is this kind of violation of the assumptions we are concerned with in this article.

Of course, as Box famously stated in Box (1979), "All models are wrong, but some are useful". Furthermore, not all assumptions are equally important. Indeed, x̄−R charts are relatively robust to the normality assumption. However, the assumptions of independence and equal variance are typically more critical; see Appendix 3A, pp. 117–119 in Box et al. (2005). Thus, a major task during phase I is to check that the important assumptions are not seriously violated. Otherwise the control chart, when applied in the subsequent phase II, may perform poorly, either causing excessive false alarms, not sounding valid alarms, or reacting slowly to out-of-control situations. In other words, the assumed properties of the control chart will be misleading, possibly seriously so. Indeed, the operating characteristic (OC), or power, and the average run length (ARL) are usually derived from the independence, equal variance and normal distribution assumptions.

Mathematically, we denote the observations in phase I as xij, where i = 1, . . . , m and j = 1, . . . , n. The index i counts the rational subgroups, while the index j counts the observations within subgroups. The entire data set available in phase I is denoted X = [xij], where i = 1, . . . , m and j = 1, . . . , n, which is an m × n matrix. Table 4.1 exemplifies such a data matrix.

Subgroup  S1   S2   S3   S4   S5      Subgroup  S1   S2   S3   S4   S5
   1     140  143  137  134  135        15     144  142  143  135  144
   2     138  143  143  145  146        16     140  132  144  145  141
   3     139  133  147  148  139        17     137  137  142  143  141
   4     143  141  137  138  140        18     137  142  142  145  143
   5     142  142  145  135  136        19     142  142  143  140  135
   6     136  144  143  136  137        20     136  142  140  139  137
   7     142  147  137  142  138        21     142  144  140  138  143
   8     143  137  145  137  138        22     139  146  143  140  139
   9     141  142  147  140  140        23     140  145  142  139  137
  10     142  137  145  140  132        24     134  147  143  141  142
  11     137  147  142  137  135        25     138  145  141  137  141
  12     137  146  142  142  140        26     140  145  143  144  138
  13     142  142  139  141  142        27     145  145  137  138  140
  14     137  145  144  137  140

Table 4.1: Data from p. 14 of Grant and Leavenworth (1989). There are 27 subgroups and 5 observations in each subgroup, giving us a 27 × 5 data matrix.

We will further denote the ith row of the matrix X as a row vector xᵢᵀ. When thinking of xᵢᵀ as a column vector, we write xᵢ, which is the transpose of xᵢᵀ. With these notational conventions, the n observations in the ith subgroup, a vector xᵢ, are assumed to have an n-dimensional normal (Gaussian) distribution. That is, xᵢ ∼ Nₙ(µ, Σ), where the mean µ is a vector of elements that are all equal, i.e. µ = (µ, . . . , µ), and the covariance Σ is a diagonal matrix with all diagonal elements equal, i.e. Σ = σ²I. It is this latter assumption that is the focus of the remainder of this article. When t
