Week 5: Assuring Quality in Operations

Summaries

  • Week 5: Assuring Quality in Operations > 5.0 Introduction > Recap
  • Week 5: Assuring Quality in Operations > 5.1 Six Sigma Quality in Organizations > The Need for Six Sigma Quality
  • Week 5: Assuring Quality in Operations > 5.1 Six Sigma Quality in Organizations > Measurement Metrics for Six Sigma Quality
  • Week 5: Assuring Quality in Operations > 5.1 Six Sigma Quality in Organizations > Six Sigma Methodology
  • Week 5: Assuring Quality in Operations > 5.2 Total Quality Management: Philosophy, Tools & Techniques > Tools for Quality Planning & Design (Continued)
  • Week 5: Assuring Quality in Operations > 5.3 Statistical Process Control > Statistical Process Control Fundamentals
  • Week 5: Assuring Quality in Operations > 5.3 Statistical Process Control > Setting up a Control Chart
  • Week 5: Assuring Quality in Operations > 5.3 Statistical Process Control > Types of Control Charts
  • Week 5: Assuring Quality in Operations > 5.3 Statistical Process Control > Using Control Charts for Operational Decisions
  • Week 5: Assuring Quality in Operations > 5.3 Statistical Process Control > Establishing Quality Standards in Supply Chain Partner Processes
  • Week 5: Assuring Quality in Operations > 5.4 Establishing Quality in Service Organizations > Establishing Quality in Service Settings
  • Week 5: Assuring Quality in Operations > 5.5 Week 5 Wrap-up > Summary

Week 5: Assuring Quality in Operations > 5.0 Introduction > Recap

  • The last 30 years have seen something of a quality revolution.
  • Consider the great rise of Japanese corporations; their business success is, to a large extent, on account of what they have done in the name of quality management.
  • To a large extent, quality is still an order winner, and there are some situations in which quality is an order qualifier.
  • We will try to understand what quality is all about, what some of the governing principles behind quality are, and how some great minds shaped our thinking into what we now call modern quality management practices.

Week 5: Assuring Quality in Operations > 5.1 Six Sigma Quality in Organizations > The Need for Six Sigma Quality

  • The moment we talk about quality, the word Six Sigma comes to our mind.
  • A number of progressive companies are working hard to build Six Sigma quality levels.
  • The Dabbawalas of Mumbai have baffled the business world with their Six Sigma quality standard in their operations involving delivering 200,000 tiffin boxes from home to work and again from workplace back to home, every day.
  • Generally, Six Sigma quality points to quality levels so high that defects become more or less a rarity in operations.
  • It also points to a disciplined way of handling issues in operations, a structured way of addressing quality issues, and a trajectory to an unambiguous destination in the quality management journey that every organization takes.
  • On the right side, what you see is what it means to actually have a 99% quality.
  • A 99% quality means every three days one guest checks out without paying the bill.
  • Talking about laundry, a 99% quality means every day 20 guests do not get back their right laundry.
  • The error rate in the business customer segment was 0.5%. In other words, we are talking about ninety-nine and a half percent quality.
  • The error rate in retail customers turned out to be 1.1%, which is nearly 99% quality.
  • Look at this example: we are talking about quality levels between 99% and ninety-nine and a half percent (a quick sketch of this arithmetic appears after this list).
  • These two examples indeed show us that there is a strong case for quality levels which are much, much higher than the 99% and ninety-nine and a half percent that we are talking about.
  • Let’s look at another argument in favor of why we should go for very high levels of quality.
  • Suppose you have a better quality management system; what it really means is that the organization will have superior quality control.
  • Superior quality control itself will lead to fewer disruptions and the smoother output that we are looking for here.
  • Now what does superior quality control mean? Essentially it means three things: you will have less rework, higher-quality finished goods, and, of course, lower indirect costs, because you are not wasting time on inspecting, correcting, sorting things out, and so on.
  • Eventually all of this leads to what organizations are really asking for: lower inventories, less use of labor, falling indirect costs, and rising quality.
  • Yesterday’s thinking was that it is often uneconomical to make quality improvements, since doing so brings down productivity and increases cost and investment.
  • Today, the dominant thinking is that productivity goes up and cost comes down as quality levels go up.
  • We need to look for new metrics for quality management.
  • Let’s identify some better metrics for quality management and that should drive our journey in quality.
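
A quick sketch of the arithmetic behind the "99% quality" illustrations above; the daily transaction volumes are hypothetical assumptions, chosen only to be consistent with the figures quoted in the examples.

```python
# Arithmetic behind the "99% quality" illustrations.
# The daily volumes below are hypothetical assumptions chosen to match the quoted figures.

def errors_per_day(daily_volume: float, quality_level: float) -> float:
    """Expected number of erroneous transactions per day at a given quality level."""
    return daily_volume * (1.0 - quality_level)

# Hotel checkouts: ~33 per day at 99% quality -> roughly one unpaid bill every 3 days.
print(errors_per_day(33, 0.99))     # ~0.33 errors/day, i.e., about 1 every 3 days

# Laundry: ~2,000 items per day at 99% quality -> about 20 mix-ups per day.
print(errors_per_day(2000, 0.99))   # ~20 errors/day
```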

Week 5: Assuring Quality in Operations > 5.1 Six Sigma Quality in Organizations > Measurement Metrics for Six Sigma Quality

  • If we take the example of manufacturing, one possible measure is what is called the Parts Per Million (PPM) defect rate.
  • If you take services, the measure is called Defects Per Million Opportunities (DPMO).
  • Essentially, you count the number of defects, and when they are expressed as so many per million parts, that is the PPM rate. That’s easily understandable.
  • Let’s now look at the measure defects per million opportunities.
  • Suppose that in a process the number of opportunities for making a defect per unit of execution of that process is “k,” the number of units of the process that we observe is “n,” and the number of defects that occurred during the observation is “d.” In other words, we observe the process “n” times and actually find “d” defects.
  • Putting these three together gives the defects per million opportunities: the actual defects observed, d, divided by the potential defects that could have occurred, n times k, and multiplied by a million, i.e., DPMO = d / (n × k) × 1,000,000.
  • In a certain season, let’s say, 1,250 guests are handled through this check-in process, and 357 defects were observed during that period.
  • The total number of opportunities, n times k, forms the denominator; multiplying by a million gives the defects per million opportunities, which turns out to be 25,963 (a short sketch of this calculation appears after this list).
  • Today’s quality management approach must be based on a few basic premises in order to achieve such parts per million defect levels.
  • The second premise, the system of quality that we need to put in place must be one of prevention and elimination and not detection and correction.
  • There is no way we are going to get to parts per million defect rates by continuously detecting errors and correcting errors.
  • For practical purposes we can only get to near-zero defects, but the performance standard is going to be zero defects.
  • Fourth premise: the responsibility for quality must lie primarily with those who produce and deliver products and services, not with a quality control department that can merely measure and find problems.
  • These four premises are very important for us to move on to parts per million defect levels.
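
A minimal sketch of the DPMO calculation for the check-in example above. The value of k = 11 defect opportunities per check-in is an assumption back-calculated from the quoted result of about 25,963; the summary does not state k explicitly.

```python
def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Defects Per Million Opportunities: d / (n * k) * 1,000,000."""
    return defects / (units * opportunities_per_unit) * 1_000_000

# Check-in example: n = 1,250 guests, d = 357 defects.
# k = 11 opportunities per check-in is an assumed value, chosen so the result
# matches the ~25,963 DPMO quoted above.
print(dpmo(defects=357, units=1_250, opportunities_per_unit=11))  # ~25963.6
```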

Week 5: Assuring Quality in Operations > 5.1 Six Sigma Quality in Organizations > Six Sigma Methodology

  • What are those? One, a structured program for quality management and improvement.
  • Two, some facilitating mechanisms for the operations personnel to own, to solve, and to obliterate the quality problems.
  • Thirdly, there is a need for a good organization structure and mandate for quality improvement issues on a continuous basis.
  • These three are very important, and a Six Sigma program provides precisely these enabling mechanisms for any organization.
  • Out of the analysis comes the improvement that has to happen so that the quality levels improve.
  • Now that we know the goals, the scope, and so on, we need to identify the specific variables to be measured, the type of measurement, the kind of data collection we need, and the kind of insights we want to synthesize.
  • That leads us to the next stage, the improvement stage, where we generate and validate improvement alternatives and create new process maps, because now we need to put these things in place.
  • We need to develop a control plan, establish revised standard measures and operating procedures to maintain performance, and, if required, develop relevant training plans for the people who need to get accustomed to a certain way of doing things so that the standards are maintained.
  • In order that the organization sustainably improves the quality to near-zero defect levels, what is also important is a good organization structure, mandate to make changes, ownership of the processes and results, and continuous and closer review of all these activities which are happening under the banner of quality improvement.
  • Second, you need team leaders and members; because the program is employee driven, the team or project leader and the members will comprise employees from the chosen work area.
  • This may include, for example, statistical tools, process design and analysis (expertise the operating personnel may or may not have), change management, small-group improvement, and the use of quality control tools for improvement.
  • Finally, you need a sponsor who is a member of the senior management and who oversees the overall progress and implementation.
  • In other words, the sponsor creates the mandate and the organization-wide support required for this team to engage fruitfully in the Six Sigma improvement opportunity.
  • With this organizational structure, the DMAIC methodology we talked about, and the mandate and oversight in place, it’s not surprising that organizations make steady progress in their quality journey.
  • We shall first start with the basic philosophy behind quality management that we practice today and then take up other issues in the future videos.

Week 5: Assuring Quality in Operations > 5.2 Total Quality Management: Philosophy, Tools & Techniques > Tools for Quality Planning & Design (Continued)

  • In the first stage, the exercise is done to link customer needs to the design attributes, and once the design attributes are fixed, then a similar type of a house is constructed to link the design attributes to specific actions that firms can take.
  • What the customer wants is finally taken to the process plan through a four-stage process.
  • Let’s say we’ll take the first element and see how this house looks like.
  • It starts with customer requirements because, you know, quality begins with what the customer wants.
  • In our productivity module, we said if we don’t deliver what customer wants, it will be a productivity paradox.
  • How important is each requirement to the customer? Some rating schemes are used to capture this.
  • Then there are certain product characteristics which should be linked to the customer requirements.
  • Then what is the relationship between customer requirements and the product characteristics that is captured through a relationship matrix? That is the fourth element.
  • Now there are a few requirements of the customers: steaming hot food, for example; easy to carry home for lunch or breakfast; quick order processing; and so on.
  • There are importance ratings: steaming hot food, for instance, is considered to be very, very important, and so on.
  • How much time does it take to cook the food? That is another.
  • Let us take another example, quick order processing, and what you see is that there is a minus here instead of a double plus.
  • This means this particular attribute, time taken to cook the food, works against the requirement of quick order processing.
  • If time taken to cook the food is much higher, quick order processing is not possible.
  • The order processing time is negatively correlated to the number of service counters during peak time.
  • If the number is less, order processing time will be more and vice versa.
  • Against each of these customer attributes, you plot where your own company stands, where competitor A stands, and where competitor B stands.
  • One importance rating applies to each customer requirement in the rows, and from the relationship matrix an importance weighting is computed for each design attribute in the columns (see the sketch after this list).
  • So far we have seen different types of tools used for exploring quality-related issues in an organization and making steady improvements.
  • We saw some tools for quality at operations, and we also saw some tools for strategic planning, building it into design of products and services and so on.
  • Modern quality management centrally rests on principles of quality, and knowledge of statistical tools is inevitable in creating high levels of quality.
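
As a rough sketch of how the importance weighting for each design attribute can be derived from the customer importance ratings and the relationship matrix in a House of Quality: the requirement names, design attributes, ratings, and relationship scores below are hypothetical illustrations (the 9/3/1 scoring convention is common QFD practice, not necessarily the scheme used in the lecture).

```python
# Hedged House of Quality sketch: deriving design-attribute importance weights.
# All names and numbers are illustrative, not lecture data.

customer_importance = {
    "steaming hot food": 5,
    "quick order processing": 4,
    "easy to carry home": 3,
}

design_attributes = ["time taken to cook", "number of service counters", "packaging design"]

# relationship_matrix[requirement] lists the strength of the link to each design
# attribute: 9 = strong, 3 = medium, 1 = weak, 0 = none.
relationship_matrix = {
    "steaming hot food":      [9, 0, 3],
    "quick order processing": [3, 9, 0],
    "easy to carry home":     [0, 0, 9],
}

# Importance weight of a design attribute = sum over requirements of
# (customer importance rating) x (relationship strength).
weights = [
    sum(customer_importance[req] * row[j] for req, row in relationship_matrix.items())
    for j in range(len(design_attributes))
]

for attr, weight in zip(design_attributes, weights):
    print(f"{attr}: {weight}")
# time taken to cook: 5*9 + 4*3 + 3*0 = 57, and so on.
```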

Week 5: Assuring Quality in Operations > 5.3 Statistical Process Control > Statistical Process Control Fundamentals

  • Statistical process control helps operationalize some decisions and keep performance and outcomes within limits.
  • Statistical process control is a collective set of tools and techniques used to develop a quality assurance system that enables one to make meaningful sense of these variations.
  • Variation in a process arises from two types of causes: common causes and assignable causes.
  • Changes in the operating condition of a piece of equipment, or changes introduced in the standard operating procedure, could cause a quality problem which can be traced, investigated, and perhaps also corrected.
  • Let us look at some terminologies related to statistical process control.
  • The customer’s requirement is expressed as an upper specification limit, which we call the USL, and a lower specification limit, which we call the LSL. The difference between the USL and the LSL is the desired tolerance.
  • Here is an example of customer checkout time in a five-star hotel, and it appears that it should be in the range of 90 plus or minus 20 seconds.
  • Since there is a plus or minus 20 involved, the upper specification limit is going to be 110 seconds, which is 90 plus 20 and the lower specification limit is 70 seconds, which happens to be 90 minus 20.
  • Going by the same logic, the target for the pen diameter is 8 millimeters; the upper specification limit is 8.5 because a tolerance of plus 0.5 is allowed, the lower specification limit is 7.5 (8 minus 0.5), and the desired tolerance is 1 millimeter, which is 8.5 minus 7.5.
  • The second set of terminology belongs to the process.
  • While so far what you have seen is what the customer wants, we need to know what the process is delivering.
  • If you take all those and measure it, there will be a process average which is the center of the process.
  • Then there is an upper control limit and a lower control limit, and the difference between the upper control limit and lower control limit is called the spread of the process.
  • At the outset the questions that we need to address are what is the characteristic in a process that needs to be measured for the purpose of quality control and how should we measure for the purpose of analysis.
  • Once we know these two, then we can look at the process and find out how to control the process.
  • Errors in processed documents could be another characteristic, as could conformance to a certain waiting time or lead time.
  • There could be several such examples, and we need to exercise a certain sense of judgement and discretion to identify those characteristics which are important, and therefore we need to closely monitor it and make sure things work.
  • The time taken to complete certain parts of the business process could be an important characteristic.
  • The settlement of claims in an insurance firm could be a measure or the loan approval in a financial institution or a patient admission process in a hospital.
  • Since the patient is alright with 20 plus 6 minutes, that is, up to 26 minutes, one possibility is to count the number of occasions on which patients are admitted after 26 minutes, because each such occasion becomes a defect in the process.
  • If, say, 7 out of 100 patients are admitted beyond 26 minutes, this patient admission process delivers a proportion of defects of 7%. So, that is one way to measure.
  • Another way to measure could be to actually make a detailed measurement of the admission times in all these hundred cases, which means you have a stopwatch and find out when each patient was admitted: perhaps the first patient in 24.95 minutes, the second in 21.87 minutes, the third in about 25.45 minutes, the fourth in about 19.75 minutes, and so on.
  • In the earlier case, what we have is a number called 7% defect, and in this case, we actually have a set of numbers.
  • Patient admission time has become a variable, and we have got several numbers for that variable.
  • Remember, the attribute-based number reveals very little information about the process, because all we knew was that seven patients were admitted beyond 26 minutes.
  • The first method is called attribute based, because we are using the attributes good and bad to assess quality.
  • In the second method, it’s called variable based because we identified patient admission time as a variable and made a detailed observation of the characteristic.
  • Variable-based measurement will be more expensive because it is more time-consuming and more detailed, but it will provide us a wealth of information about the process (see the sketch after this list).
  • Let’s now proceed to find out how to use statistics to setup the process controls.
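
A small sketch contrasting the two measurement approaches for the patient admission example above. The list of admission times is illustrative: the first four values are those quoted in the lecture, the rest (and the full set of 100 observations) are hypothetical.

```python
import statistics

# Illustrative admission times in minutes; in practice there would be 100 observations.
admission_times = [24.95, 21.87, 25.45, 19.75, 26.80, 23.10, 27.25, 22.60]

threshold = 26.0  # 20 + 6 minutes: anything longer counts as a defect

# Attribute-based measure: only the proportion of admissions exceeding the threshold.
defective = sum(1 for t in admission_times if t > threshold)
print(f"proportion defective: {defective / len(admission_times):.1%}")

# Variable-based measure: keep the actual times, which support much richer statistics.
print(f"mean admission time: {statistics.mean(admission_times):.2f} min")
print(f"standard deviation:  {statistics.stdev(admission_times):.2f} min")
```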

Week 5: Assuring Quality in Operations > 5.3 Statistical Process Control > Setting up a Control Chart

  • Statistical process control, SPC, begins with identifying what attribute or characteristic needs to be closely monitored for quality.
  • Once we are clear about the way we want to measure, then we can choose an appropriate procedure for setting up the process controls.
  • Before we see the specifics of each of these, let’s get to know what exactly is the process of setting up a control chart.
  • A point falling outside these limits could be an indication that the process may be out of control.
  • In other words, what are we doing? We find the center of the process, identify the standard deviation, then identify the plus or minus three sigma limits of the process and call them the control limits.
  • The plus three sigma limit is called the upper control limit and the minus three sigma limit is called the lower control limit; you plot the data and see if the process is in control.
  • What we mean is a state of statistical control because we are using statistics to make a judgement of control.
  • Let’s look at a generalized representation of a control chart to understand this.
  • This is the upper control limit, so called because it is at plus three sigma, and this is the lower control limit because it is at minus three sigma.
  • We are saying the process is in a state of statistical control.
  • On the other hand, if you look at this chart, what we notice is there is one point which is outside of the upper control limit, and this makes us suspect that the changes are not due to random events.
  • We come to the conclusion that the process is perhaps not in a state of statistical control.
  • That is what a statistical process control mechanism tries to help us with.
  • We shall see some of the steps in the statistical process control, setting up a control chart.
  • The first step is to choose the characteristic that you want to measure for process control.
  • The second step is to decide how the characteristic will be measured, and the third step is to choose an appropriate control chart.
  • The next step is to collect data and establish the control limits using statistics, as illustrated in the sketch after this list.
  • The last step is to analyze the plot: is the system in control, is there a problem, and so on.
  • In the next video, we shall take two examples to demonstrate these steps for attribute-type and variable-type control charts, respectively.
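
A minimal sketch of the generic procedure just described: estimate the process center and standard deviation from observed data, set the control limits at plus and minus three sigma, and flag any points outside them. The observations below are illustrative, not lecture data.

```python
import statistics

# Illustrative observations of the monitored characteristic.
observations = [12.41, 12.39, 12.44, 12.36, 12.42, 12.40, 12.38, 12.45, 12.37, 12.43]

center = statistics.mean(observations)   # process center
sigma = statistics.stdev(observations)   # estimate of the process standard deviation

ucl = center + 3 * sigma  # upper control limit
lcl = center - 3 * sigma  # lower control limit

print(f"center = {center:.3f}, UCL = {ucl:.3f}, LCL = {lcl:.3f}")

# Any point outside [LCL, UCL] suggests the process may not be in statistical control.
out_of_control = [x for x in observations if x > ucl or x < lcl]
print("out-of-control points:", out_of_control)
```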

Week 5: Assuring Quality in Operations > 5.3 Statistical Process Control > Types of Control Charts

  • We shall now see how exactly to set up a control chart.
  • Control charts, as we have been discussing, are used for attribute-based measures and variable-based measures.
  • The basic difference is not in the concept of how to set up a control chart but in the way we extract the parameters required for setting up the control chart.
  • For a variable-based control chart, we use what are called x bar and r charts.
  • For attribute, we either use a p chart or a c chart.
  • Let’s now look at x bar and r chart and see how to set up control charts for that.
  • Step 1 is to choose the measurement characteristic, and in this example we are taking the diameter of a certain cylindrical component manufactured by an auto component manufacturer, let us say.
  • We measure in terms of centimeters; that is the measurement characteristic for quality control.
  • Step 2 is the measurement method, and we would like to do the actual measurement, which makes it variable based: you take the exact diameter and record that number.
  • Because it is variable based, the control charts have to be the x bar and r charts.
  • Step 4 is to decide on a sampling plan and then collect data and establish the control limits.
  • Step 6 in the process is to plot the data and analyze it in terms of whether the process is in control or not.
  • We sample every 20 minutes from the process, which is manufacturing these cylinders.
  • Therefore the sample size is five, and the number of samples we take is 15.
  • Each time, five measurements are collected.
  • For example, the sixth sample set is 12.45, 12.37, 12.44, 12.38, and 12.41: the actual measurements of five consecutive units taken at that instant.
  • What are the things that we are doing? We are taking the average of all these; we call it as x bar.
  • The range is the difference between the highest value and the smallest value.
  • The average is, of course, the average of the five values; that is how you get x bar.
  • R bar is the average of all ranges and x double bar is the average of all x bars.
  • Now we have extracted two important parameters from here, which is x double bar and r bar, which is required for setting up the control chart.
  • Now, how do we set the control chart? There are standard tables available from which you can directly read numbers to find your upper control limit and lower control limit.
  • That means, in order to set up the control charts, we have to read the A2 value, which is 0.577, the D3 value, which happens to be 0, and the D4 value, which happens to be 2.114.
  • For our sample size, we get the A2, D3, and D4 values.
  • Once we have these three values, setting up the control chart is a fairly straightforward procedure.
  • The upper and lower control limits for the x bar chart are computed as x double bar plus A2 times r bar and x double bar minus A2 times r bar, respectively.
  • For the r chart, the upper control limit is given by D4 times r bar and the lower control limit by D3 times r bar. Since we know A2, D4, and D3, we are all set to set up the control charts.
  • Let’s talk about the x bar chart first, and we want to know the UCL.
  • Remember, x double bar is 12.417, which we saw in the previous one; r bar is 0.119.
  • If you substitute all of them here, the upper control limit turns out to be 12.417 plus 0.577 (the value of A2) times 0.119, which works out to about 12.486.
  • The lower control limit is x double bar minus A2 times r bar, which means 12.417 minus 0.577 times 0.119, and the math gives 12.348.
  • The upper control limit for the r chart is D4 times r bar, which is 2.114 times 0.119, giving a value of about 0.252.
  • The lower control limit is given by D3 times r bar and D3 happens to be 0.
  • The lower control limit is going to be zero.
  • This is how we go about setting up the control chart limits for a variable-based control chart, namely the x bar chart and the r chart (a short sketch of this calculation appears at the end of this list).
  • We have the numbers calculated for the upper and lower control limit.
  • What we notice is that, since all the data points are within the two control limits, the x bar chart shows the process is under control from an x bar perspective; there is a similar plot for the r chart as well.
  • The lower limit was zero, if you recall, and the upper limit is 0.252, and all the values lie between the lower and the upper limits.
  • From this perspective too, the process is under control.
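
A short sketch of the factor-based calculation above, using the A2, D3, and D4 values for a sample size of five and the x double bar and r bar figures quoted in the lecture.

```python
# Control limits for the x bar and r charts using the standard factors for n = 5.
A2, D3, D4 = 0.577, 0.0, 2.114   # read from standard control chart factor tables

x_double_bar = 12.417   # average of the 15 sample means (from the lecture)
r_bar = 0.119           # average of the 15 sample ranges (from the lecture)

ucl_x = x_double_bar + A2 * r_bar   # ~12.486
lcl_x = x_double_bar - A2 * r_bar   # ~12.348

ucl_r = D4 * r_bar                  # ~0.252
lcl_r = D3 * r_bar                  # 0

print(f"x bar chart: UCL = {ucl_x:.3f}, LCL = {lcl_x:.3f}")
print(f"r chart:     UCL = {ucl_r:.3f}, LCL = {lcl_r:.3f}")
```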

Week 5: Assuring Quality in Operations > 5.3 Statistical Process Control > Using Control Charts for Operational Decisions

  • First, what if the process is out of control? What are we supposed to do in that case? The second question is: is there a way we can detect an impending out-of-control situation much earlier? Let’s analyze these two questions one by one.
  • First, we shall take the case of the process going out of control.
  • Detecting whether a process is out of control is fairly straightforward.
  • If we notice any outliers in the plot, the process is not in a state of statistical control.
  • What you see here is a sample c chart, and once the lower control limit and the upper control limits have been established, the process center is also established.
  • Step 3: if there are no assignable causes, resume the process with the revised control parameters; if there are assignable causes, implement countermeasures, change the process in whatever way is required, and resume the process.
  • Step 5 is to stabilize the process, re-establish the control limits, and monitor the process and make sure that the process is in control.
  • These are the things that we need to do when we notice that the process could be potentially out of control because there is an outlier.
  • Over several years, some useful rules have been created that help the operating personnel detect a possible drift in the process much earlier.
  • Here is a p chart, and this process is in control.
  • As of now, the process is in a state of statistical control.
  • It may be quite sensible to stop the process and conduct some investigations so that if some assignable causes are developing, we can catch them early.
  • One can use many simple rules to detect such trends which may point to some kind of non-randomness in the process.
  • This region is called Zone B, and the last one, the blue zone, I call Zone A. With this understanding of Zone A, Zone B, and Zone C, a number of rules have been developed which suggest when it may be time to stop the process: not to change the parameters, but to stop the process.
  • If there are problems, one can change the process parameters or resume the process.
  • Or two out of three points in a row in Zone A or beyond, or four out of five points in a row in Zone B or beyond, or fifteen points in a row in Zone C. As you might appreciate, what is common to all these apparently simple rules is that the behavior of the process no longer appears random (see the sketch after this list).
  • So by stopping the process and performing some investigation, we could have averted a quality problem waiting to happen a little later.
  • Once we have a process in a state of control, we can use it even to predict what would be the number of defects this process is likely to generate.
  • That’s a very interesting, useful insight that we can get out of a process which is in a state of control.
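
A rough sketch of checking the run rules mentioned above. The zone boundaries are taken as the usual one-, two-, and three-sigma bands around the process center (a common convention; the lecture’s exact zone definitions may differ), and the data points are illustrative.

```python
# Hedged sketch of simple run rules for detecting non-random behavior.
# Assumed zones: C within 1 sigma of center, B between 1 and 2 sigma, A beyond 2 sigma.

def zone(x, center, sigma):
    d = abs(x - center)
    if d <= sigma:
        return "C"
    if d <= 2 * sigma:
        return "B"
    return "A"  # beyond 2 sigma, including points outside the control limits

def suspicious(points, center, sigma):
    """Flag patterns that suggest stopping the process to investigate."""
    zones = [zone(x, center, sigma) for x in points]
    # Two out of three points in a row in Zone A or beyond.
    rule_a = any(sum(z == "A" for z in zones[i:i + 3]) >= 2 for i in range(len(zones) - 2))
    # Four out of five points in a row in Zone B or beyond.
    rule_b = any(sum(z in ("A", "B") for z in zones[i:i + 5]) >= 4 for i in range(len(zones) - 4))
    # Fifteen points in a row in Zone C.
    rule_c = any(all(z == "C" for z in zones[i:i + 15]) for i in range(len(zones) - 14))
    return rule_a or rule_b or rule_c

# Illustrative data: a drift towards the upper limit trips the four-out-of-five rule.
print(suspicious([0.02, 0.05, 0.11, 0.12, 0.13, 0.14], center=0.05, sigma=0.04))  # True
```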

Week 5: Assuring Quality in Operations > 5.3 Statistical Process Control > Establishing Quality Standards in Supply Chain Partner Processes

  • We have been concentrating only on the issue of making a process statistically stable and controlled, which is very important.
  • If you may recall, the voice of the customer is expressed through the target value, the USL, and the LSL. So, what we need to do is to superimpose these three values on the process which has its own LCL and UCL and the center and see the implications of that.
  • The only difference though is Process A, which is the blue line, has a very wide spread indicated by a very long gap between upper control limit and lower control limit.
  • Process B has a lower spread, although the two processes are centered identically.
  • Now, before getting into the customer parameters, the question is which process is better: Process A or Process B? The shaded area represents defects because it lies beyond the specification limits.
  • Process A has the potential to generate defects because its spread extends beyond the USL and the LSL, whereas Process B’s does not.
  • In general, what we can say is Process B is better than A, and to generalize this argument, all other things being equal, spread of a process is indicative of its capability.
  • The center of Process A is the same as the target, whereas in the case of Process B, there is an offset indicated here.
  • This is the center of Process B, and this is the customer’s requirement in terms of target.
  • Clearly, Process A is better than Process B. In other words, if you have to choose between two processes which are identical in terms of their spread, the process whose center is aligned closer to the desired target is likely to be more capable, simply because the offset region represents defects.
  • Process B has a greater propensity to generate defects than Process A. To summarize the learnings from these two graphs: first of all, process capability is defined by the spread of the process.
  • What is the potential of a process? This measure is called potential capability, Cp. Cp is the ratio of the specification range to the process spread, that is, (USL minus LSL) over six sigma.
  • The second measure is Cpk, which takes into consideration the extent to which the process center has deviated, or is offset, from the target.
  • Now why do we want to calculate Cpk? The calculation looks very simple, but the question is why we want it. A table relates the Cpk of a process to how many defects the process will produce, which is what makes knowing Cpk valuable (a short sketch of the Cp and Cpk calculations appears after this list).
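
A minimal sketch of the Cp and Cpk calculations as defined above. The specification limits reuse the pen diameter example (8 plus or minus 0.5 mm); the process mean and standard deviation are hypothetical values chosen to show the effect of an off-center process.

```python
def cp(usl, lsl, sigma):
    """Potential capability: specification range over the 6-sigma process spread."""
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mean, sigma):
    """Capability allowing for an off-center process: worst-side margin over 3 sigma."""
    return min(usl - mean, mean - lsl) / (3 * sigma)

# Spec from the pen diameter example: 8 +/- 0.5 mm.
usl, lsl = 8.5, 7.5
# Hypothetical process parameters: mean slightly offset from the 8.0 mm target.
mean, sigma = 8.1, 0.12

print(f"Cp  = {cp(usl, lsl, sigma):.2f}")          # ~1.39
print(f"Cpk = {cpk(usl, lsl, mean, sigma):.2f}")   # ~1.11: the offset reduces capability
```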

Week 5: Assuring Quality in Operations > 5.4 Establishing Quality in Service Organizations > Establishing Quality in Service Settings

  • Let’s first start with some familiar service quality issues.
  • These three examples of poor service quality point to certain unique features of service quality.
  • Because of this reason, services cannot be counted, measured, inventoried, tested, and verified in advance to assure quality.
  • It’s also very difficult to understand how consumers perceive and evaluate services.
  • Consistency of behavior from the service personnel, and at times even from the customer who is receiving the service, is very difficult to assure.
  • So quality occurs during service delivery while the consumer interacts with the service personnel.
  • These three unique features of a service set some broad principles that drive the service quality.
  • First of all, service quality is a measure of how well the service delivered matches the customer’s expectation.
  • The second aspect, which is very important in service quality, is quality evaluations in services are not made solely on the outcome of the service.
  • How was the whole service handled? Think of airline delays and the way they were handled.
  • If a service fails, recovering a service is very difficult.
  • A failed service hurts a customer and the customer exits the service system.
  • All these are pointing to the importance of service quality and a way by which we need to really address the issue of service quality.
  • Given these complexities in service quality, the natural question is how we are going to measure it. What you see here are the various aspects of service quality.
  • First of all, let’s be clear, the service quality is the difference between perceived service and the expected service.
  • The second gap arises when, even if the management knows what customers expect, it does not have the wherewithal to translate that into service quality specs.
  • A third gap happens when, even if you translate the expectations into certain quality specs, the service delivery may not live up to them.
  • Finally, there could be a gap between what the service delivery can do and what you tell potential customers you can deliver.
  • Gap 1, the gap between the expectation of the customer and the understanding of the management, happens primarily because service firm executives may not understand what the consumer wants, what features of a service they must provide, or what levels of performance are required.
  • Gap 4 primarily happens because service firms, sometimes in their anxiety to improve sales, may tend to promise more in their communication than what they deliver in reality.
  • Instead of saying in general that it is very difficult to measure service quality, this framework provides a basis for understanding why service quality gaps arise and where we should focus our efforts.
  • An understanding of these issues will definitely help organizations invest on specific aspects in a service delivery system to reduce these gaps.
  • It could be, perhaps, how to fine-tune external communications, the design of brochures and publicity material, and so on. In conclusion, it is important to remember that the use of control charts is very valuable in highlighting problem areas and addressing them at an operational level; all of this applies in service organizations too.
  • This perspective of service quality that we saw will be valuable at the design and planning stages in a service organization.
  • You may want to relate these to your work settings and take steps to narrow the gaps in service quality.

Week 5: Assuring Quality in Operations > 5.5 Week 5 Wrap-up > Summary

  • First, a methodology known as DMAIC to sustain the journey towards near-zero defects in a structured way. Second, an organization structure consisting of coaches, champions, team members, and a sponsor to provide mandate, budget, oversight, and support.
  • Third, use of statistical tools and a host of other tools to make sustained improvements.
  • After an understanding of the Six Sigma methodology, we went further and looked at a number of influential thinkers, their teachings, and how they have shaped our perspectives about quality management.
  • They have also proposed alternative methodologies and tools for quality management.
  • As we saw, modern-day quality management systems make use of many of these ideas, tools, and techniques.
  • The use of statistical principles to continuously monitor the process and to highlight problems is an integral component of a Six Sigma organization.
  • Service quality is more challenging than product quality as it is a function of the perceptions of the customers.
