
QC procedures for variable data



We are seeking some guidance in streamlining and improving our production procedures related to variable data. We have had a few close calls with accurately getting the data represented on the final piece for some of our larger mailers, and the risk of bad data cannot be overstated. We are attempting to create a bullet-proof process, built on random sampling and quality assurance procedures, that gives us and our clients confidence that everything is as expected on the final product.

 

Our desire is not to reinvent the wheel, but rather to benefit from processes that have already been created. If you can provide us with any information to help with this endeavor, it would be greatly appreciated.


Dan,

 

I think the point of my inquiry is rather specific, and I don't mean to sound condescending.

 

I am seeking from the user community some best-practice procedures for handling complex variable compositions. The type of errors encountered has no bearing on my question; I want to know what practices are being employed to spot-check pieces and assure accuracy in the final product.

 

The need to validate the composition process is paramount, and the challenge is magnified as complex rules interact with customer-supplied data.

 

I am interested in the following:

 

1. How do you manage random sampling when dealing with databases that exceed 100,000 pieces? At that scale, the number of pieces that must be reviewed prevents individual inspection of each sampled piece; I cannot examine 3,000 or more records to verify the accuracy of each one. How is this being done in other environments? (A rough sampling sketch follows this list to illustrate the kind of approach we have in mind.)

 

2. When applying complex rules involving calculations or if statements, what is the best way to check a random group of records that covers each potential return value?

 

3. Where can we find recommended procedures that assure we are not generating bad content hidden within our final output?
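
To illustrate the kind of approach we are considering for points 1 and 2, here is a rough sketch (Python, run against the data file outside of any composition tool) of stratified sampling: bucket the records by the outcome the rule should produce, then proof a small sample from each bucket plus a general random sample. The file name, the "balance" column, and the offer_code() rule are made-up placeholders, not our actual job.

```python
# Sketch only: stratified sampling so that a small, reviewable proof set still
# exercises every branch of a composition rule. The file name, the "balance"
# column, and the offer_code() rule are made-up placeholders.
import csv
import random
from collections import defaultdict

def offer_code(record):
    """Hypothetical re-statement of one composition rule's branching logic."""
    balance = float((record.get("balance") or "0").replace("$", "").replace(",", ""))
    if balance >= 10000:
        return "PLATINUM"
    elif balance >= 1000:
        return "GOLD"
    return "STANDARD"

PER_BUCKET = 10      # pieces to proof per rule outcome
GENERAL_SAMPLE = 30  # plus a purely random sample across the whole file

with open("mailer_data.csv", newline="") as f:
    records = list(csv.DictReader(f))

buckets = defaultdict(list)
for rec_num, rec in enumerate(records, start=1):  # 1-based record numbers
    buckets[offer_code(rec)].append(rec_num)

to_review = set()
for outcome, rec_nums in buckets.items():
    to_review.update(random.sample(rec_nums, min(PER_BUCKET, len(rec_nums))))
    print(f"{outcome}: {len(rec_nums)} records in the file")

to_review.update(random.sample(range(1, len(records) + 1),
                               min(GENERAL_SAMPLE, len(records))))
print("Record numbers to proof:", sorted(to_review))
```

The idea is that the number of pieces we physically review stays in the dozens, yet every branch of the rule is represented in what gets proofed.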

 

I hope that this provides a clearer understanding of what it is that I am looking for.

 

Thanks to all who are able to assist us in this matter.


I use the preview pane to test long, short or empty records in my data. This helps spot reflow and copyfit problems.
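
In case it helps, here is a rough sketch of how those long, short, and empty values can be found ahead of time, so you know exactly which record numbers to pull up in the preview pane (the file name is just a placeholder):

```python
# Sketch: scan the data file and report, for each field, which record holds
# the longest value, which holds the shortest non-empty value, and how many
# records are empty, so those records can be previewed first.
import csv

longest, shortest, empties = {}, {}, {}
with open("mailer_data.csv", newline="") as f:
    for rec_num, rec in enumerate(csv.DictReader(f), start=1):
        for field, value in rec.items():
            value = (value or "").strip()
            if not value:
                empties.setdefault(field, []).append(rec_num)
                continue
            if field not in longest or len(value) > len(longest[field][1]):
                longest[field] = (rec_num, value)
            if field not in shortest or len(value) < len(shortest[field][1]):
                shortest[field] = (rec_num, value)

for field in sorted(set(longest) | set(empties)):
    lo, sh = longest.get(field), shortest.get(field)
    print(f"{field}: longest at record {lo[0] if lo else 'n/a'}, "
          f"shortest at record {sh[0] if sh else 'n/a'}, "
          f"empty in {len(empties.get(field, []))} record(s)")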

 

I compose a "random" set of sequential records as a test file (ie. 8800 through 8810). If time permits I do it a couple times for more samples especially surrounding some long or short data found by doing the step above. Actually print these test files too to see if there are any image or font issues.

 

Do everything reasonable to check the data file before bringing it into FusionPro. For example, look for dropped decimal places or dollar signs in number columns.
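
A rough sketch of that kind of pre-flight check on number columns, assuming the currency values are supposed to look like "$1,234.56" (the file name and column names are placeholders):

```python
# Sketch: flag rows where a currency column does not match the expected
# "$1,234.56" format, catching dropped decimal places or dollar signs
# before the file goes into composition.
import csv
import re

CURRENCY_COLUMNS = ["balance", "payment_due"]             # placeholders
CURRENCY_RE = re.compile(r"^\$\d{1,3}(,\d{3})*\.\d{2}$")  # $1,234.56

with open("mailer_data.csv", newline="") as f:
    for rec_num, rec in enumerate(csv.DictReader(f), start=1):
        for col in CURRENCY_COLUMNS:
            value = (rec.get(col) or "").strip()
            if not CURRENCY_RE.match(value):
                print(f"record {rec_num}: suspect {col} value {value!r}")
```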

 

Rules will often give a warning in the composition dialog box when they run into empty fields or errors. Pay attention to this dialog as the file is generated. If it runs into problems here and there, it may not be a big deal on a long print run because of spoilage tolerances. What I look for is when it errors on a consecutive string of records.
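
If you save those composition messages to a text file, a rough sketch like this can flag consecutive runs of problem records. The log file name and the assumption that each message mentions a record number are guesses, so the pattern will need adjusting to whatever your composition output actually looks like:

```python
# Sketch: scan a saved composition log for runs of consecutive record numbers
# that produced warnings or errors. The log file name and the "record <n>"
# pattern are assumptions about the log format.
import re

RECORD_RE = re.compile(r"record\s+(\d+)", re.IGNORECASE)

bad_records = set()
with open("composition_log.txt") as f:
    for line in f:
        if "error" in line.lower() or "warning" in line.lower():
            m = RECORD_RE.search(line)
            if m:
                bad_records.add(int(m.group(1)))

# Report consecutive runs -- a stretch of back-to-back problem records usually
# points to a systemic data or rule issue rather than one-off spoilage.
run = []
for n in sorted(bad_records) + [None]:
    if run and (n is None or n != run[-1] + 1):
        if len(run) >= 3:
            print(f"consecutive problems: records {run[0]}-{run[-1]}")
        run = []
    if n is not None:
        run.append(n)
```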

 

As a last line of defence, spot-check by pulling sheets during the print run.

 

Perfection on a long run or data file is nearly impossible. There is no bullet-proof way to catch every error caused by weird data, especially when you don't control the data. Just implement common-sense QC procedures at every step and do as much testing as cost and time allow.


