
tobarstep

Registered Users - Approved
Posts: 71

Posts posted by tobarstep

  1. I realize this post is almost 10 years old, but did you ever find a resolution to this?

     

    What's really strange is that I haven't visited this site in many months until today, and I come across this thread. Sadly, I don't remember what became of this. We've upgraded FP quite a few times since then.

  2. So, I have solved my own issue by doing something I didn't think would work. It turns out the quad attribute of the <p> tag can have more than one value, so starting my text with <p br="false" quad="JC"> ends up both centering AND justifying. Interestingly, quad="CJ" does not work.
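A rule built on this can be as small as a one-line wrapper. This is only a sketch of the approach described above; the function name is hypothetical, and it assumes the rule's output is treated as tagged text so the <p> tag takes effect.

```javascript
// Sketch: prepend the combined quad value so the paragraph is fully
// justified with its last line centered. Hypothetical helper, not a
// built-in FusionPro function; assumes tagged-text output.
function justifyCenterLastLine(text) {
    return '<p br="false" quad="JC">' + text;
}
```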
  3. I don't think this is necessarily a JavaScript question, but it seemed a good place to get answers. I have a problem posed to me by our design team that they've been unable to overcome. They have a block of text that needs to be fully justified in the text box, but whatever is left on the final line needs to be centered. Applying justification by itself causes the final line to be left-aligned. I've tried several combinations of the <span> and <p> tags to no avail. I can get the final line centered, but even when it's wrapped in a <span> tag it forces all of the preceding text to be centered instead of fully justified. Of course, this was done on static text, which the final product will not have, making the issue even more complex. Any ideas?
  4. One thing to note about standard Code 39 is that the asterisk (*) is used to delimit the data and can't normally appear in the data. That yours is allowing it and returning a /J seems to indicate that you're actually using an extended Code 39, as described here and here. I don't know about the /M unless your dash is being interpreted as an en or em dash. The second link I posted also indicates that if you are using the extended ASCII Code 39, your scanner needs to be set up for it or it will return the data as /J, etc.

     

    I'd try encoding just some standard alphanumeric data [0-9][A-Z] and see how that scans.
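The /J and /M pairs line up with the common extended Code 39 tables: ASCII 33 ('!') through 44 (',') are encoded as the pairs /A through /L, so '*' (ASCII 42) comes back as /J, while '-' is in the base character set and needs no pair. Here is a sketch of that mapping (punctuation range only; control characters, lowercase, and the % pairs are omitted):

```javascript
// Sketch of the extended Code 39 pair encoding for the punctuation
// range: '!'..',' (ASCII 33..44) map to "/A".."/L"; '/' itself maps
// to "/O"; '-', '.', space, digits, and A-Z are base-set characters.
function encodeExtended(ch) {
    var code = ch.charCodeAt(0);
    if (code >= 33 && code <= 44) {
        return "/" + String.fromCharCode(65 + code - 33);  // "/A".."/L"
    }
    if (ch === "/") return "/O";
    if (/[0-9A-Z\-. ]/.test(ch)) return ch;  // base set, passes through
    throw new Error("mapping not shown for: " + ch);
}
```

So a scanner not configured for extended Code 39 reports the raw pair (/J) instead of the decoded character (*).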

  5. I've never used the drag and drop before, but I whipped up a quick sample and it worked as intended. I'm attaching a jpg screenshot of what my rule looked like in the drag and drop editor window (my field name was "Qty_S" and I used 25pt font, but you get the idea).

     

    After I saved the rule I inserted it into a text frame and it worked fine. Incidentally, if you put in the rule that way and then click the button to convert the rule to JavaScript you get essentially the combination of what users FreightTrain and step posted.

    2-25-20152-03-04PM.jpg

  6. This is 100 percent by design. All the log, intermediate, and other files written by FusionPro are now 16-bit Unicode files. The major change in FusionPro 9.2 was the addition of support for Unicode throughout the system, including Japanese and Chinese file name and UI support. The Notepad app in Windows Vista and later should be able to display these files properly, although if you're running XP, you may need to use a different text file editing/reading app.

     

    Note that there may be some line ending inconsistencies in certain files, especially in log (.msg) files which have had lines with \n or \r characters written to them via the Print() function in JavaScript, although files such as CFG and DIF files should all have consistent line endings.

     

    OK, that makes perfect sense with what we've seen. We upgraded to 9.3 from 8.7 so this was the first time we'd seen the files as Unicode. Now that we've made the adjustments to our scripts everything works just fine.

  7. I don't see it mentioned anywhere here, so I'll throw this in since we are currently using the pre-release 9.3.12 version. We haven't noticed any real issues but there are a few minor quirks that have popped up.

     

    1) The rule editor window is displaying fonts at an unreadable size (extremely small). The problem seems to be that no font is selected in the editor window settings; opening the settings dialog and selecting a font fixes it. This is on upgraded systems that had previously displayed correctly.

     

    2) Files like the "cfg" and "msg" now seem to be Unicode instead of ASCII. We had some users open a cfg in notepad.exe to make changes, and it appeared to them as one long run-on line. I had them open it in another text editor (Notepad++ in this case) and it displayed fine. Maybe the line terminators have been changed from CRLF to just CR? We also have some VBScripts running externally that parse msg files looking for certain data. I had to alter the scripts to force the text streams to be read as Unicode; otherwise they returned gibberish, as I assume they were defaulting to reading ASCII.

     

    Neither of these is necessarily "broken", as we've taken care of both. I just wanted to bring them up as much for clarification as for reporting any issues.

     

    (I'll need to update my signature from 8.2.7 to 9.3.12)

  8. This is discussed at length in the announcement topic here:

    http://forums.pti.com/showthread.php?t=3982

     

    It has been fixed for the upcoming 9.3.12 release.

     

    Thanks. I was having trouble figuring out what was causing the preprocessing in one of my templates and not the other. It seems someone had incorrectly set the imposition in one of them to stack even though the data files are only 1 record each. I guess I'll have to postpone testing anyway since we do have many more jobs that do require a stacked imposition.

  9. I posted this in the VDP Producer API (server) forum but maybe that wasn't the proper place for it, so I'm linking it here. Anyone know if this is an actual issue or something that can be corrected with a change to a setting somewhere? If it can't be corrected, then it might actually stop us from upgrading as it would cause conflicts in a number of automated processes we have.

     

    http://forums.pti.com/showpost.php?p=15883&postcount=1

     

    Thank you.

  10. CreateResource() is returning an error because the file you're looking for doesn't exist. You could wrap it in a try/catch block. So, if you're only looking at aao_1 when aao_2 is empty, this worked for me:

     

    if (Field("aao_2") == "") {
        try {
            CreateResource(Field("aao_1") + ".pdf");
            activatePage(1, 2);
            return;
        } catch (e) {
            // CreateResource threw: the PDF doesn't exist, so fall
            // through to the default pages below.
        }
    }

    activatePage(3, 4);
    

     

    Of course in my testing I didn't have your custom function so I was just using FusionPro.Composition.SetBodyPageUsage(), but the principle is the same.
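For anyone reading along, here is a guess at how a helper like activatePage() could be built on FusionPro.Composition.SetBodyPageUsage(). The helper name and page numbering are assumptions for illustration, and the stub object exists only so the sketch runs outside FusionPro; in a real rule you would delete it and use the built-in global.

```javascript
// Stub standing in for FusionPro's global, only so this sketch can be
// run and checked outside FusionPro. Not part of the real API usage.
var activated = {};
var FusionPro = {
    Composition: {
        SetBodyPageUsage: function (page, used) { activated[page] = used; }
    }
};

// Hypothetical helper: mark body pages first..last as used for the
// current record, e.g. from an OnRecordStart rule.
function activatePage(first, last) {
    for (var p = first; p <= last; p++) {
        FusionPro.Composition.SetBodyPageUsage(p, true);
    }
}

activatePage(1, 2);  // pages 1 and 2 are marked used
```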

  11. We are in the process of upgrading our server installs to v9.3.9 from v8.2.7. I've been running some parallel tests and have noticed an issue with the output file names. Under v9, whenever a job has some preprocessing, the resultant output file name has a "1" appended to it. We have not seen this behavior under v8. Nor do we see this under v9 when a job has no preprocessing.

     

    It seems like the output file is actually being created twice. For example, if I already have a file "Sample.pdf" residing in the output folder and I run a new job that will compose "Sample.pdf", then under v8 the file gets deleted and the new file takes its place with the same name. Under v9 the file still gets deleted but the final output file that gets created is actually "Sample1.pdf". Even if the output file does not previously exist, the "1" is still appended to the file name. We specify the output file name in the command-line invocation but it is still being altered.

     

    I'm attaching 2 "msg" files (extension changed to "txt" so I could upload) that show I'm using the same set of "cfg" and "dif" files for the preprocessing job.

     

    So, is there a setting somewhere I can change for this? I don't see anything in the "cfg" for the template.

    version8_preprocess_example.txt

    version9_preprocess_example.txt

  12. Don't get me wrong, I avoid .csv myself. It just seemed odd that that is what works in this (my) case. That's the dilemma to me: it works on my system and not on yours. I am attaching the data file I used for mine. Plug it into yours and let's see.

     

    Your file works for me as well. Looking at your data, though, I can see that whatever application you exported it from added the double-quote text qualifiers by default. This furthers my suspicion that FP is treating quotes as text qualifiers rather than simply as characters in the data whenever they appear at the start of a field. For a CSV that's fine; for a tab-delimited file I don't think that is desired behavior. I guess my only recourse at this point is to add an extra logic layer prior to FP composition to reformat fields containing double quotes.

     

    It also looks like the quotes that are inside the string got converted to a "fancy" quote of some sort as the character didn't transfer well from the Mac to the PC, but that's not really an issue.

    textqual.png
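The "extra logic layer" mentioned above could be a small pre-composition pass over the tab-delimited file. This sketch assumes (unconfirmed for FP's parser) that CSV-style qualifier escaping is honored, i.e. wrapping a field in quotes and doubling the embedded quotes:

```javascript
// Pre-process a tab-delimited record: any field containing a double
// quote is wrapped in qualifiers with embedded quotes doubled, so a
// qualifier-aware parser reads the original text back literally.
function escapeField(field) {
    if (field.indexOf('"') !== -1) {
        return '"' + field.replace(/"/g, '""') + '"';
    }
    return field;
}

function escapeRecord(line) {
    return line.split("\t").map(escapeField).join("\t");
}
```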

  13. Did you see my screenshot? It worked fine. I took a FileMaker file, exported it as a merge file (a csv file), placed it treating fields as formatted data, and it works fine.

     

    Tab delimited files with a quote as the first character are particularly evil. Excel routinely sprinkles quotes and single quotes at the beginning of fields.

     

    That being said, did you get yours to work? If not, try .csv.

     

    Mark

     

    Thanks for looking at it, but CSV is not an ideal format for my purposes. Since my data frequently contains commas, CSV would necessitate the use of a text qualifier, which is typically double quotes. I suspect that is what is happening here: even though I'm using tab delimited and not CSV (which is really just comma delimited), FP is still seeing the double quotes as text qualifiers. That sort of defeats the purpose of using a tab delimited format.

     

    Though, just out of curiosity I tried a CSV file, but got the same results anyway.

    examplecsv.png

  14. But, back to the original issue: has it been in Excel? I was going to send back a small chunk that has not touched Excel to see if it makes a difference for you.

    I understand the data is sensitive. Can you make a small (5-record) chunk and strip any sensitive info?

    Mark

     

    Update: I made a small file with quotes at the beginning, middle, and end, and exported it as csv. If I can attach a screenshot here, you should see it does work.

     

    No, the data has never been in Excel. Here is a screenshot of a test I just did. I typed this directly in Notepad, hitting tab between fields. I had to chop it up some to stay within the file size limits, but it should all be visible.

    example.png

  15. Thanks Mark. Unfortunately it is a bit sensitive still.

     

    I did do some experimenting, though, and found that if a field has a double quote in the middle, it is fine. It's only when the first character of the field is a double quote that it causes a problem.

     

    I have a date field in the data which is

    5/7/2009

    If I put quotes around the middle

    5/"7/"2009

    then that is exactly what displays. If instead I put quotes at the front

    "5/"7/2009

    then all I get is the 5/.
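That truncation pattern is exactly what a qualifier-style field reader produces. The following is a minimal re-creation for illustration, not FP's actual parsing code; it simply treats a leading double quote as an opening qualifier and stops at the closing one:

```javascript
// If a field starts with a double quote, take only the text up to the
// next quote and drop the rest; otherwise take the field literally.
// This reproduces "5/"7/2009 -> 5/ while 5/"7/"2009 passes through.
function readField(raw) {
    if (raw.charAt(0) !== '"') return raw;   // no leading qualifier
    var close = raw.indexOf('"', 1);
    return close === -1 ? raw.slice(1) : raw.slice(1, close);
}
```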

  16. I'll see if I can attach a screenshot of what I'm talking about. This is a shot of the preview record selector, which I don't believe is impacted by any particular formatting rules. In the tab-delimited text file, that field has the full text with the double quotes around the AAA, but it is not being read for some reason.

    doublequotes.png

  17. Thank you for your reply, Alex. Unfortunately the data is not being read into FP to begin with as far as I can tell. The specific record in question is for a sale sign for batteries. In the text file, the field reads

    "AAA", 8-ct package

    but when looking at the record in either the preview record selector or the Fields tab in the Show Building Blocks dialogue, all that appears for that field is

    AAA

    In my rules I am already using the normalize entities function.

  18. Using FP 6.0P1e desktop/WinXP and server/2003 (happening on both)

     

    I'm composing from a tab-delimited text file, but when I attempt to compose a record with double quotation marks included in the data, FP seems to consider those to be further text delimiters within a field. I have my data source definition set up as a tab delimited file, and I have tried changing the option "Treat field values as tagged text". Neither way made any difference.

     

    This isn't the actual data, but for example a field containing:

    "A"bc

    is only being read into FP as

    A

    , ignoring both the quotes and the text outside the quotes. This is strictly for that field however, as the rest of the record shows fine.

     

    Is there a way around this? I don't always have control over the data that I'm sent, and though I have asked them to refrain from using double quotes in the future I'd rather find a way to make it work.
