<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body text="#000000">
<p>I'm using DIF files for importing, and when the import file has
line feeds it tells me that it's not a valid DIF file. I'm pretty
sure it's line feeds and not carriage returns causing the issues
for me, because when I did a search for Ctrl+J it didn't find
anything.</p>
<p>Just to test it out, I used Excel's CLEAN function to remove all
of the CRs and LFs before importing the data, which worked and I
didn't get any errors. The problem is that now everything is lumped
together into one big paragraph, which is what I was trying to
avoid in the first place.</p>
<p>If I added a "~" or something unique to the beginning and the
end of the cells that have the line feeds before saving the file
as a DIF for importing, and told filepro that the "~" (or whatever)
is the field marker, would it still give me the "not a valid DIF
file" error if that field still contains line feeds? And if that
actually works, will the line feeds actually show in filepro after
the import? <br>
</p>
<p>If this lets me import the data into the memo with all of the
line feeds intact, that would be great, although I'm sure I'll
need to change them to something else before exporting if
filepro is going to stop the export at the first LF because it
recognizes it as an end-of-field marker, the same way it treats
CRs. <b>Robert Helbing</b> posted a nice example of how he goes
about doing that, though, so that'll be a big help. :)</p>
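<p>For example, here's a rough sketch of that swap in Python,
assuming the sheet were saved out as CSV first; the file names and
the "~" marker are just placeholders, not anything
filepro-specific:</p>
<pre>import csv

SRC = "vendor_data.csv"        # hypothetical file saved from Excel
DST = "vendor_data_clean.csv"  # cleaned copy to convert and import from
MARK = "~"                     # stand-in for the embedded line breaks

with open(SRC, newline="") as fin, open(DST, "w", newline="") as fout:
    reader = csv.reader(fin)   # csv handles multi-line quoted cells
    writer = csv.writer(fout)
    for row in reader:
        # Swap CR/LF inside each cell for the marker so no raw line
        # breaks survive into the file that gets imported.
        writer.writerow([cell.replace("\r\n", MARK)
                             .replace("\r", MARK)
                             .replace("\n", MARK)
                         for cell in row])
</pre>
<p>Because the csv reader knows which line breaks are inside quoted
cells, only those get swapped; the real record breaks are left
alone. After the import, processing could translate the "~" back
into real line breaks in the memo, and do the reverse again before
exporting.</p>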
<p><br>
</p>
<p><font color="#3333ff">* Or maybe pre-processing the data is the
most practical way to go after all. <br>
</font></p>
<p>I'm always pre-processing the data in the worksheets I receive
from my vendors anyway, so one more step isn't going to be an
issue.<br>
</p>
<p><font color="#3333ff">* If there are CR's or LF's or both within
the cells, then how ARE the records delimited, if not by CR?
Maybe you don't need to do anything but adjust the record
delimiter option on the import command line in processing. That
would be much better than cobbling together some external
pre-processing steps.</font></p>
<p>I just save the XLS file as a DIF after I sort out and arrange
the fields for the import. I always know how many columns are in
the import file, so I just do an END after the last one when doing
the import; I'm not actually looking for delimiters.<br>
</p>
<p>Mike<br>
</p>
<p><br>
</p>
<br>
<div class="moz-cite-prefix">On 8/29/2017 12:14 PM, Richard D.
Williams via Filepro-list wrote:<br>
</div>
<blockquote type="cite"
cite="mid:627f002b-bfaa-23c3-f89c-a6a9c2abb88d@appgrp.net">This
tip from Mark will not work;
<br>
<br>
Why not just translate the \r to \001 before import, and then
whenever
<br>
outputting or otherwise referencing the data on the way out,
translate \001
<br>
to \r?
<br>
<br>
If you replace one \r with a \001, you replace all. Therefore the
row terminator is still not unique.
<br>
<br>
There is a PHP class that will read column values directly from an
xlsx file.
<br>
<br>
<a class="moz-txt-link-freetext" href="https://stackoverflow.com/questions/13439411/from-xlsx-sheet-rows-to-and-php-array">https://stackoverflow.com/questions/13439411/from-xlsx-sheet-rows-to-and-php-array</a>
<br>
<br>
But, as Mike is not a programmer, this may not be a solution for
him to pursue.
<br>
<br>
Richard D. Williams
<br>
<br>
On 8/28/2017 6:27 PM, Brian K. White via Filepro-list wrote:
<br>
<blockquote type="cite">On 8/28/2017 3:16 PM, Fairlight via
Filepro-list wrote:
<br>
<blockquote type="cite">They call it -logic-. ...
<br>
</blockquote>
<br>
I don't see that Mike did anything wrong deserving of these
responses.
<br>
<br>
He's not a programmer and lacked the background to make sense of
your initial answer. That is not illegal. Once you establish
that, just switch into that mode.
<br>
<br>
<br>
Mike, this kind of thing can be addressed in several possible ways,
and Mark's first answer was really just meant as a general
description or basic outline of one possible approach, not any
kind of exact commands or details. No one could offer any exact
details yet at that point because we didn't yet know enough
details about your situation. First comes picking a possible
approach, then comes hashing out what exact details are required
to make that happen.
<br>
<br>
That idea might not have been practical for you for some reason,
or might be doable but might not be the most convenient option
available. It was too early at that point to do more than
essentially spitball some different basic possible approaches.
<br>
<br>
For instance:
<br>
<br>
* If there are CR's or LF's or both within the cells, then how
ARE the records delimited, if not by CR? Maybe you don't need to
do anything but adjust the record delimiter option on the import
command line in processing. That would be much better than
cobbling together some external pre-processing steps.
<br>
<br>
* Or, you could do your own more manual parsing of the data by
using open and readline instead of import. Then you could detect
incomplete records, read in the next line, concatenate it to
the previously read line, and repeat until you know you have read
in a complete record (by counting the commas that weren't inside
any quotes or something). This would not be convenient to write
at all, but its one and only advantage is that it would be done
entirely in filepro processing and doesn't depend on any
external programs or batch files, and once it's done, it's done.
It'll then "just work" forever.
<br>
<br>
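Purely as a sketch of the shape of that loop (Python here just for
illustration; filepro processing would do the same job with open,
readline, and its own string handling, and the comma counting is
the crude version, not bulletproof):
<br>
<pre>def read_records(path, fields_per_record):
    # Accumulate physical lines until a logical record looks complete.
    record = ""
    with open(path) as f:
        for line in f:
            record += line
            # Count commas that sit outside double quotes.
            in_quotes = False
            commas = 0
            for ch in record:
                if ch == '"':
                    in_quotes = not in_quotes
                elif ch == "," and not in_quotes:
                    commas += 1
            # Enough separators and no quote left open: record is done.
            if not in_quotes and commas >= fields_per_record - 1:
                yield record
                record = ""
</pre>
<br>
<br>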
* Or, if you control the export, then maybe you could alter the
export to make it possible for fp to import it reliably. (Again,
even this one idea has many possible meanings, and it's impossible
to guess them before knowing the first thing: do you control
the export? Only IF that is true, the next question would be
what program you are exporting from. Maybe that program has some
option to specify the record delimiter, or if not, maybe you can
add another column to the data that you could then look for in
processing to detect the real end of the record. Or maybe you
could export in a fixed-length format, and then the filepro
import wouldn't even look at any LF or CR at all. They would
just be bytes like any other bytes.)
<br>
<br>
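Just to make the fixed-length idea concrete (again a throwaway
Python sketch; the sample rows and widths are made up): pad every
field to a known width and the reader counts bytes instead of
hunting for CR or LF.
<br>
<pre>rows = [["Acme Tools", "line one\nline two", "19.95"],
        ["Widget Co", "single line note", "7.50"]]
widths = [20, 120, 10]           # assumed field widths

with open("export.fix", "w", newline="") as out:
    for row in rows:
        # Truncate or pad each cell to its width; embedded LFs are just
        # bytes inside the field, and nothing terminates the record.
        rec = "".join(str(cell)[:w].ljust(w) for cell, w in zip(row, widths))
        out.write(rec)
</pre>
<br>
<br>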
* Or maybe pre-processing the data is the most practical way to
go after all.
<br>
<br>
</blockquote>
<br>
</blockquote>
<br>
</body>
</html>