<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
Mike,<br>
<br>
I'd avoid using the standard import formats (WORD, DIF, etc.) when
doing anything non-standard. You end up chasing formatting issues
like the ones you point out here. Instead, use ASCII or READLN. <br>
With IMPORT ASCII you can use my previous idea of saving the file
with only LF (not CR/LF) as an EOL (in Notepad++ it's under
Edit &gt; EOL Conversion) and then instruct Filepro to use LF as
the record separator. The CRs in the notes will remain and
shouldn't cause Filepro to read them as a new record.<br>
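If you'd rather script that conversion than do it by hand in Notepad++, the same idea can be sketched in a few lines of Python (the assumption that records end in CR/LF while in-note breaks are bare CRs is mine; adjust to your actual export):<br>

```python
# Normalize record terminators to bare LF while keeping embedded CRs.
# Assumption: each record ends in CR/LF, while line breaks inside the
# note fields are bare CRs that must survive the import untouched.
def normalize_eol(data: bytes) -> bytes:
    # Replacing only the CR/LF pair leaves lone CRs (inside notes) alone.
    return data.replace(b"\r\n", b"\n")

# Two records; the first has an embedded CR inside its note field.
raw = b"1,note line one\rnote line two\r\n2,plain note\r\n"
print(normalize_eol(raw))
```

After that, IMPORT ASCII with LF as the record separator should see exactly two records, CRs intact.<br>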
<br>
Boaz<br>
<br>
<blockquote type="cite">
<pre wrap=""><div class="moz-txt-sig">Date: Tue, 29 Aug 2017 14:31:01 -0400
From: Mike Fedkiw <a class="moz-txt-link-rfc2396E" href="mailto:mfedkiwfp@gmail.com">&lt;mfedkiwfp@gmail.com&gt;</a>
To: "Richard D. Williams" <a class="moz-txt-link-rfc2396E" href="mailto:richard@appgrp.net">&lt;richard@appgrp.net&gt;</a>, Filepro List
<a class="moz-txt-link-rfc2396E" href="mailto:filepro-list@lists.celestial.com">&lt;filepro-list@lists.celestial.com&gt;</a>
Subject: Re: Importing/exporting data with carriage returns (I Think)
Message-ID: <a class="moz-txt-link-rfc2396E" href="mailto:1a608328-273b-1056-f7ef-7bbee51f8cf7@gmail.com">&lt;1a608328-273b-1056-f7ef-7bbee51f8cf7@gmail.com&gt;</a>
Content-Type: text/plain; charset="utf-8"; Format="flowed"
I'm using DIF files for importing and when the import file has line
feeds it tells me that it's not a valid DIF file. I'm pretty sure it's
line feeds and not carriage returns causing the issues for me because
when I did a search for Ctrl+J it didn't find anything.
Just to test it out, I used Excel's CLEAN to remove all of the CRs and
LFs before importing the data, which worked and I didn't get any errors.
The problem is that now everything is lumped together into one big
paragraph, which is what I was trying to avoid in the first place.
If I added a "~" or something unique to the beginning and the end of
the cells with the line feeds before saving it as a DIF for importing,
and I tell filepro that the "~" (or whatever) is the field marker,
is it still going to give me the "not a valid DIF file" error if that
field still contains line feeds? And if that actually works, will the
line feeds actually show in filepro after the import?
If this enables me to import the data into the memo with all of the line
feeds intact that would be great although I'm sure I'll be needing to
change them to something else before exporting if filepro is going to be
stopping the export after the first LF because it'll recognize it as an
end of field marker similar to CR's. <b class="moz-txt-star"><span class="moz-txt-tag">*</span>Robert Helbing<span class="moz-txt-tag">*</span></b> posted a nice
example of how he goes about doing that though so that'll be a big help, <span class="moz-smiley-s1" title=":)"></span>
* Or maybe pre-processing the data is the most practical way to go after
all.
I'm always pre-processing the data in the worksheets I receive from my
vendors anyway so one more step isn't going to be an issue.
* If there are CR's or LF's or both within the cells, then how ARE the
records delimited, if not by CR? Maybe you don't need to do anything but
adjust the record delimiter option on the import command line in
processing. That would be much better than cobbling together some
external pre-processing steps.
I just save the XLS file as a DIF after I sort out and arrange the
fields for the import. I always know how many columns are in the import
file so I just do an END after the last one when doing the import; I'm
not actually looking for delimiters.
Mike
On 8/29/2017 12:14 PM, Richard D. Williams via Filepro-list wrote:
</div></pre>
<blockquote type="cite" style="color: #000000;">
<pre wrap="">This tip from Mark will not work:
Why not just translate the \r to \001 before import, and then whenever
outputting or otherwise referencing the data on the way out, translate
\001
to \r?
If you replace one \r with a \001, you replace all. Therefore the row
terminator is still not unique.
There is a PHP Class that will read column values directly from an xls
file.
<a class="moz-txt-link-freetext" href="https://stackoverflow.com/questions/13439411/from-xlsx-sheet-rows-to-and-php-array">https://stackoverflow.com/questions/13439411/from-xlsx-sheet-rows-to-and-php-array</a>
But, as Mike is not a programmer, this may not be a solution for him
to pursue.
Richard D. Williams
On 8/28/2017 6:27 PM, Brian K. White via Filepro-list wrote:
</pre>
<blockquote type="cite" style="color: #000000;">
<pre wrap="">On 8/28/2017 3:16 PM, Fairlight via Filepro-list wrote:
</pre>
<blockquote type="cite" style="color: #000000;">
<pre wrap="">They call it -logic-. ...
</pre>
</blockquote>
<pre wrap="">I don't see that Mike did anything wrong deserving of these responses.
He's not a programmer and lacked the background to make sense of your
initial answer. That is not illegal. Once you establish that, just
switch into that mode.
Mike, this kind of thing can be addressed several possible ways, and
Mark's first answer was really just meant as a general description or
basic outline of one possible approach, not any kind of exact
commands or details. No one could offer any exact details yet at that
point because we didn't yet know enough details about your situation.
First comes picking a possible approach, then comes hashing out what
exact details are required to make that happen.
That idea might not have been practical for you for some reason, or
might be doable but might not be the most convenient option
available. It was too early at that point to do more than essentially
spitball some different basic possible approaches.
For instance:
* If there are CR's or LF's or both within the cells, then how ARE
the records delimited, if not by CR? Maybe you don't need to do
anything but adjust the record delimiter option on the import command
line in processing. That would be much better than cobbling together
some external pre-processing steps.
* Or, you could do your own more manual parsing of the data by using
open and readline instead of import. Then you could detect incomplete
records and read in the next line, and concatenate to the previously
read line, repeat until you know you have read in a complete record
(by counting the commas that weren't inside any quotes or something).
This would not be convenient to write at all, but its one and only
advantage is that it would be done entirely in filepro processing and
doesn't depend on any external programs or batch files, and once it's
done, it's done. It'll then "just work" forever.
* Or, if you control the export, then maybe you could alter the
export to make it possible for fp to import it reliably. (Again, even
this one idea has many possible meanings. It's impossible to guess them
before even knowing the first thing: "if you control the export".
Only IF that is true, the next question would be, what program are
you exporting from? Maybe that program has some option to specify the
record delimiter, or if not, maybe you can add another column to the
data that you could then look for in processing to detect the real
end of the record. Or maybe you could export in a fixed-length
format, and then the filepro import wouldn't even look at any LF or
CR at all. They would just be bytes like any other bytes.)
* Or maybe pre-processing the data is the most practical way to go
after all.
</pre>
</blockquote>
<pre wrap="">_______________________________________________
Filepro-list mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Filepro-list@lists.celestial.com">Filepro-list@lists.celestial.com</a>
Subscribe/Unsubscribe/Subscription Changes
<a class="moz-txt-link-freetext" href="http://mailman.celestial.com/mailman/listinfo/filepro-list">http://mailman.celestial.com/mailman/listinfo/filepro-list</a>
</pre>
</blockquote>
<pre wrap=""></pre>
</blockquote>
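P.S. Brian's readline-and-concatenate idea from the quoted thread (keep appending physical lines until you've seen a full record's worth of commas outside quotes) would look roughly like this. The field count is a placeholder, Python stands in for filepro processing purely as illustration, and real CSV escaping (doubled quotes) would need more care:<br>

```python
import io

def commas_outside_quotes(line: str) -> int:
    """Count commas that are not inside double-quoted fields."""
    in_quotes = False
    count = 0
    for ch in line:
        if ch == '"':
            in_quotes = not in_quotes
        elif ch == ',' and not in_quotes:
            count += 1
    return count

def read_records(stream, n_fields: int):
    """Join physical lines until a logical record has n_fields fields."""
    buf = ""
    for line in stream:
        line = line.rstrip("\n")
        buf = buf + "\n" + line if buf else line
        # A complete record has exactly n_fields - 1 unquoted commas.
        if commas_outside_quotes(buf) == n_fields - 1:
            yield buf
            buf = ""

# A note field containing a literal line break spans two physical lines.
sample = io.StringIO('1,"a note\nwith a break",x\n2,"plain",y\n')
for rec in read_records(sample, 3):
    print(repr(rec))
```

The embedded break survives inside the first record instead of splitting it in two.<br>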
<br>
</body>
</html>