
Which approach - any advice

Hi, I'm looking for some advice on the best way to manage a big project in Pentaho.

We're working with a large number of different data streams (over 400) that are published as CSV files online. We have no ownership over the publication procedures, so we often get inconsistent data, e.g. columns renamed, extra rows added above the header, different date formats.

The data is transaction data, so even the most complex files have no more than 12 columns, and most have only four or five. There is a lot of commonality here: every file contains at least these columns: buyer, supplier, value and date.

It seems I can take one of two approaches to handle this:

1. Create a transformation for each data stream and maintain/update each one as its data file changes.

2. Create a form of lookup table that records which data is in which column and what format the data is in, then create a single transformation per format variation (e.g. Org 1 = date(mm/dd/yyyy), value($0.00), buyer(varchar255), supplier(varchar255)).

The second option looks the more efficient in the long term, but it seems complex to develop and might stretch my meagre skills. If the second option is the better one, what would be the best way to implement it? A rough sketch of what I have in mind follows below.
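To make option 2 concrete, here is a minimal sketch in Python of the kind of lookup table and generic reader I have in mind. All the names and the two example streams are made up for illustration; in PDI itself I imagine this would become a single parameterised transformation, perhaps driven by the ETL Metadata Injection step.

```python
import csv
from datetime import datetime
from decimal import Decimal

# Hypothetical lookup table: one entry per data stream, recording which
# source column holds each common field, the date format used, and how
# many rows to skip before the real header.
STREAM_METADATA = {
    "org1": {
        "columns": {"buyer": "Buyer", "supplier": "Supplier",
                    "value": "Amount", "date": "TxnDate"},
        "date_format": "%m/%d/%Y",
        "header_rows": 1,
    },
    "org2": {
        "columns": {"buyer": "purchaser", "supplier": "vendor",
                    "value": "total", "date": "transaction_date"},
        "date_format": "%d-%m-%Y",
        "header_rows": 3,  # extra rows added above the real header
    },
}

def read_stream(stream_id, path):
    """Yield normalised (buyer, supplier, value, date) rows from one CSV file."""
    meta = STREAM_METADATA[stream_id]
    cols = meta["columns"]
    with open(path, newline="") as f:
        # Skip any extra rows the publisher has added above the header.
        for _ in range(meta["header_rows"] - 1):
            next(f)
        for row in csv.DictReader(f):
            yield {
                "buyer": row[cols["buyer"]].strip(),
                "supplier": row[cols["supplier"]].strip(),
                "value": Decimal(row[cols["value"]].lstrip("$").replace(",", "")),
                "date": datetime.strptime(row[cols["date"]], meta["date_format"]).date(),
            }

# Usage: for rec in read_stream("org1", "org1_2024.csv"): ...
```

The appeal is that when a publisher renames a column or changes a date format, I would only edit that stream's metadata entry rather than maintaining one of 400-plus separate transformations.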

Any advice?

W
