Pentaho Community Forums

Problem making a snowflake schema

Hello everybody,

I'm using the schema workbench to create a snowflake schema.
The tables I have are:
Fact_table, dim_brand_model, dim_brand, dim_group
linked to each other like this:

FT_IMMAT (BRAND_MODEL_ID FK) <----> DIM_BRAND_MODEL(BRAND_MODEL_ID PK, BRAND_ID FK) <----> DIM_BRAND(BRAND_ID PK, GROUP_ID FK) <----> DIM_GROUP(GROUP_ID PK)

I am able to make the relation between the first three tables using the following code, but I can't create the relation between dim_brand and dim_group.

Code:

<Schema name="immat">
  <Cube name="cube_immat" visible="true" cache="true" enabled="true">
    <Table name="ft_immat">
    </Table>
    <Dimension type="StandardDimension" visible="true" foreignKey="brand_model_id" highCardinality="false" name="Dim brand_model">
      <Hierarchy name="Brand" visible="true" hasAll="true" primaryKey="brand_model_id" primaryKeyTable="dim_brand_model">
        <Join leftKey="brand_id" rightKey="brand_id">
          <Table name="dim_brand_model">
          </Table>
          <Table name="dim_brand">
          </Table>
        </Join>
        <Level name="Brand" visible="true" table="dim_brand" column="brand_id" type="String" uniqueMembers="false" levelType="Regular" hideMemberIf="Never">
        </Level>
      </Hierarchy>
    </Dimension>
....
  </Cube>
</Schema>
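
For what it's worth, Mondrian's <Join> element can be nested, with the right-hand side of a join being another join; that should cover the dim_brand -> dim_group hop. A sketch of the full hierarchy (untested; it reuses the keys from the diagram above, and the rightAlias attribute, which tells the outer join which inner table brand_id belongs to, may need adjusting):

Code:

<Hierarchy name="Brand" visible="true" hasAll="true" primaryKey="brand_model_id" primaryKeyTable="dim_brand_model">
  <Join leftKey="brand_id" rightAlias="dim_brand" rightKey="brand_id">
    <Table name="dim_brand_model"/>
    <Join leftKey="group_id" rightKey="group_id">
      <Table name="dim_brand"/>
      <Table name="dim_group"/>
    </Join>
  </Join>
  <Level name="Group" visible="true" table="dim_group" column="group_id" type="String" uniqueMembers="false"/>
  <Level name="Brand" visible="true" table="dim_brand" column="brand_id" type="String" uniqueMembers="false"/>
</Hierarchy>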

Has anyone faced the same problem before, or does anyone have information about how to build a snowflake schema?

I appreciate any help

Thank you all

Logging Issue with Kettle

I am having issues with logging in Kettle using the Java API. When multiple Kettle jobs run at the same time, every job writes into every log file, even though I have provided a log file name for each job and each job should be writing to a separate log file in a unique location.
When the jobs run concurrently, the logs overlap: the output from JOB A is written to JOB B's log, and so on. Is there a way to prevent this?
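
Kettle keeps a single central log buffer shared by everything in the JVM, so per-job files have to be filtered by the job's log channel id. A minimal sketch of that approach (PDI 5.x API from memory; verify the class names and VFS package against your version, and it assumes KettleEnvironment.init() has already been called):

Code:

import org.apache.commons.vfs.FileObject;            // commons-vfs 1 in PDI 5.x
import org.pentaho.di.core.logging.LogChannelFileWriter;
import org.pentaho.di.core.vfs.KettleVFS;
import org.pentaho.di.job.Job;
import org.pentaho.di.job.JobMeta;

public class PerJobLogging {
    public static void runWithOwnLog(JobMeta jobMeta, String logFilename) throws Exception {
        Job job = new Job(null, jobMeta);
        FileObject logFile = KettleVFS.getFileObject(logFilename);
        // The writer copies only the log lines that belong to this job's
        // channel id, so concurrent jobs no longer bleed into each other's files.
        LogChannelFileWriter writer =
            new LogChannelFileWriter(job.getLogChannelId(), logFile, false);
        writer.startLogging();
        job.start();
        job.waitUntilFinished();
        writer.stopLogging();
    }
}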


CALCULATED MEMBER on dimension not in ROWS

Hello,

I created a calculated member that gives me the difference between two dates.
I use this MDX, which works fine:

WITH MEMBER [Measures].[Delay] AS DateDiff("d",CDate([DateInit].[Date].CurrentMember.Properties("dateSQL")),CDate([Date].[Date].CurrentMember.Properties("dateSQL")))
SELECT
NON EMPTY {[Measures].[Delay]} ON COLUMNS,
NON EMPTY NonEmptyCrossJoin([Order].[Order].Members, NonEmptyCrossJoin({[Date].[2015].[mai 2015].[11/05/2015]}, [DateInit].[Date].Members)) ON ROWS
FROM [Sales]

But now I try to execute the same MDX without [DateInit] on ROWS, and it doesn't work:

WITH MEMBER [Measures].[Delay] AS DateDiff("d",CDate([DateInit].[Date].CurrentMember.Properties("dateSQL")),CDate([Date].[Date].CurrentMember.Properties("dateSQL")))
SELECT NON EMPTY {[Measures].[Delay]} ON COLUMNS, NON EMPTY NonEmptyCrossJoin([Order].[Order].Members, {[Date].[2015].[mai 2015].[11/05/2015]}) ON ROWS FROM [Sales]

After a quick search, it seems this is because in this case [DateInit].[Date].CurrentMember.Properties("dateSQL") returns the value for the "All members" member.
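
As a stopgap sketch (untested, reusing the names above), the calculated member could guard against the All member explicitly:

Code:

WITH MEMBER [Measures].[Delay] AS
  IIf(
    [DateInit].[Date].CurrentMember.Level.Ordinal = 0,  // the All level
    NULL,
    DateDiff("d",
      CDate([DateInit].[Date].CurrentMember.Properties("dateSQL")),
      CDate([Date].[Date].CurrentMember.Properties("dateSQL"))))
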
Does anybody have an idea how to solve this properly?

Thank you in advance
JB

How to run xaction in Pentaho CE 5.2?

I have a report which generates PDF output every day when the scheduler runs, and I want the current date in the PDF file name. I am thinking of achieving this with an xaction, but I am not able to run the xaction on the BI server. I googled hard to find a way to run an xaction, but no luck.
Where should I create xaction files for the BI server to run them smoothly?

I have also created a KTR file which runs a Pentaho report and generates PDF output with the date in the file name. This now needs to be wrapped in an xaction file to run within the BI server.

How do I get the Pentaho 5.2 CE BI server to run xaction files?

Pentaho Cassandra plugin support matrix

Hi all. Is there a support matrix available for 'pentaho-cassandra-plugin'?

I recently spent half a day in the debugger and did not succeed in making the Cassandra output step write anything.
I have PDI 5.3 and the 5.3 plugin. We use 'apache-cassandra-1.2.18'.

When I run the PDI Cassandra output step, it shows green. I opened it in the debugger; it uses
https://github.com/pentaho/pentaho-c...dler.java#L159
and the query is executed, but nothing is really written.

Is this a version inconsistency?

Migrating Spoon from W7 to Ubuntu

Hi,

I've been working on a Windows 7 workstation, but now I'm going for an Ubuntu one.

We have a MySQL DB repository.
I dumped it via MySQL Admin,
then I created a database and a user on the local Ubuntu computer,
then I imported the dump into the local DB,
then I tried to connect via Spoon, and the issues started.

After a click on the 'Create or upgrade' button, an admin password is required: I tried every password I have ever used on my machine, with no success.
Still, when I try to connect via the first popup, "Repository connection", I get an error:

"The version of the repository is -1.-1.
This Kettle edition requires it to be at least version 5.0 and as such an upgrade is required.
Then select the 'Edit' button followed by the 'Create or Upgrade' button.
"

I noticed that the tables were created a second time: the table names in the dump were lower case, and the 'Create or upgrade' button created another set of tables in upper case.
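
That case difference suggests MySQL's lower_case_table_names setting differs between the Windows and Ubuntu servers, so Kettle cannot find the lower-case repository tables from the dump. A hedged fix (an assumption based on the symptom, not a verified diagnosis): force lower-case table names on the Ubuntu MySQL server, then drop the upper-case tables and re-import the dump:

Code:

# /etc/mysql/my.cnf -- restart MySQL afterwards
[mysqld]
lower_case_table_names = 1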

Has anyone ever managed to do a similar migration?
Do I have any other option to get this working?

explanation please

I was wondering if anyone can provide me with a scientific explanation for this issue.
I am using WEKA to process some data. When using J48 or JRip on a single feature, the results are higher than using Random Forest on that single feature, but when using Random Forest on several features at the same time, the results are higher than with J48 or JRip.
Can anyone explain why, please?

Streamlined data refinery

Hi guys,
I'm currently looking into the Streamlined Data Refinery approach, starting from this Pentaho page: http://www.pentaho.com/Streamlined-Data-Refinery.

My starting point is that this approach is only possible with Pentaho Data Integration EE, which allows you to create a job to design and publish data sets on demand.

I am asking you in what way it is possible to develop the process to automate data refinery:
- which components are involved in the development?
- is there a guide?

Thanks
Nico

File Existence Transformation

Hi,

Today I am looking for a way to test for a file's existence in a transformation and execute a condition accordingly.

File_Exist_Transformation.JPG

Scenario

I am running the job in the screenshot attached above, with the transformation linked in the "File existence check" entry doing the existence check.

If the file exists, run the true branch and generate some basic output; else (false branch) send a mail with the error message "file not found" through the Mail Error entry mentioned above.

Below is the transformation connected to the file existence check.

File_Exist_testing.JPG

Issue

When I run the transformation, the result is always true, because the transformation linked in "File existence check" always returns a default value of true.
The condition returns true, and since the file was not found in the fetch directory, I get a failure message in the log file from transformation 2, which is looking for the file.

Requirement

Steps for how to check a particular file's existence in any directory through the file-exists feature, and how to execute the above job to get the result.

All the jobs are attached.
Please use the attached transformation and job to get a clear picture.

Internal error on aggregated set with calculated member using parallel period

Using Mondrian 4.0.0-SNAPSHOT (via Saiku's saiku-server) I have a Mondrian 4 schema including the following elements:

<CalculatedMember name="Sales last year"
                  hierarchy="[Measures].[Measures]">
  <Formula>
    (
      [Measures].[Sales],
      ParallelPeriod(
        [Time].[Y-W].[Year],
        1,
        [Time].[Y-W].CurrentMember
      )
    )
  </Formula>
</CalculatedMember>

<CalculatedMember name="weeks2015_sofar"
                  hierarchy="[Time].[Y-W]">
  <Formula>
    Aggregate(weeks2015)
  </Formula>
</CalculatedMember>

<NamedSet name="weeks2015">
  <Formula>
    {[Time].[2015].[1] : CurrentDateMember([Time].[Y-W], '[Time]\.[Y-W]\.[yyyy]\.[ww]')}
  </Formula>
</NamedSet>

...

When running the following MDX query:

SELECT
  NON EMPTY {[Measures].[Sales], [Measures].[Sales last year]} ON COLUMNS,
  NON EMPTY {weeks2015, weeks2015_sofar} ON ROWS
FROM [Sales]

I get a Mondrian exception: MondrianException: Mondrian Error:Internal error: member [Time].[Y-W].[weeks2015_sofar] not found among its siblings.

Here is the relevant snippet from the log:
Caused by: mondrian.olap.MondrianException: Mondrian Error:Internal error: member [Time].[Y-W].[weeks2015_sofar] not found among its siblings
    at mondrian.resource.MondrianResource$_Def0.ex(MondrianResource.java:984)
    at mondrian.olap.Util.newInternal(Util.java:2536)
    at mondrian.rolap.SmartMemberReader$SiblingIterator.<init>(SmartMemberReader.java:471)
    at mondrian.rolap.SmartMemberReader.getLeadMember(SmartMemberReader.java:308)
    at mondrian.rolap.RolapSchemaReader.getLeadMember(RolapSchemaReader.java:507)
    at mondrian.olap.DelegatingSchemaReader.getLeadMember(DelegatingSchemaReader.java:165)
    at mondrian.olap.fun.ParallelPeriodFunDef.parallelPeriod(ParallelPeriodFunDef.java:159)
    at mondrian.olap.fun.ParallelPeriodFunDef$1.evaluateMember(ParallelPeriodFunDef.java:125)
    at mondrian.calc.impl.MemberArrayValueCalc.evaluate(MemberArrayValueCalc.java:63)
    at mondrian.rolap.RolapEvaluator.evaluateCurrent(RolapEvaluator.java:719)
    at mondrian.rolap.RolapResult.executeStripe(RolapResult.java:1016)
    at mondrian.rolap.RolapResult.executeStripe(RolapResult.java:1100)
    at mondrian.rolap.RolapResult.executeStripe(RolapResult.java:1100)
    at mondrian.rolap.RolapResult.executeBody(RolapResult.java:892)
    at mondrian.rolap.RolapResult.<init>(RolapResult.java:461)
    at mondrian.rolap.RolapConnection.executeInternal(RolapConnection.java:500)
    ... 7 more



When I don't use measures like [Measures].[Sales last year] that rely on ParallelPeriod, but only "regular" measures, I get no errors.
Is there a way to redefine the schema to avoid this and still be able to obtain column sums at the bottom for all measures?
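
One direction that might avoid the crash (an untested sketch, reusing the set and measure names above): keep the running totals on the Measures axis instead of defining a calculated [Time] member, so ParallelPeriod only ever sees real time members:

Code:

WITH
  MEMBER [Measures].[Sales so far] AS
    Aggregate(weeks2015, [Measures].[Sales])
  MEMBER [Measures].[Sales last year so far] AS
    Aggregate(
      Generate(weeks2015,
        {ParallelPeriod([Time].[Y-W].[Year], 1, [Time].[Y-W].CurrentMember)}),
      [Measures].[Sales])
SELECT
  {[Measures].[Sales so far], [Measures].[Sales last year so far]} ON COLUMNS
FROM [Sales]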

Passing parameter value to Kettle transformation

Hello,

in my dashboard the user has the possibility to change the value of an input field, e.g. the report month.
The value of this input field is passed to the linked parameter 'report_month'.
One of my defined data sources is a 'KETTLE query'.

Is it possible to pass the value of the parameter 'report_month' to this Kettle transformation, so that I can read the parameter value with the 'Get variables' step in my Kettle transformation?
I've already tried defining my parameter 'report_month' as a variable and as a parameter in the datasource, but it doesn't work (see attached screenshots).
Only my literal input '$(report_month)' was passed to the Kettle transformation, not the real value of 'report_month'.
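
For reference, in the CDA file behind the dashboard a Kettle data source passes dashboard parameters to the transformation through a <variables> mapping on the connection, and the transformation must declare 'report_month' as a named parameter (Transformation settings > Parameters) for ${report_month} to resolve. A sketch of the relevant CDA fragments (untested; the ids, file name, and output step name are assumptions):

Code:

<DataSources>
  <Connection id="kettle_ds" type="kettle.TransFromFile">
    <KtrFile>report.ktr</KtrFile>
    <!-- map the data-access parameter onto the transformation parameter -->
    <variables datarow-name="report_month" variable-name="report_month"/>
  </Connection>
</DataSources>
<DataAccess id="kettle_query" connection="kettle_ds" type="kettle" access="public">
  <Name>report query</Name>
  <Query>OutputStepName</Query>
  <Parameters>
    <Parameter name="report_month" type="String" default="2015-06"/>
  </Parameters>
</DataAccess>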

I hope I have explained it fairly understandably and that you can help me.

Thank you in advance and best regards!

Pentaho PDI Plugin help

Hi all!

My name is Mauricio and I'm from Uruguay. I'm new to Pentaho PDI and new to Kettle development.

I know I'm too new to the forums, but I urgently need some help...

I have developed two plugins for Kettle; they both work great and have no problems...

I recently developed a third one, as we have some special requirements for a socket connection. The thing is, Spoon doesn't want to start. The step doesn't do anything yet; it only has the required methods for the Kettle interfaces.

I can't seem to find any problems, no differences from the already developed ones, at least none I can tell... I can't find a way to see what is going on and why it is failing at startup...

I can see the Spoon splash window, and after that, it disappears...

Please, I really appreciate any help. Thanks a lot.

Upgrade questions (running job in kitchen from zip file)

We are testing an upgrade from PDI 3.0 to 5.2.
On the same server, one version will run our job and the other gives a "Could not read from "zip:file:///usr/local/dwjobs/dwmain/build/dw_build_master.zip!/etl_dim_vendor_main_011.kjb" because it is a not a file." error. I have verified the file exists, and of course it can be run by the 3.0 version.

I read about some changes to the expected location of the simple-jndi folder. Were there any changes to the way a zip file is called for Kitchen that may cause this?

The first job runs as expected, I should say, but the first job calls the job where we are getting the error; perhaps it has to do with the nested jobs?
The command line we are using is:
/usr/local/pentaho5/data-integration/kitchen.sh -file='zip:file:///usr/local/dwjobs/dwmain/build/dw_build_master.zip!dw_build_master_001.kjb' -log=/var/log/pentaho/dw_build_master.log -level=rowlevel;
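
One small thing worth checking (an observation, not a verified fix): the VFS URL in the error message has a slash after the '!', while the one on the command line does not. Commons VFS zip URLs normally take the form zip:file:///path/archive.zip!/entry, so the call may need to be:

Code:

/usr/local/pentaho5/data-integration/kitchen.sh -file='zip:file:///usr/local/dwjobs/dwmain/build/dw_build_master.zip!/dw_build_master_001.kjb' -log=/var/log/pentaho/dw_build_master.log -level=rowlevel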

Split Fields: how to ignore conversion errors?

First of all, I'm just starting with Pentaho Kettle, so I'm sorry if the question sounds too basic.

I'm using the Split Fields step to split a column in my database which has no consistent pattern. Let's say:

tracker                                       value
10_california_red_banana_some_unuseful_data   100
182910mobile_version                          10
87_nevada_blue_orange_                        150
90_arizona_green_grape_android                75
usa_111_countrydata                           0

My field splitter is set to pick out the leading tokens (delimiter _), which means: (1) integer; (2) string; (3) string; (4) string.
As you can see, in the first and fourth rows there is some extra data in the field I want to ignore; I guess the step already drops it when splitting the original field.
My problem is with rows 2 and 5: since they don't match my field splitter parameters, the step throws an unexpected conversion error while converting the first token to an Integer.

Is there a way to configure the step to "handle" these exceptions by ignoring them or logging them somewhere else, but without stopping the transformation?

--
2015/06/15 15:33:09 - Tracker Split.0 - ERROR (version 5.3.0.0-213, build 1 from 2015-02-02_12-17-08 by buildguy) : Unexpected error
2015/06/15 15:33:09 - Tracker Split.0 - ERROR (version 5.3.0.0-213, build 1 from 2015-02-02_12-17-08 by buildguy) : org.pentaho.di.core.exception.KettleValueException:
2015/06/15 15:33:09 - Tracker Split.0 - Error converting value [12525J], when splitting field [tracker]!
2015/06/15 15:33:09 - Tracker Split.0 -
2015/06/15 15:33:09 - Tracker Split.0 - Unexpected conversion error while converting value [id] to an Integer
2015/06/15 15:33:09 - Tracker Split.0 -
2015/06/15 15:33:09 - Tracker Split.0 - id : couldn't convert String to Integer
2015/06/15 15:33:09 - Tracker Split.0 -
2015/06/15 15:33:09 - Tracker Split.0 - id : couldn't convert String to number : non-numeric character found at position 6 for value [12525J]
2015/06/15 15:33:09 - Tracker Split.0 -
2015/06/15 15:33:09 - Tracker Split.0 -
2015/06/15 15:33:09 - Tracker Split.0 -
2015/06/15 15:33:09 - Tracker Split.0 -
2015/06/15 15:33:09 - Tracker Split.0 - at org.pentaho.di.trans.steps.fieldsplitter.FieldSplitter.splitField(FieldSplitter.java:155)
2015/06/15 15:33:09 - Tracker Split.0 - at org.pentaho.di.trans.steps.fieldsplitter.FieldSplitter.processRow(FieldSplitter.java:176)
2015/06/15 15:33:09 - Tracker Split.0 - at org.pentaho.di.trans.step.RunThread.run(RunThread.java:62)
2015/06/15 15:33:09 - Tracker Split.0 - at java.lang.Thread.run(Unknown Source)
2015/06/15 15:33:09 - Tracker Split.0 - Caused by: org.pentaho.di.core.exception.KettleValueException:
2015/06/15 15:33:09 - Tracker Split.0 - Unexpected conversion error while converting value [id] to an Integer
2015/06/15 15:33:09 - Tracker Split.0 -
2015/06/15 15:33:09 - Tracker Split.0 - id : couldn't convert String to Integer
2015/06/15 15:33:09 - Tracker Split.0 -
2015/06/15 15:33:09 - Tracker Split.0 - id : couldn't convert String to number : non-numeric character found at position 6 for value [12525J]
2015/06/15 15:33:09 - Tracker Split.0 -
2015/06/15 15:33:09 - Tracker Split.0 -
2015/06/15 15:33:09 - Tracker Split.0 -
2015/06/15 15:33:09 - Tracker Split.0 - at org.pentaho.di.core.row.value.ValueMetaBase.getInteger(ValueMetaBase.java:1780)
2015/06/15 15:33:09 - Tracker Split.0 - at org.pentaho.di.core.row.value.ValueMetaBase.convertData(ValueMetaBase.java:3537)
2015/06/15 15:33:09 - Tracker Split.0 - at org.pentaho.di.core.row.value.ValueMetaBase.convertDataFromString(ValueMetaBase.java:3771)
2015/06/15 15:33:09 - Tracker Split.0 - at org.pentaho.di.trans.steps.fieldsplitter.FieldSplitter.splitField(FieldSplitter.java:151)
2015/06/15 15:33:09 - Tracker Split.0 - ... 3 more
2015/06/15 15:33:09 - Tracker Split.0 - Caused by: org.pentaho.di.core.exception.KettleValueException:
2015/06/15 15:33:09 - Tracker Split.0 - id : couldn't convert String to Integer
2015/06/15 15:33:09 - Tracker Split.0 -
2015/06/15 15:33:09 - Tracker Split.0 - id : couldn't convert String to number : non-numeric character found at position 6 for value [12525J]
2015/06/15 15:33:09 - Tracker Split.0 -
2015/06/15 15:33:09 - Tracker Split.0 -
2015/06/15 15:33:09 - Tracker Split.0 - at org.pentaho.di.core.row.value.ValueMetaBase.convertStringToInteger(ValueMetaBase.java:1036)
2015/06/15 15:33:09 - Tracker Split.0 - at org.pentaho.di.core.row.value.ValueMetaBase.getInteger(ValueMetaBase.java:1718)
2015/06/15 15:33:09 - Tracker Split.0 - ... 6 more
2015/06/15 15:33:09 - Tracker Split.0 - Caused by: org.pentaho.di.core.exception.KettleValueException:
2015/06/15 15:33:09 - Tracker Split.0 - id : couldn't convert String to number : non-numeric character found at position 6 for value [12525J]
2015/06/15 15:33:09 - Tracker Split.0 -
2015/06/15 15:33:09 - Tracker Split.0 - at org.pentaho.di.core.row.value.ValueMetaBase.convertStringToInteger(ValueMetaBase.java:1028)
2015/06/15 15:33:09 - Tracker Split.0 - ... 7 more
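
A hedged sketch of one workaround (the tracker field name is taken from the sample above): pre-validate rows with a Modified Java Script Value step before the splitter, then route them through a Filter Rows step on the flag, sending non-conforming trackers to a logging branch instead of the splitter. If your version of the Split Fields step supports it, right-click > "Define error handling" on the step would be the more direct route.

Code:

// Modified Java Script Value step: flag rows whose tracker starts with an
// integer id followed by '_' (the pattern the Split Fields step expects).
var matchesPattern = /^\d+_/.test(tracker);

Add matchesPattern as a Boolean output field and filter on it.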

CSS Styling CDE Analyzer Component

Does anybody know how to apply CSS styling to the Analyzer component within a CDE dashboard?

Iframes are tricky enough to style, but I have found that I can run the following jQuery in the Chrome console:

Code:

$("#iframe_analyzer-container").contents().find(".reportArea").css("background-color","blue");

However, I can't get this to work with any of the following:

- pre/post execution events
- a custom JavaScript snippet waiting for the iframe load event

I think it is trying to update the element before it has fully loaded within the iframe, but I'm not 100% sure.
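
A sketch that might address the timing (untested; the selector is reused from the snippet above): bind to the iframe's load event and apply the styling from inside the handler:

Code:

$("#iframe_analyzer-container").on("load", function() {
  // runs only once the document inside the analyzer iframe has finished loading
  $(this).contents().find(".reportArea").css("background-color", "blue");
});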

If this fails, does anybody have any good suggestions on how to achieve this? I can't get my head around how to do div integration of the Analyzer component within CDE.

Thanks as always!

Row Value

Hi,

I need help producing the output below using PDI.

Input:
id date1 value
111 2015-06-11 19
111 2015-06-12
111 2015-06-13 100
111 2015-06-14
111 2015-06-15
111 2015-06-16
112 2015-06-13
112 2015-06-14 105
112 2015-06-15
112 2015-06-16
Output:

id date1 value
111 2015-06-11 19
111 2015-06-12 19
111 2015-06-13 100
111 2015-06-14 100
111 2015-06-15 100
111 2015-06-16 100
112 2015-06-13
112 2015-06-14 105
112 2015-06-15 105
112 2015-06-16 105
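
A hedged sketch of one way to do this fill-down (field names are taken from the sample, rows are assumed to be already sorted by id and date1, and id is assumed to arrive as a string): a Modified Java Script Value step that carries the last non-empty value forward within each id group. The two state variables live in a start script (right-click the script tab and mark it as a start script) so they survive across rows:

Code:

// Start script (runs once, before the first row):
var prevId = null;
var prevValue = null;

// Transform script (runs for every row):
if (id == null || !id.equals(prevId)) {
    prevValue = null;              // new id group: nothing to carry yet
}
if (value == null) {
    value = prevValue;             // fill down the last value seen
} else {
    prevValue = value;             // remember the new value
}
prevId = id;

Define value in the step's output grid with "Replace value" set to Y so the filled value overwrites the input column.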

Connecting to SAP HANA from Pentaho Enterprise Edition

Hi Guys,

I've been trying, without success, to connect to a SAP HANA database from the Pentaho Enterprise Edition trial version.

I've already gone through the link below:

http://scn.sap.com/community/develop...ng-pentaho-pdi

But this doesn't solve my problem. I also want to know how the Java environment setup described there is done, and whether it has to be done for EE as well.
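
For reference, a hedged sketch of what a HANA connection usually needs (host, port, and schema below are placeholders, not values from this thread): copy the HANA JDBC driver ngdbc.jar from the SAP HANA client into the Pentaho lib folder (e.g. data-integration/lib, and the server's lib folder for server-side use), restart, then create a Generic database connection:

Code:

Connection type:     Generic database
Custom driver class: com.sap.db.jdbc.Driver
Custom URL:          jdbc:sap://your-hana-host:30015/?currentschema=MYSCHEMA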

Could any of you please help here?



Thanks,
Prasanna

Getting an error while loading data into Oracle DB

Please help.

An error occurs during the Table output step.

The database connection is getting closed after the first commit.

When I first set the commit interval to 1000, 1000 records were loaded.

When I increased it to 5000, 5000 records were loaded.

Please see the attached log.

Snapshot of the log:

2015/06/15 16:25:38 - UK_3 - Connected to database.
2015/06/15 16:25:38 - Load UKIA_2.0 - Connected to database [UK_3] (commit=1000)
2015/06/15 16:25:38 - UK_3 - Auto commit off
2015/06/15 16:25:38 - UKIA_0 - Step [Get rows from result.0] initialized flawlessly.
2015/06/15 16:25:38 - UKIA_0 - Step [Load UKIA_2.0] initialized flawlessly.
2015/06/15 16:25:38 - UKIA_0 - Transformation has allocated 2 threads and 1 rowsets.
2015/06/15 16:25:38 - Get rows from result.0 - Starting to run...
2015/06/15 16:25:38 - Load UKIA_2.0 - Starting to run...
2015/06/15 16:25:38 - Load UKIA_2.0 - Prepared statement : INSERT INTO UKIA_2 (PTCABS, CLIENT, CASENO, ID_NUM, FRAUDDTE, CATEGORY, ADD_INFO, LOADDATE, OPERATOR_ID, CONFIRMED_IND, ARCHIVE_DATE, TRANSACTION_TYPE, VARIABLE_DATA, TITLE_NARR, FIRST_NAME, SECOND_NAME, SURNAME, ORIG_ADDR_LINE_1, ORIG_ADDR_LINE_2, ORIG_ADDR_LINE_3, ORIG_ADDR_LINE_4, ORIG_ADDR_LINE_5, ORIG_POSTCODE, LNK_HOUSE_OCC_COUNT, LINK_PTCABS, LINK_INF_IND, MULTI_ADDR_IND, TIMESTAMP, FRAUD_CATEGORY, EXTRACT_IND, REFILING_IND, COMPANY_NUMBER, COMPANY_NAME, SUB_CAT_GRP_1, SUB_CAT_GRP_2, SUB_CAT_GRP_3, SUB_CAT_GRP_4, SUB_CAT_GRP_5, DATE_OF_BIRTH, PRODUCT_CODE, SECTION_29_FLAG, CASE_TYPE, SUBJECT_ROLE, SUBJECT_ROLE_QUALIFIER, CASEID, NAMEKEYA, NAMEKEYB, NAMEKEYC, NAMEKEYD, NAMEKEYE, NAMEKEYF, NAMEKEYASTD, NAMEKEYCSTD) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
2015/06/15 16:25:38 - UK_3 - Commit on database connection [UK_3]
2015/06/15 16:25:38 - UK_3 - Rollback on database connection [UK_3]
2015/06/15 16:25:38 - UKIA_0 - ERROR (version 5.0.1-stable, build 1 from 2013-11-15_16-08-58 by buildguy) : Errors detected!
2015/06/15 16:25:38 - Load UKIA_2.0 - ERROR (version 5.0.1-stable, build 1 from 2013-11-15_16-08-58 by buildguy) : Because of an error, this step can't continue:
2015/06/15 16:25:38 - UKIA_0 - ERROR (version 5.0.1-stable, build 1 from 2013-11-15_16-08-58 by buildguy) : Errors detected!
2015/06/15 16:25:38 - Get rows from result.0 - Finished processing (I=0, O=0, R=11901, W=11901, U=0, E=0)
2015/06/15 16:25:38 - Load UKIA_2.0 - ERROR (version 5.0.1-stable, build 1 from 2013-11-15_16-08-58 by buildguy) : org.pentaho.di.core.exception.KettleException:
2015/06/15 16:25:38 - Load UKIA_2.0 - Error inserting row into table [UKIA_2] with values: [E080], [BOAO0010], [E0845E], [ 603], [], [Y], [CIFS0140], [], [I], [], [], [CHATTA], [], [QASIARMEHMOOD], [THE CORE], [], [COUNTY WAY], [], [BARNSLEY], [S70 2JW], [ 1], [E080], [A], [], [], [], [], [ 0], [], [], [], [], [], [], [UKBA], [], [CFR], [S1A], [AL1], [3237413], [2014/04/28 00:00:00.000], [2014/04/28 00:00:00.000], [2017/04/28 00:00:00.000], [1974/08/13 00:00:00.000], [CHATTAQASIARMEHMOOD], [CHTTQSRMHMD], [CHATTA], [CHTT], [C], [C], [null], [null]
2015/06/15 16:25:38 - Load UKIA_2.0 -
2015/06/15 16:25:38 - Load UKIA_2.0 - Unexpected error inserting row
2015/06/15 16:25:38 - Load UKIA_2.0 - -32233
2015/06/15 16:25:38 - Load UKIA_2.0 -
2015/06/15 16:25:38 - Load UKIA_2.0 -
2015/06/15 16:25:38 - Load UKIA_2.0 - at org.pentaho.di.trans.steps.tableoutput.TableOutput.writeToTable(TableOutput.java:445)
2015/06/15 16:25:38 - Load UKIA_2.0 - at org.pentaho.di.trans.steps.tableoutput.TableOutput.processRow(TableOutput.java:128)
2015/06/15 16:25:38 - Load UKIA_2.0 - at org.pentaho.di.trans.step.RunThread.run(RunThread.java:60)
2015/06/15 16:25:38 - Load UKIA_2.0 - at java.lang.Thread.run(Thread.java:744)
2015/06/15 16:25:38 - Load UKIA_2.0 - Caused by: org.pentaho.di.core.exception.KettleDatabaseException:
2015/06/15 16:25:38 - Load UKIA_2.0 - Unexpected error inserting row
2015/06/15 16:25:38 - Load UKIA_2.0 - -32233
2015/06/15 16:25:38 - Load UKIA_2.0 -
2015/06/15 16:25:38 - Load UKIA_2.0 - at org.pentaho.di.trans.steps.tableoutput.TableOutput.writeToTable(TableOutput.java:341)
2015/06/15 16:25:38 - Load UKIA_2.0 - ... 3 more
2015/06/15 16:25:38 - Load UKIA_2.0 - Caused by: java.lang.ArrayIndexOutOfBoundsException: -32233
2015/06/15 16:25:38 - Load UKIA_2.0 - at oracle.jdbc.driver.OraclePreparedStatement.setupBindBuffers(OraclePreparedStatement.java:2677)
2015/06/15 16:25:38 - Load UKIA_2.0 - at oracle.jdbc.driver.OraclePreparedStatement.executeBatch(OraclePreparedStatement.java:9255)
2015/06/15 16:25:38 - Load UKIA_2.0 - at oracle.jdbc.driver.OracleStatementWrapper.executeBatch(OracleStatementWrapper.java:210)
2015/06/15 16:25:38 - Load UKIA_2.0 - at org.pentaho.di.trans.steps.tableoutput.TableOutput.writeToTable(TableOutput.java:315)
2015/06/15 16:25:38 - Load UKIA_2.0 - ... 3 more
2015/06/15 16:25:38 - Load UKIA_2.0 - Signaling 'output done' to 0 output rowsets.
2015/06/15 16:25:38 - UK_3 - Commit on database connection [UK_3]
2015/06/15 16:25:38 - Load UKIA_2.0 - Stopped while putting a row on the buffer
2015/06/15 16:25:38 - Load UKIA_2.0 - Stopped while putting a row on the buffer
2015/06/15 16:25:38 - Load UKIA_2.0 - Stopped while putting a row on the buffer
2015/06/15 16:25:38 - Load UKIA_2.0 - Stopped while putting a row on the buffer
2015/06/15 16:25:38 - Load UKIA_2.0 - Stopped while putting a row on the buffer
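
An editorial note on a likely cause (an assumption drawn from the oracle.jdbc.driver.OraclePreparedStatement.setupBindBuffers frame in the trace, not a verified diagnosis): this ArrayIndexOutOfBoundsException is a known symptom of Oracle JDBC batch-insert trouble rather than of the data itself. Two things worth trying are unticking "Use batch update for inserts" on the Table output step, and swapping the Oracle JDBC jar in data-integration/lib for a newer ojdbc version that matches your database and JDK.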
