Channel: Pentaho Community Forums

PDI 5.3 not processing kettle.properties file variables

Hi,

When I start Spoon in PDI 5.3, it does not automatically log in to my repository using the connection values in my kettle.properties file, and it does not display any values for the kettle variables defined in this file either.

In my older version, 3.2, I have no problems defining kettle variables and using their values in my ETL processing.
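
For reference, a minimal sketch of the file being described, with illustrative variable names: kettle.properties is a plain key=value file that PDI reads from the .kettle folder in the user's home directory (or from the directory KETTLE_HOME points to). KETTLE_REPOSITORY, KETTLE_USER and KETTLE_PASSWORD are the names documented for repository auto-login, though they are documented as environment variables, so it may be worth checking whether 5.3 still picks them up from this file.

# Read from %USERPROFILE%\.kettle\kettle.properties on Windows,
# or $HOME/.kettle/kettle.properties, unless KETTLE_HOME is set.
DB_HOST=localhost
DB_PORT=3306

# Repository auto-login values (documented as environment variables):
KETTLE_REPOSITORY=my_repo
KETTLE_USER=admin
KETTLE_PASSWORD=secret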

Has anything changed with newer versions?

Thanks for any help.

Does WEKA only work for specific PMML versions?

I would like to use WEKA to score a model I've been given in PMML version 4.2. I'm trying to test out WEKA, just making a single PMMLFactory.getPMMLModel call, and when I use my PMML file I get this exception:
weka.core.UnassignedClassException: Class index is negative (not set)!

I've found some sample PMML files online and my code is able to read them with no exceptions. These are versions 3.2 and 4.1, so I am wondering whether WEKA simply does not support PMML version 4.2. I have not found any other 4.2 files to test with.
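
For context, a minimal sketch of the test described above, assuming weka.jar is on the classpath (the file name is a placeholder):

import weka.core.pmml.PMMLFactory;
import weka.core.pmml.PMMLModel;

public class PmmlLoadTest {
    public static void main(String[] args) throws Exception {
        // Parses the PMML document; this is the call that throws
        // weka.core.UnassignedClassException for the 4.2 file in question.
        PMMLModel model = PMMLFactory.getPMMLModel("model-4.2.xml");
        System.out.println("Loaded: " + model.getClass().getName());
    }
}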

Thanks!

BI Server 5.4.0.1 connect to MSSQL 2008R2 Express - ConnectionImpl.ERROR_0009

Hi All,

I am new to using the Pentaho solution. I am exploring the Pentaho CE BI Server, and I want to create a report from my local MSSQL 2008 R2 Express database. I log in to the BA user console and create a new connection using the MS SQL (native) and JDBC (native) data types. I have downloaded both MS SQL JDBC 4.0 and 4.1 and used them alternately, but every time I get the same error: ConnectionImpl.ERROR_0009 - connect to DB [master] failed.

I have copied the jdbc4 jar to the tomcat/lib directory and restarted the BA server, but I get the same error. Can anyone help me overcome this? It would be a great help to me.
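
One way to narrow this down is to test the driver and database outside the BI Server entirely. A minimal sketch, assuming the sqljdbc4 jar is on the classpath (host, instance name and credentials are placeholders). Note that SQL Server Express installs as a named instance with TCP/IP disabled by default, which is a common cause of this kind of failure:

import java.sql.Connection;
import java.sql.DriverManager;

public class MssqlConnectTest {
    public static void main(String[] args) throws Exception {
        // Named-instance URL typical of Express editions; TCP/IP and the
        // SQL Server Browser service must be enabled in Configuration Manager.
        String url = "jdbc:sqlserver://localhost\\SQLEXPRESS;databaseName=master";
        try (Connection con = DriverManager.getConnection(url, "user", "password")) {
            System.out.println("Connected: " + !con.isClosed());
        }
    }
}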


Thanks,

Swati

Prepared statement contains too many variables

Hi,

I am using the details below for my ETL, and I am trying to load data for 40 columns. I am getting the error "Prepared statement contains too many placeholders" at the Table Output step.

Kettle: PDI CE 5.3, file repository
Database: MySQL (InnoDB)
Java: 1.8

I enabled the option "Use batch update for inserts" on the Table Output step with a commit size of 2000 (I hope 2000 is a sufficient commit size). If I disable it, my ETLs run fine, but if I enable it I get the error. In total I have just 9000 records in my source. Most developers say that enabling "Use batch update for inserts" gives better performance. How can I fix this issue while keeping the option enabled? Could you please help me?
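
A hedged observation on the arithmetic: MySQL caps a single prepared statement at 65,535 placeholders (the parameter count is a 16-bit value). If the JDBC connection has rewriteBatchedStatements=true, a 2000-row batch over 40 columns is rewritten into one multi-row INSERT with 2000 x 40 = 80,000 placeholders, which exceeds that cap. If that is what is happening here, a commit size below 65,535 / 40, roughly 1,638 (say 1,000), should let batching work.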

Thank you

Report Header

Hi,

Just a quick question.

Can we have 2 report headers in 1 report in Pentaho Report Designer?



Thanks

Restrict Data for Users

Is it possible to restrict specific users to see only their relevant data? For example:

If we have three users, EuropeManager, AsiaManager, and GlobalManager,
and the data model below, and we want to restrict what each user can see across the whole dashboard as follows:

EuropeManager sees only Europe and the information relevant to Europe
AsiaManager sees only Asia and the information relevant to Asia
GlobalManager sees all the data without restriction


REGION    Sales
Europe    2,000
Asia      3,000
America   4,000
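
If the cubes behind the dashboard are served by Mondrian, one common approach is member grants in the schema, with each user mapped to a role. A minimal sketch with hypothetical cube and hierarchy names (how well this restricts the dashboard also depends on how the dashboard queries the data); GlobalManager would simply get a role with full access:

<Role name="EuropeManager">
  <SchemaGrant access="none">
    <CubeGrant cube="Sales" access="all">
      <HierarchyGrant hierarchy="[Region]" access="custom" rollupPolicy="partial">
        <MemberGrant member="[Region].[Europe]" access="all"/>
      </HierarchyGrant>
    </CubeGrant>
  </SchemaGrant>
</Role>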


Kind Regards,

Alejandro

Community Edition Dashboard shows nothing

I am facing issues with building dashboards. I need to show aggregated values from a MySQL database. I have written the query and the connection is successful, but none of the charts I use are shown on the dashboard. I have tested the values returned by the query via CDA as well. Also, what component is used to show single aggregated values like total sales? Please help me; if this thread does not belong in this section, please point me to the correct one.

Waiting desperately for responses. :(:(:(

Shahzad

How to get the transactional data from Mondrian for an MDX query in C#

Hi Team,

I have configured the XML file to define the cubes, dimensions and measures, and I am able to retrieve pivot data from my C# application through AdomdDataAdapter by sending MDX queries to the Mondrian application.

Now I want to retrieve the transactional data (the raw rows from the data warehouse database) behind that MDX query through Mondrian, along with the pivot data. How can I achieve this?

I found the link below, which converts the MDX query to a SELECT statement for Mondrian: http://wiki.openbravo.com/wiki/Modul...ction#Mondrian

How can I do this in Mondrian itself?
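
One avenue worth testing: Mondrian implements the MDX DRILLTHROUGH statement, which returns the underlying fact rows for a cell. A sketch with hypothetical cube and member names (whether AdomdDataAdapter surfaces the returned rowset over XMLA is something to verify):

DRILLTHROUGH MAXROWS 1000
SELECT {[Measures].[Sales]} ON COLUMNS,
       {[Region].[Europe]} ON ROWS
FROM [SalesCube]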
Thank you.

Regards
Vishwanath

Visualise

I am new to Pentaho. I am trying to visualise an output I have created in Spoon, but I keep getting the following error: "Invalid transformation step or job entry selected". Do I need a specific step to be able to visualise my data?

Transformation usually gets stuck

Hi:

I am running Kettle 4.3. I have a transformation with 10 streams running from source SQL Server input tables to 10 SQL Server output tables (with JDBC connections on either end), with a "Select fields" step in between. Whether I run it from Spoon or from a batch file, it usually gets stuck at some random point after completing one of the table output steps (it can be a different one each time), and fails to kick off the other table input steps. Sometimes the transformation runs to completion.

I don't really want to attach the transformation to this thread as there is some information in it I don't want to make public. Could you just let me know what I should look for to troubleshoot or settings I can try changing?

Cheers,
B

Save Report with Parameters

I have a parameterized report in a public directory. I want users to be able to run the report with parameters they select, and then perform a Save As into their own folder. How would I enable this? If I run the report and then go to File -> Save As, the option is grayed out. Any suggestions?

Can't start BI

Hi

After starting the bat file:

C:\biserver-ce>start-pentaho.bat

I get this response:


DEBUG: Using PENTAHO_JAVA_HOME
DEBUG: _PENTAHO_JAVA_HOME=C:\Program Files (x86)\Java\jre1.8.0_51
DEBUG: _PENTAHO_JAVA=C:\Program Files (x86)\Java\jre1.8.0_51\bin\java.exe
Using CATALINA_BASE: "C:\biserver-ce\tomcat"
Using CATALINA_HOME: "C:\biserver-ce\tomcat"
Using CATALINA_TMPDIR: "C:\biserver-ce\tomcat\temp"
Using JRE_HOME: "C:\Program Files (x86)\Java\jre1.8.0_51"
Using CLASSPATH: "C:\biserver-ce\tomcat\bin\bootstrap.jar"

Then a window opens and closes immediately. The tomcat\logs directory contains only a desktop.ini file. Are there any log files I can check to understand what went wrong?
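
One way to see the error before the window closes is to run Tomcat in the foreground, which is standard Tomcat usage rather than anything Pentaho-specific (paths assume the layout shown above):

cd C:\biserver-ce\tomcat\bin
catalina.bat run

The startup messages then stay in the current console; a JRE mismatch or a missing environment variable usually shows up immediately this way.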

Any help would be much appreciated.

Do measures really need to be defined??

Hello,

I have a question as a new user. Having used pivot tables and other BI software, I have to ask: does a measure really have to be defined for every single aggregate type, for every column, in the Mondrian schema if you want all of those as options in your cube? For example, if "sales dollars" is the column from the source table you want to use, isn't there a way in the drag-and-drop editor to just drop "sales dollars" into the cube and select the aggregate type right then and there in the cube itself (sum, max, min, etc.), just as you would in an Excel pivot table?
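
For reference, in a Mondrian schema each aggregate is indeed a separate <Measure> element. A minimal sketch with hypothetical column names:

<Measure name="Sales Dollars (Sum)" column="sales_dollars" aggregator="sum" formatString="#,##0.00"/>
<Measure name="Sales Dollars (Max)" column="sales_dollars" aggregator="max"/>
<Measure name="Sales Dollars (Min)" column="sales_dollars" aggregator="min"/>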

Issue with get file row count

Hi, I am getting the error below when I specify File/Dir as ${OUTPUT_FILE_LOCATION} and the wildcard as ${OUTPUT_FILE_NAME}.csv or ${OUTPUT_FILE_NAME}.*.\csv$.
There are multiple .csv files in the given directory, as well as files named ${OUTPUT_FILE_NAME}.excel and ${OUTPUT_FILE_NAME}.txt.

I would like to get the row count of the file whose name is exactly ${OUTPUT_FILE_NAME}.csv.

2015/08/09 16:04:31 - Write_Final_File - Dispatching started for transformation [Write_Final_File]
2015/08/09 16:04:31 - Get Files Rows Count.0 - ERROR (version 4.3.0-stable, build 16786 from 2012-04-24 14.11.32 by buildguy) : No file(s) specified! Stop processing.
2015/08/09 16:04:31 - Get Files Rows Count.0 - ERROR (version 4.3.0-stable, build 16786 from 2012-04-24 14.11.32 by buildguy) : Error initializing step [Get Files Rows Count]
2015/08/09 16:04:31 - Write_Final_File - ERROR (version 4.3.0-stable, build 16786 from 2012-04-24 14.11.32 by buildguy) : Step [Get Files Rows Count.0] failed to initialize!
2015/08/09 16:04:31 - Write Final File - ERROR (version 4.3.0-stable, build 16786 from 2012-04-24 14.11.32 by buildguy) : Unable to prepare for execution of the transformation
2015/08/09 16:04:31 - Write Final File - ERROR (version 4.3.0-stable, build 16786 from 2012-04-24 14.11.32 by buildguy) : org.pentaho.di.core.exception.KettleException:
2015/08/09 16:04:31 - Write Final File - ERROR (version 4.3.0-stable, build 16786 from 2012-04-24 14.11.32 by buildguy) : We failed to initialize at least one step. Execution can not begin!
2015/08/09 16:04:31 - Write Final File - ERROR (version 4.3.0-stable, build 16786 from 2012-04-24 14.11.32 by buildguy) :


Attachment: streamTransformation.jpg

Any suggestions?

Kettle version: 4.3
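
Two hedged observations: the Wildcard field in this step takes a Java regular expression rather than a glob, so matching exactly that file would be ${OUTPUT_FILE_NAME}\.csv (the variable is substituted before the regex is applied, and the dot must be escaped). And since the step fails at initialization with "No file(s) specified!", it is worth confirming that ${OUTPUT_FILE_LOCATION} is actually set at that point; variables set earlier in the same transformation are not visible to its own steps.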

Thanks.

Help with a transformation

I have this case of denormalization, but I don't know how to do it. If someone could help me, I would be very grateful.

ID PRODUCT   ID STORE
P-1          S-1
P-1          S-3
P-1          S-5
P-2          S-7
P-2          S-1
P-3          S-8

I want just one row for each ID PRODUCT, with all the ID STORE values concatenated in one cell, like this:

ID PRODUCT   STORES
P-1          S-1,S-3,S-5
P-2          S-7,S-1
P-3          S-8
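
For what it's worth, this can be done with a single "Group by" step in PDI: group on ID PRODUCT and add an aggregate of type "Concatenate strings separated by ," on ID STORE. The rows must be sorted on ID PRODUCT first (a "Sort rows" step), since "Group by" expects sorted input; the "Memory Group by" step accepts unsorted input if the data fits in memory.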


Thanks in advance

Closed connection while handling error scenarios

Hi,

I have created a transformation that copies 760 records from table A, does a few transformations, and inserts those records into table B. I have also defined error handling for table B which updates a flag in table A to "Y" in case of error.
For all correct records the transformation works fine. For erroneous scenarios, the error handling works fine if there are 11 error records in total,
but if there are more than 12 error records, the transformation fails due to a "Closed connection" error.

I have attached the error log for details.

override variables with parameters

Hi,

We have built a (library) job which we need to call in 2 ways:

1) from a wrapper
2) dynamically from other jobs

The job uses parameters which are needed to run it. The default values of the parameters are variables which can be set in the properties file. These default values are set in the job settings.

ad 1):
the wrapper reads a properties file which sets all the properties as variables. When we run the job through the wrapper, everything works as expected.

ad 2):
When we run the job dynamically from other jobs, it does not seem to pass the parameter values through. The values are set in the 'Parameters' tab of the Job entry within the parent job, but when we run it this way the values do not come through. We tested a bit, and it looks like the parameters end up holding the name of the property instead of its value.

parameter name    default value       value passed dynamically
test              ${test.variable}    field 'test' from a transformation

In the above example, the ${test} variable in the job ends up with the literal value "${test.variable}" instead of the value of the field 'test' from the parent transformation.

Can anybody help us with this?

Oracle Drivers in PDI

When I try to connect to Oracle, I get this error: "Driver class 'oracle.jdbc.driver.OracleDriver' could not be found, make sure the 'Oracle' driver (jar file) is installed."
Is the generic database connectivity not working in this version, PDI 5.4.0.1-130?
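
For what it's worth, PDI does not ship the Oracle JDBC driver, as Oracle's license does not allow redistribution. The usual fix is to download the ojdbc jar matching your JDK (e.g. ojdbc6.jar) from Oracle and copy it into data-integration/lib, then restart Spoon.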

Uploading Data Sources to User Console

Hello all,

I am currently in the process of migrating my company from version 4.8 to 5.3 of Pentaho EE.
After using the migration tool included with 5.3, I noticed that none of my existing data sources for Interactive Reports transferred over from 4.8. I did some investigating and discovered that the .xmi files were living in the \biserver-ee\pentaho-solutions\admin\resources directory for 4.8, which the migration tool does not touch.
It looks like I can upload each of these .xmi files one at a time to the User Console in 5.3, but since I have hundreds of .xmi files, I was wondering if anyone knows of a better way to get this data into the repository?
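One avenue to investigate, stated with hedging since the exact endpoint should be confirmed against the 5.3 documentation: the 5.x BA Server exposes REST APIs (via the data-access plugin) for managing data sources, including metadata imports, so a small script that loops over the .xmi files and posts each one could replace the manual uploads.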
Any help or suggestions are appreciated!

Thank you,

Jenna