Channel: Pentaho Community Forums
Viewing all 16689 articles

Pentaho translation

Can anyone help with which program is recommended for translating Pentaho, and how does it work?

Thank you.

Bad performance in Swing PreviewDialog

Hello,
I'm experiencing very bad performance of the reporting engine embedded in a Java application using the Swing PreviewDialog.
The reports I'm generating aren't that big (not more than 1500 rows). But the layouting process takes a lot of time - I think something is configured wrong on my side ..
I noticed that my EHCache is not configured correctly because the Console prints something like 'Failed to create data cache: null' and 'Unable to create valid cache, returning <null>'
I tried to place the ehcache.xml file somewhere in the project, but this had no effect. Could the EHCache be the reason?
I've read a lot about inline subreports and all those general performance tips - so I think I'm following all the main points.
I hope there is some kind of lazy loading option to tell the PreviewDialog just to display the first page and calculate all the other pages in the background.
Switching from page to page also takes some seconds..

Does anyone have an idea?
Thanks!!

Chart problem

Hi, I have a fact table with two FKs to one dimension table, referring to immigrants and emigrants.

How can I design an MDX query that shows a PieChart with the percentages?

I tried the following one, but didn't obtain any result:

Quote:

with
member [Measures].[NUM_IMMIGRATI] as 'Sum(EXCEPT({[DIM_FLUSSO_MIGRATORIO_INTERNO.LUOGO_FLUSSO_INTERNO].Children}, {[DIM_FLUSSO_MIGRATORIO_INTERNO.LUOGO_FLUSSO_INTERNO].[#null]}), [Measures].[COUNT_PERSONE])'
member [Measures].[NUM_EMIGRATI] as 'Sum(EXCEPT({[DIM_FLUSSO_MIGRATORIO_ESTERNO.LUOGO_FLUSSO_ESTERNO].Children}, {[DIM_FLUSSO_MIGRATORIO_ESTERNO.LUOGO_FLUSSO_ESTERNO].[#null]}), [Measures].[COUNT_PERSONE])'
member [Measures].[TOTALE] as '([Measures].[NUM_IMMIGRATI] + [Measures].[NUM_EMIGRATI])'
select
NON EMPTY {
[Measures].[NUM_IMMIGRATI],
[Measures].[NUM_EMIGRATI],
[Measures].[TOTALE]} ON COLUMNS,
NON EMPTY {[DIM_PROFESSIONE].[ALL].Children} ON ROWS
from [Anagrafe]
where ([DIM_STATO_RECORD].All)
I need your help please... THANKS
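A hedged sketch (untested against this schema, reusing the calculated members defined in the query above) of how percentage values for a pie chart might be derived: Mondrian supports FORMAT_STRING on calculated members, so ratio members can be added alongside the counts:

```
with
member [Measures].[NUM_IMMIGRATI] as '...'  -- definition as in the query above
member [Measures].[NUM_EMIGRATI] as '...'   -- definition as in the query above
member [Measures].[TOTALE] as '([Measures].[NUM_IMMIGRATI] + [Measures].[NUM_EMIGRATI])'
member [Measures].[PCT_IMMIGRATI] as
  '[Measures].[NUM_IMMIGRATI] / [Measures].[TOTALE]', FORMAT_STRING = '0.00%'
member [Measures].[PCT_EMIGRATI] as
  '[Measures].[NUM_EMIGRATI] / [Measures].[TOTALE]', FORMAT_STRING = '0.00%'
select
NON EMPTY {[Measures].[PCT_IMMIGRATI], [Measures].[PCT_EMIGRATI]} ON COLUMNS,
NON EMPTY {[DIM_PROFESSIONE].[ALL].Children} ON ROWS
from [Anagrafe]
```

The two PCT measures could then feed the pie chart directly.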

How to map fields?

I am using the Table Output step to write to a table with the result of a Join Rows step. If the fields are named the same, I have no problems. But I have problems if I have a field named "A" that I want to write to a field named "B" in the output table.

What I did was to go to Table Output -> Database Fields -> Enter Field Mapping, and then I selected "A" from Source Fields and "B" from "Target Fields" and clicked Add, which put them into the mappings. I then clicked Guess to put the rest of the fields with the same names into the Mappings. However, I get two errors:
  • For field A I get "Fields in input stream, not found in output table: A (integer)"
  • For field B I get "Fields in table, not found in input stream: B (integer)"


What is wrong?

use variable for password in Mail step

Hi:

I am trying to send mail via the Mail step, loading the password from a variable.
It does not work when I use an encrypted password via kettle.properties.
Interestingly, it works when I use the actual password as the value of the variable.

I used encr.sh to create an encrypted password.
ref: http://wiki.bizcubed.com.au/xwiki/bi...and+properties

Any idea why it's not accepting the encrypted password? Thanks!
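For reference, a minimal sketch of the usual setup (the variable name MAIL_PWD and the placeholder value are hypothetical; the "Encrypted " prefix that encr.sh prints must be kept intact in the property value):

```properties
# kettle.properties - hypothetical entry; the value is the complete output of
#   encr.sh -kettle <your_password>
# including the leading "Encrypted " marker.
MAIL_PWD=Encrypted <paste the full encr.sh output here>
```

The Mail step's password field would then contain ${MAIL_PWD}. Whether the step decrypts the value after variable substitution may depend on the Kettle version, which could explain the behavior described above.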

Year Format Changed

I commonly include dates in filenames (without needing the specify-format option) when I save files with the Text file output step. This week I noticed that the date format changed from yyyyMMdd to yyMMdd. I checked that my regional settings have not changed and show the correct format. It was working until one day this week.

Any ideas how to change this back to yyyyMMdd?

Thank you
Ray
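For reference, the two formats seen here correspond to Java SimpleDateFormat patterns (Kettle is Java-based), where yyyy is a four-digit and yy a two-digit year. A minimal sketch with an arbitrary fixed example date:

```java
import java.text.SimpleDateFormat;
import java.util.Calendar;

public class DatePattern {
    // Format a fixed example date (27 July 2013) with the given pattern.
    static String format(String pattern) {
        Calendar c = Calendar.getInstance();
        c.set(2013, Calendar.JULY, 27);
        return new SimpleDateFormat(pattern).format(c.getTime());
    }

    public static void main(String[] args) {
        System.out.println(format("yyyyMMdd")); // 20130727
        System.out.println(format("yyMMdd"));   // 130727
    }
}
```

Explicitly setting the date format mask to yyyyMMdd in the step's filename options, rather than relying on a default, should pin the output down regardless of regional settings.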

Is CDC project alive?

Based on questions in this forum and also my own experience, it doesn't work with Pentaho 4.8.0, Mondrian 3.5, or even CDA. The Hazelcast cache simply isn't used. Trying to manually configure Mondrian to use it leads to all kinds of ClassNotFound exceptions.
Relevant unanswered questions on this forum:

http://forums.pentaho.com/showthread...not-being-used
http://forums.pentaho.com/showthread...External-Cache

Can somebody provide more information on whether it's possible to make it work with Pentaho 4.8.0?

Sqoop import fails with MapR

I have configured Kettle-Spoon 4.4 to use the MapR Hadoop distribution. When I try executing a Pentaho job with a Sqoop import, I get the error below. Any help would be appreciated.

by buildguy) : Error running Sqoop
2013/07/27 19:32:28 - Sqoop Import - ERROR (version 4.4.0-stable, build 17588 from 2012-11-21 16.02.21 by buildguy) : java.lang.RuntimeException: java.lang.RuntimeException: java.io.IOException: Invalid prefix in maprfs://labmapr5:50030 expecting maprfs:///
2013/07/27 19:32:28 - Sqoop Import - ERROR (version 4.4.0-stable, build 17588 from 2012-11-21 16.02.21 by buildguy) : at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:186)
2013/07/27 19:32:28 - Sqoop Import - ERROR (version 4.4.0-stable, build 17588 from 2012-11-21 16.02.21 by buildguy) : at org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
2013/07/27 19:32:28 - Sqoop Import - ERROR (version 4.4.0-stable, build 17588 from 2012-11-21 16.02.21 by buildguy) : at com.cloudera.sqoop.Sqoop.runTool(Sqoop.java:49)
2013/07/27 19:32:28 - Sqoop Import - ERROR (version 4.4.0-stable, build 17588 from 2012-11-21 16.02.21 by buildguy) : at org.pentaho.hadoop.shim.common.CommonSqoopShim.runTool(CommonSqoopShim.java:44)
2013/07/27 19:32:28 - Sqoop Import - ERROR (version 4.4.0-stable, build 17588 from 2012-11-21 16.02.21 by buildguy) : at org.pentaho.di.job.entries.sqoop.AbstractSqoopJobEntry.executeSqoop(AbstractSqoopJobEntry.java:259)
2013/07/27 19:32:28 - Sqoop Import - ERROR (version 4.4.0-stable, build 17588 from 2012-11-21 16.02.21 by buildguy) : at org.pentaho.di.job.entries.sqoop.AbstractSqoopJobEntry$1.run(AbstractSqoopJobEntry.java:231)
2013/07/27 19:32:28 - Sqoop Import - ERROR (version 4.4.0-stable, build 17588 from 2012-11-21 16.02.21 by buildguy) : at java.lang.Thread.run(Thread.java:662)
2013/07/27 19:32:28 - Sqoop Import - ERROR (version 4.4.0-stable, build 17588 from 2012-11-21 16.02.21 by buildguy) : Caused by: java.lang.RuntimeException: java.io.IOException: Invalid prefix in maprfs://labmapr5:50030 expecting maprfs:///


[Attached image: Capture1.PNG]

Mapping step and flows

Hi,

I'm testing this step in order to call my ktr files as modules in my general transformation.

In one of my ktrs, I have to use a "Memory Group By" step which aggregates all rows produced in this sub-mapping module and returns just 1 row.

The problem is that the main transformation is affected by that "Memory Group By" step in the mapping module, as it removes all non-declared output fields.

My question is: is there any way to use the Mapping step in Pentaho as a simple procedure, in the sense of not affecting the main transformation's data flow? That is to say: just a black box which receives n parameters and returns n+1 without affecting the main input parameters and main flow.

Thank you for your help.

Regards

Creating Star Schema in PDI 4.4.0

Hi everyone,

I'm new to ETL tools in general, and of course to Pentaho.
Please, I need to build a fact table and a few dimension tables.
OK, I have done it many times through scheduled database queries, but I think PDI will keep me well organized, so I have the complete ETL process in one place.
So please, could you introduce me a bit?
For example, is there a way to use two or three table inputs and one Excel input, transform the data, and load it into another table?
I'd like to see which components you would use for this.
So, the first step is to read 2-3 tables from DIFFERENT data sources, transform them, merge them with the Excel table, execute some SQL, and put the result in another table.
I have to do about 15 similar jobs, then start the main job (building the fact table).

Thanks in advance,

File input

I am new to ETL and not sure how to do this. I have an input CSV file on my desktop which I use in my transformation. My input step in the transformation is a Text file input, where I browse to the location and add the file manually.

My question is: if changes are made to the input file on the desktop, how do I get the changes reflected in my transformation without browsing to the location? I need to schedule the transformation to run every month and have it automated.

Get a file with SFTP

Hi,

I want to get a file from an FTP server and place it in a local directory/network path by running the job on the server. Is it possible to do this? When I ran the job locally, it was able to write to the local directory/network path, but when I do the same on the server I am not able to. Kindly help me solve this issue.


Thanks in advance

Visualize Named Set on Saiku

Hi all,
I have defined a named set in my schema using the Schema Workbench.
However, it doesn't display in the Saiku analyzer web interface.

Is there a way to do this?

Thanks in advance.
Marco

Upcoming Books & Magazines for Pentaho?

Hi,

Would like to know if there are any upcoming books or magazines for the Pentaho BI Suite or PDI?

At least I would like to know upcoming books for PDI.

Hari

Design Studio workspace info file

Hi All,


Whenever I start my Design Studio it shows me a list of workspaces; can someone tell me which file holds this info?
I have two workspaces for my Design Studio, and this list lets me switch from one workspace to the other.
Can someone help me find out which file stores the workspace path info?


Thanks,
Surya

How to write the HQL to get the records from one specific file for one table?

Hi, hive experts
When importing data into Hive, I used the "Hadoop File Output" step to load the data into hdfs://cloudera:cloudera@135.252.31.26:8020/user/hive/warehouse/user/abc.
Note: 1. The option "Include date in filename?" is set to yes.
2. user is the table name that I created in Hive.
So when I run this *.ktr file every day, I find a new file named like abc_130723 generated in /user/hive/warehouse/user/ (checked via "hadoop fs -ls /user/hive/warehouse/user"), and the records are really added to Hive when I select * from user.
Now I only want to read the records from one day, like those in abc_130724. Can you share how to write the HQL? Thanks/Maria
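One possible approach, offered as a hedged sketch (assuming your Hive version exposes virtual columns): Hive's INPUT__FILE__NAME virtual column holds the path of the file each row was read from, so the records of a single day's file can be selected by filtering on it:

```sql
-- Sketch using the table and file names from the post above.
SELECT *
FROM user
WHERE INPUT__FILE__NAME LIKE '%abc_130724%';
```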

Passing a date parameter to the Google Analytics step

Hi All,

I need to pass a date parameter to the "Start date" and "End date" fields in the Google Analytics plugin.

I tried to set the start/end date from "Get System Info", and also used a "Modified Java Script Value" step to convert the date to a string and pass it to the Set Variables step.

In the next transformation I have a Get Variables step and the Google Analytics plugin, which has the ${start_date} and ${end_Date} parameters in its input fields.

But when I run the job, it gives the error "invalid date format".
I also tried without converting the date field to a string from Get System Info.

Please let me know what I am missing.
It will be a great help.

Thanks,
Sripada Ravindranath
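The Google Analytics Core Reporting API expects start and end dates as yyyy-MM-dd strings, so a mismatch there is a likely cause of the "invalid date format" error. A minimal sketch (the helper name and field wiring are hypothetical) of the conversion that could go in the Modified Java Script Value step:

```javascript
// Convert a Date to the "yyyy-MM-dd" string the Google Analytics step expects.
function toGaDate(d) {
  var y = d.getFullYear();
  var m = ("0" + (d.getMonth() + 1)).slice(-2); // months are 0-based
  var day = ("0" + d.getDate()).slice(-2);
  return y + "-" + m + "-" + day;
}

// Example: feed the result into the field passed to the Set Variables step.
var start_date = toGaDate(new Date(2013, 6, 1)); // "2013-07-01"
```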

Problem when deploying reports in Prod

Hello,
I am trying to deploy my reports to the prod environment, so I created a JNDI Oracle connection in my report, and then a datasource in the Pentaho BI server with the same name.
When I put my report in the pentaho-solutions folder, I get a report without data (SQL queries against the Oracle database), and I get these errors in the log file:

ERROR [org.pentaho.platform.web.servlet.GwtRpcPluginProxyServlet] GwtRpcPluginProxyServlet.ERROR_0003 - RPC invocation on target service org.pentaho.platform.dataaccess.datasource.wizard.service.DataAccessDatabaseConnectionService failed.
com.google.gwt.user.client.rpc.IncompatibleRemoteServiceException: This application is out of date, please click the refresh button on your browser. ( Could not locate requested method 'getDatabaseTypes()' in interface 'org.pentaho.ui.database.gwt.IGwtDatabaseConnectionService' )
at com.google.gwt.user.server.rpc.RPC.decodeRequest(RPC.java:303)

Any ideas? Thanks

Create Expression on a Dimension

Hi all,
I have a question. I want to extract only a subset of values from a table.
I have mobile-phone models in this table and I want to select some of them.
I tried to write a key expression in Schema Workbench, but it does not work. It returns all the mobile models.

Here is the SQL expression:

test_devices.model in ('GT-I9305','GT-I9210','GT-P7320','LG-P936','C6603',
'GT-N7105',
'PadFone 2',
'LT25i',
'GT-N8020',
'LG-E975',
'GT-I8730',
'ASUS Transformer Pad TF300TL',
'HTC Holiday',
'HTC One XL',
'SmartQT10',
'C5303',
'XT925',
'GT-I9505',
'HTC One',
'LG-P875',
'HTC One SV',
'GT-N5120',
'iPhone5',
'iPad4',
'iPad Mini',
'HUAWEI P2-6011',
'SGP321',
'U9202L-1',
'GT-I9205',
'C6503',
'LG-F240S',
'PadFone Infinity',
'ME302KL',
'SM-N9005',
'Vodafone Smart 4G',
'GT-I9295',
'Nexus 7',
'GT-I9195'
)



Any ideas?

Yamas,
Marco

editFile in CDA

Hi,
I'm trying CDA, and especially the editFile function. When I call the URL ../content/cda/editFile?solution=plugin-samples&path=cda_cineca&file=test_cineca_sql-jndi.cda I get a black window in the browser page.

See the attached editfile_error.jpg.

Can you help me ?

Thanks
Giovanni

