Channel: Pentaho Community Forums

Hive JDBC Query too slow: too many fetches after query execution: Kettle Xform


I have set up a Kettle transformation with only one step, "Table Input", which fires a query against a Hive table. The Hive job screenshot is attached.
KettleJob.jpg

The connection sets up fine (except that it only connects to the default database; that is a known issue and there is a JIRA for it). The connection screenshot is attached.
KettleHiveConnection.jpg

It works fine but is very slow. I checked the HiveServer2 logs, and it looks like the query itself executes quickly, but after that a lot of fetchResults() calls are executed and it takes forever for the data to come back. That is fine for a small result set, but for ~300,000 rows it takes about 10 minutes. We are using Cloudera CDH 4.5 (hence the cdh42 Big Data plugin for Kettle), and our network is top of the line. When I execute the same query in Hue (the CDH query portal) from my desktop, the same result set comes back very fast, and no fetchResults() calls appear in the logs.

I checked the Hive JDBC code for cdh42. It looks like the fetch size is hardcoded to 50, and I can't figure out how, or whether, I can pass it down as a parameter.
https://github.com/pentaho/hive/blob...Statement.java

I am conjecturing that the fetch size is the issue, but I cannot say for sure, since I could not confirm that I have the right piece of code above, nor figure out how to build it. It is not set up like the other Pentaho projects on GitHub.
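For reference, here is a minimal plain-JDBC sketch (my own, not Pentaho code) of what I would expect to work if the fetch size were configurable. The HiveServer2 driver class, URL, and table name are assumptions for a CDH-style setup; with the size hardcoded to 50 in the driver, I expect the setFetchSize() call below is simply ignored.

Code:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveFetchSizeProbe {
    public static void main(String[] args) throws Exception {
        // HiveServer2 driver; assumes the Hive JDBC jars are on the classpath.
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:hive2://hiveserver-host:10000/default", "user", "");
             Statement stmt = conn.createStatement()) {
            // Ask for 1000 rows per fetchResults() round trip instead of 50:
            // ~300,000 rows would then need ~300 RPCs instead of ~6,000.
            stmt.setFetchSize(1000);
            try (ResultSet rs = stmt.executeQuery("SELECT * FROM my_table")) {
                long rows = 0;
                while (rs.next()) {
                    rows++;
                }
                System.out.println("Fetched " + rows + " rows");
            }
        }
    }
}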
Can someone please help me out? Everything else about the setup works fine, but the performance is killing me. Thanks!

HiveServer2 log for reference below:
5:46:24.270 PM INFO org.apache.hadoop.hive.ql.exec.Task
2014-02-20 17:46:24,265 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 85.06 sec
5:46:25.285 PM INFO org.apache.hadoop.hive.ql.exec.Task
2014-02-20 17:46:25,281 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 85.06 sec
5:46:26.301 PM INFO org.apache.hadoop.hive.ql.exec.Task
2014-02-20 17:46:26,296 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 85.06 sec
5:46:26.303 PM INFO org.apache.hadoop.hive.ql.exec.Task
MapReduce Total cumulative CPU time: 1 minutes 25 seconds 60 msec
5:46:26.322 PM INFO org.apache.hadoop.hive.ql.exec.Task
Ended Job = job_201402072147_0180
5:46:26.330 PM INFO org.apache.hadoop.hive.ql.exec.FileSinkOperator
Moving tmp dir: hdfs://nameservice1/tmp/hive-hive/hive_2014-02-20_17-45-52_736_1233364556268491863-2/_tmp.-ext-10001 to: hdfs://nameservice1/tmp/hive-hive/hive_2014-02-20_17-45-52_736_1233364556268491863-2/_tmp.-ext-10001.intermediate
5:46:26.338 PM INFO org.apache.hadoop.hive.ql.exec.FileSinkOperator
Moving tmp dir: hdfs://nameservice1/tmp/hive-hive/hive_2014-02-20_17-45-52_736_1233364556268491863-2/_tmp.-ext-10001.intermediate to: hdfs://nameservice1/tmp/hive-hive/hive_2014-02-20_17-45-52_736_1233364556268491863-2/-ext-10001
5:46:26.346 PM INFO org.apache.hadoop.hive.ql.Driver
</PERFLOG method=Driver.execute start=1392936353985 end=1392936386346 duration=32361>
5:46:26.346 PM INFO org.apache.hadoop.hive.ql.Driver
MapReduce Jobs Launched:
5:46:26.347 PM INFO org.apache.hadoop.hive.ql.Driver
Job 0: Map: 5 Cumulative CPU: 85.06 sec HDFS Read: 979418229 HDFS Write: 664486 SUCCESS
5:46:26.347 PM INFO org.apache.hadoop.hive.ql.Driver
Total MapReduce CPU Time Spent: 1 minutes 25 seconds 60 msec
5:46:26.347 PM INFO org.apache.hadoop.hive.ql.Driver
OK
5:46:26.348 PM INFO org.apache.hadoop.hive.ql.Driver
<PERFLOG method=releaseLocks>
5:46:26.365 PM INFO org.apache.hadoop.hive.ql.Driver
</PERFLOG method=releaseLocks start=1392936386348 end=1392936386365 duration=17>
5:46:26.365 PM INFO org.apache.hadoop.hive.ql.Driver
</PERFLOG method=Driver.run start=1392936353955 end=1392936386365 duration=32410>
5:46:26.365 PM INFO org.apache.hive.service.cli.CLIService
SessionHandle [2a370498-ee64-43c2-9add-284f2e61b53b]: executeStatement()
5:46:26.365 PM INFO org.apache.hive.service.cli.CLIService
OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=1be1c7de-1521-4c10-b393-f39457fb2603]: getOperationStatus()
5:46:26.418 PM WARN org.apache.hadoop.hive.conf.HiveConf
DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
5:46:26.420 PM INFO org.apache.hive.service.cli.CLIService
OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=1be1c7de-1521-4c10-b393-f39457fb2603]: getResultSetMetadata()
5:46:26.498 PM WARN org.apache.hadoop.hive.conf.HiveConf
DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
5:46:26.556 PM WARN org.apache.hadoop.hive.conf.HiveConf
DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
5:46:26.565 PM INFO org.apache.hadoop.mapred.FileInputFormat
Total input paths to process : 5
5:46:26.713 PM WARN org.apache.hadoop.hive.conf.HiveConf
DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
5:46:26.780 PM INFO org.apache.hive.service.cli.CLIService
OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=1be1c7de-1521-4c10-b393-f39457fb2603]: fetchResults()
5:46:26.920 PM WARN org.apache.hadoop.hive.conf.HiveConf
DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
5:46:26.987 PM INFO org.apache.hive.service.cli.CLIService
OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=1be1c7de-1521-4c10-b393-f39457fb2603]: fetchResults()
...
(many more fetchResults() calls follow)
...

Test send email

Hi all,
I am trying to send an email with Pentaho Spoon.
I followed the steps here: http://wiki.pentaho.com/display/EAI/Mail_transformation

So I opened a new transformation, dropped in the Mail step, and filled in all the fields necessary for sending an email with an attachment.
When I launch it, there is no error message, but the mail is not sent.

Nothing happens at all.
Is there a step I forgot?

Thank you

Question: how to order DESC/ASC through the parameter

In my listing report in PRD, I want to add a parameter ${p_order_on} with two options, ASC or DESC, i.e. (in MySQL):
===
SELECT ....
....
FROM mydb.mytable
ORDER BY ${p_order_by} ${p_order_on}
===

I can use an integer for the parameter ${p_order_by}, corresponding to the column number, so I can sort by any column without a problem. But ${p_order_on} is either 'ASC' or 'DESC', and it gets interpolated into the SQL as a string, which yields a SQL error, i.e. the following:

ORDER BY 2 "DESC" ---> this should be ORDER BY 2 DESC (without the quotation marks around DESC)

I tried setting the 'Value Type' of the parameter ${p_order_on} to both 'String' and 'Object'; neither works. Adding ${p_order_on} in Query Scripting is also not working (from my understanding, the query script can only change the text at design time, not run time). The parameter in the SQL appears to be a bind placeholder and is always quoted by Pentaho. Is there a good way to remove the quotation marks around 'DESC' at run time, so that I can control ascending or descending sorting?
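To illustrate what I mean, here is a minimal sketch in plain Java (not PRD's API): a bind parameter is always treated as a quoted value, so the only way I can see to get a bare keyword into the statement is to splice it into the SQL text after whitelisting it, something like:

Code:

// Minimal sketch (plain Java, not PRD's API): a bound parameter is always a
// quoted value, so the ASC/DESC keyword has to be spliced into the SQL text.
// Whitelisting the value first keeps the splice safe from injection.
public class OrderDirection {
    static String orderedQuery(int orderByColumn, String direction) {
        if (!"ASC".equalsIgnoreCase(direction) && !"DESC".equalsIgnoreCase(direction)) {
            throw new IllegalArgumentException("direction must be ASC or DESC");
        }
        // orderByColumn is an int, so it is also safe to splice directly
        return "SELECT * FROM mydb.mytable ORDER BY " + orderByColumn + " " + direction;
    }

    public static void main(String[] args) {
        System.out.println(orderedQuery(2, "DESC")); // ... ORDER BY 2 DESC
    }
}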

Many thanks in advance,

Web Designing Online Training in Hyderabad

Web Designing Online Training by Webdesigningonlintraining. We provide excellent Web Designing training by real-time IT industry experts. Our training methodology is unique, and our course content covers all the in-depth critical scenarios. We have completed more than 100 Web Designing batches through our Online Web Designing Training program. Our classes cover all the real-time scenarios and are completely hands-on in each and every session.

Contact Number: India: +91 (0) 8897931177
Email: webdesignonlinetrainings@gmail.com
Web: http://webdesigningonlinetraining.com/

Colors x Legend - readers x options

hello everyone! I tried asking this on the IRC channel and got no reply, so I'm trying here...

[11:52] <joaociocca> if I set reader to use color from datasource, I can get everything fine - but legend will show color code instead of the series used.
[11:53] <joaociocca> if I remove color code from datasource, put color in "colors" advanced property of chart, chart reads series both as series and as colors - instead of using the colors set in advanced properties
[11:54] <joaociocca> and... using multichart I can only have one legend, not individual legends for each chart?
[11:57] <joaociocca> situation 1 - http://screenpresso.com/=VgWhb - datasource feeds the color field, legend shows color codes.
[11:58] <joaociocca> situation 2 - http://screenpresso.com/=PYMzd - datasource doesn't feed colors, colors are set in advanced properties, the color reader has been removed, series is being read as both series AND colors.

Problem with Pentaho on the server

Hello, I currently have Pentaho BI Server 5.1 installed on a server running Windows Server 2012; I have already started it and it works without problems. The trouble is that I cannot get any machine connected to the internet to access the application using the server's IP through port 8080. Help please :(

Metadata Editor Database connection

How can I use a variable or session variable in a Pentaho Metadata Editor database connection, so that I can change the database connection of the domain published on the BI Server according to the BI Server user?

Process each row

How can I process each row individually using PDI?

Suppose I have 10 lines in a flat file. I want to send each row as the body of an email using the Mail step.

So I should finally receive 10 mails for 10 different recipients: the first row goes to the first mail ID, the second row to the second mail ID, and so on.
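Outside PDI, the row-per-mail pattern I am after looks like this minimal sketch (plain JavaMail, assuming the javax.mail jar is available; the SMTP host and the "recipient;body" file layout are assumptions):

Code:

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.Properties;
import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

public class MailPerRow {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("mail.smtp.host", "smtp.example.com"); // assumed SMTP relay

        Session session = Session.getInstance(props);
        // Assumed flat-file layout: one "recipient;body" pair per line.
        try (BufferedReader in = new BufferedReader(new FileReader("rows.txt"))) {
            String line;
            while ((line = in.readLine()) != null) {
                String[] fields = line.split(";", 2);
                MimeMessage msg = new MimeMessage(session);
                msg.setFrom(new InternetAddress("etl@example.com"));
                msg.addRecipient(Message.RecipientType.TO,
                        new InternetAddress(fields[0]));
                msg.setSubject("Row notification");
                msg.setText(fields[1]); // the row content becomes the body
                Transport.send(msg);    // one mail per row
            }
        }
    }
}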

[PME] How to add WHERE filter?

Hi,

I'm wondering whether it's possible in PME 5 to add WHERE conditions in order to filter the subset of data returned by an object in the business view.

For example, I want to add the following objects to the business view:
- All persons (SELECT * FROM person)
- Persons with age > 25 (SELECT * FROM person WHERE age > 25)

Is it possible to specify this in PME?

Best regards,

Fearless

UUID in postgres

I'm getting data from several tables that have a UUID as their primary key.

When I try to read the data and put it into a similar table in the destination Postgres DB, I get an error.

It seems the UUID type is not supported by Pentaho.

Is there a way to accomplish this?
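The only workaround I can think of is to do the uuid <-> text conversion in SQL on both sides, so that the ETL layer only ever sees strings. A minimal plain-JDBC sketch of that idea (not a Kettle step; table and column names are made up):

Code:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class UuidAsText {
    // sourceConn/targetConn are assumed to be open connections to the two DBs.
    static void copyRows(Connection sourceConn, Connection targetConn) throws SQLException {
        // PostgreSQL casts uuid -> text on the way out...
        String extractSql = "SELECT id::text AS id, payload FROM source_table";
        // ...and text -> uuid on the way back in, so only strings cross the ETL.
        String loadSql = "INSERT INTO target_table (id, payload) VALUES (CAST(? AS uuid), ?)";

        try (Statement st = sourceConn.createStatement();
             ResultSet rs = st.executeQuery(extractSql);
             PreparedStatement ps = targetConn.prepareStatement(loadSql)) {
            while (rs.next()) {
                ps.setString(1, rs.getString("id"));
                ps.setString(2, rs.getString("payload"));
                ps.executeUpdate();
            }
        }
    }
}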

Thanks

Daniel

PostgreSQL Bulk Loader problem

Hi,

I have 700 million rows in an operational DB on Postgres. I have to export this data into a data warehouse (also Postgres). From what I have read, the best tool for this task in PDI is the PostgreSQL Bulk Loader step.

My flow in PDI:

Table input --> PostgreSQL Bulk Loader

PDI is installed on my computer. To set up the PostgreSQL Bulk Loader step, for the "psql path" I put the path to the psql utility of my local Postgres install (something like C:\postgres\psql.exe), because I don't know the path to psql on the operational DB or the data warehouse servers.

But it doesn't work; I get errors in PDI from the PostgreSQL Bulk Loader step. From the log, PDI and the psql utility successfully connect to the data warehouse, but I think there is a problem with the COPY ... FROM STDIN statement that PDI generates behind the scenes.
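For comparison, the same COPY ... FROM STDIN mechanism is available directly through the PostgreSQL JDBC driver's CopyManager. This is not what the Bulk Loader step does internally (it shells out to psql); the connection details and file name below are placeholders:

Code:

import java.io.FileReader;
import java.sql.Connection;
import java.sql.DriverManager;
import org.postgresql.PGConnection;
import org.postgresql.copy.CopyManager;

public class CopyFromStdinSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details for the data warehouse.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://dwh-host:5432/dwh", "user", "password")) {
            CopyManager copy = conn.unwrap(PGConnection.class).getCopyAPI();
            // Stream a semicolon-delimited CSV straight into the table,
            // the same COPY ... FROM STDIN that psql would run.
            try (FileReader rows = new FileReader("rows.csv")) {
                long loaded = copy.copyIn(
                        "COPY public.src_gridfeeinvoice FROM STDIN WITH (FORMAT csv, DELIMITER ';')",
                        rows);
                System.out.println("Loaded " + loaded + " rows");
            }
        }
    }
}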

EDIT: this is the error message:

Quote:

PostgreSQL Bulk Loader.0 - Executing command: "C:\Program Files\PostgreSQL\9.3\scripts\runpsql.bat" -U i2bi -h pgsqldbm.server_name.biz -p 5432 Source
2014/02/21 17:14:02 - PostgreSQL Bulk Loader.0 - Launching command: COPY "public".src_gridfeeinvoice ( gridfeeinvoiceid, supplier, gridoperator, invoicenature, gsrn, fromdate, todate, invoicedpointnature, invoicenumber, deliveryperiodid, invoicingdate, messageid, creator, creationdate, latestmodifier, latestmodifdate ) FROM STDIN WITH CSV DELIMITER AS ';' QUOTE AS '"';
2014/02/21 17:14:02 - PostgreSQL Bulk Loader.0 - ERROR {0} psql: warning: extra option « gridfeeinvoiceid, » ignored
2014/02/21 17:14:02 - PostgreSQL Bulk Loader.0 - ERROR {0} psql: warning: extra option « supplier, » ignored
2014/02/21 17:14:02 - PostgreSQL Bulk Loader.0 - ERROR {0} psql: warning: extra option « gridoperator, » ignored
2014/02/21 17:14:02 - PostgreSQL Bulk Loader.0 - ERROR {0} psql: warning: extra option « invoicenature, » ignored
2014/02/21 17:14:02 - PostgreSQL Bulk Loader.0 - ERROR {0} psql: warning: extra option « gsrn, » ignored
2014/02/21 17:14:02 - PostgreSQL Bulk Loader.0 - ERROR {0} psql: warning: extra option « fromdate, » ignored
2014/02/21 17:14:02 - PostgreSQL Bulk Loader.0 - ERROR {0} psql: warning: extra option « todate, » ignored
2014/02/21 17:14:02 - PostgreSQL Bulk Loader.0 - ERROR {0} psql: warning: extra option « invoicedpointnature, » ignored
2014/02/21 17:14:02 - PostgreSQL Bulk Loader.0 - ERROR {0} psql: warning: extra option « invoicenumber, » ignored
2014/02/21 17:14:02 - PostgreSQL Bulk Loader.0 - ERROR {0} psql: warning: extra option « deliveryperiodid, » ignored
2014/02/21 17:14:02 - PostgreSQL Bulk Loader.0 - ERROR {0} psql: warning: extra option « invoicingdate, » ignored
2014/02/21 17:14:02 - PostgreSQL Bulk Loader.0 - ERROR {0} psql: warning: extra option « messageid, » ignored
2014/02/21 17:14:02 - PostgreSQL Bulk Loader.0 - ERROR {0} psql: warning: extra option « creator, » ignored
2014/02/21 17:14:02 - PostgreSQL Bulk Loader.0 - ERROR {0} psql: warning: extra option « creationdate, » ignored
2014/02/21 17:14:02 - PostgreSQL Bulk Loader.0 - ERROR {0} psql: warning: extra option « latestmodifier, » ignored
2014/02/21 17:14:02 - PostgreSQL Bulk Loader.0 - ERROR {0} psql: warning: extra option « latestmodifdate » ignored
2014/02/21 17:14:02 - PostgreSQL Bulk Loader.0 - ERROR {0} psql: warning: extra option « ) » ignored
2014/02/21 17:14:02 - PostgreSQL Bulk Loader.0 - ERROR {0} psql: warning: extra option « FROM » ignored
2014/02/21 17:14:02 - PostgreSQL Bulk Loader.0 - ERROR {0} psql: warning: extra option « STDIN » ignored
2014/02/21 17:14:02 - PostgreSQL Bulk Loader.0 - ERROR {0} psql: warning: extra option « WITH » ignored
2014/02/21 17:14:02 - PostgreSQL Bulk Loader.0 - ERROR {0} psql: warning: extra option « CSV » ignored
2014/02/21 17:14:02 - PostgreSQL Bulk Loader.0 - ERROR {0} psql: warning: extra option « DELIMITER » ignored
2014/02/21 17:14:02 - PostgreSQL Bulk Loader.0 - ERROR {0} psql: warning: extra option « AS » ignored
2014/02/21 17:14:02 - PostgreSQL Bulk Loader.0 - ERROR {0} psql: warning: extra option « ';' » ignored
2014/02/21 17:14:02 - PostgreSQL Bulk Loader.0 - ERROR {0} psql: warning: extra option « QUOTE » ignored
2014/02/21 17:14:02 - PostgreSQL Bulk Loader.0 - ERROR {0} psql: warning: extra option « AS » ignored
2014/02/21 17:14:02 - PostgreSQL Bulk Loader.0 - ERROR {0} psql: warning: extra option « ''; -U RLY;0082014062;;2010/06/30 » ignored
2014/02/21 17:14:02 - PostgreSQL Bulk Loader.0 - ERROR {0} psql: warning: extra option « 00:00:00;5398873;system;;system; » ignored
2014/02/21 17:14:02 - PostgreSQL Bulk Loader.0 - ERROR {0} psql: warning: extra option « -d » ignored
2014/02/21 17:14:02 - PostgreSQL Bulk Loader.0 - ERROR {0} psql: warning: extra option « 398852;system;;system; » ignored
2014/02/21 17:14:02 - PostgreSQL Bulk Loader.0 - ERROR {0} psql: warning: extra option « -p » ignored
2014/02/21 17:14:02 - PostgreSQL Bulk Loader.0 - ERROR {0} psql: warning: extra option « 010/06/30 » ignored
2014/02/21 17:14:02 - PostgreSQL Bulk Loader.0 - ERROR {0} psql: warning: extra option « 00:00:00;5398861;system;;system; » ignored
2014/02/21 17:14:04 - PostgreSQL Bulk Loader.0 - ERROR {0} psql: could not translate host name « COPY » to address: Unknown server error
2014/02/21 17:14:04 - PostgreSQL Bulk Loader.0 - OUTPUT {0} Server [localhost]: Database [postgres]: Port [5432]: Username [postgres]: Press any key to continue...
2014/02/21 17:14:04 - PostgreSQL Bulk Loader.0 - ERROR (version 5.0.1-stable, build 1 from 2013-11-15_16-08-58 by buildguy) : Error in step
2014/02/21 17:14:04 - PostgreSQL Bulk Loader.0 - ERROR (version 5.0.1-stable, build 1 from 2013-11-15_16-08-58 by buildguy) : org.pentaho.di.core.exception.KettleException:
2014/02/21 17:14:04 - PostgreSQL Bulk Loader.0 - Error serializing rows of data to the psql command
2014/02/21 17:14:04 - PostgreSQL Bulk Loader.0 - The pipe is being closed
2014/02/21 17:14:04 - PostgreSQL Bulk Loader.0 -
2014/02/21 17:14:04 - PostgreSQL Bulk Loader.0 - at org.pentaho.di.trans.steps.pgbulkloader.PGBulkLoader.writeRowToPostgres(PGBulkLoader.java:450)
2014/02/21 17:14:04 - PostgreSQL Bulk Loader.0 - at org.pentaho.di.trans.steps.pgbulkloader.PGBulkLoader.processRow(PGBulkLoader.java:316)
2014/02/21 17:14:04 - PostgreSQL Bulk Loader.0 - at org.pentaho.di.trans.step.RunThread.run(RunThread.java:60)
2014/02/21 17:14:04 - PostgreSQL Bulk Loader.0 - at java.lang.Thread.run(Unknown Source)
2014/02/21 17:14:04 - PostgreSQL Bulk Loader.0 - Caused by: java.io.IOException: The pipe is being closed
2014/02/21 17:14:04 - PostgreSQL Bulk Loader.0 - at java.io.FileOutputStream.writeBytes(Native Method)
2014/02/21 17:14:04 - PostgreSQL Bulk Loader.0 - at java.io.FileOutputStream.write(Unknown Source)
2014/02/21 17:14:04 - PostgreSQL Bulk Loader.0 - at java.io.BufferedOutputStream.flushBuffer(Unknown Source)
2014/02/21 17:14:04 - PostgreSQL Bulk Loader.0 - at java.io.BufferedOutputStream.write(Unknown Source)
2014/02/21 17:14:04 - PostgreSQL Bulk Loader.0 - at java.io.FilterOutputStream.write(Unknown Source)
2014/02/21 17:14:04 - PostgreSQL Bulk Loader.0 - at org.pentaho.di.trans.steps.pgbulkloader.PGBulkLoader.writeRowToPostgres(PGBulkLoader.java:361)
2014/02/21 17:14:04 - PostgreSQL Bulk Loader.0 - ... 3 more
2014/02/21 17:14:04 - PostgreSQL Bulk Loader.0 - Finished processing (I=0, O=162, R=163, W=162, U=0, E=1)
2014/02/21 17:14:04 - gridfeeinvoice - gridfeeinvoice
2014/02/21 17:14:04 - gridfeeinvoice - gridfeeinvoice
2014/02/21 17:14:04 - Table input.0 - ERROR (version 5.0.1-stable, build 1 from 2013-11-15_16-08-58 by buildguy) : Unexpected error
2014/02/21 17:14:04 - Table input.0 - ERROR (version 5.0.1-stable, build 1 from 2013-11-15_16-08-58 by buildguy) : org.pentaho.di.core.exception.KettleDatabaseException:
2014/02/21 17:14:04 - Table input.0 - Couldn't get row from result set
2014/02/21 17:14:04 - Table input.0 - ERROR: canceling statement due to user request
2014/02/21 17:14:04 - Table input.0 -
2014/02/21 17:14:04 - Table input.0 - at org.pentaho.di.core.database.Database.getRow(Database.java:2302)
2014/02/21 17:14:04 - Table input.0 - at org.pentaho.di.core.database.Database.getRow(Database.java:2270)
2014/02/21 17:14:04 - Table input.0 - at org.pentaho.di.trans.steps.tableinput.TableInput.processRow(TableInput.java:153)
2014/02/21 17:14:04 - Table input.0 - at org.pentaho.di.trans.step.RunThread.run(RunThread.java:60)
2014/02/21 17:14:04 - Table input.0 - at java.lang.Thread.run(Unknown Source)
2014/02/21 17:14:04 - Table input.0 - Caused by: org.postgresql.util.PSQLException: ERROR: canceling statement due to user request
2014/02/21 17:14:04 - Table input.0 - at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2101)
2014/02/21 17:14:04 - Table input.0 - at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1834)
2014/02/21 17:14:04 - Table input.0 - at org.postgresql.core.v3.QueryExecutorImpl.fetch(QueryExecutorImpl.java:2036)
2014/02/21 17:14:04 - Table input.0 - at org.postgresql.jdbc2.AbstractJdbc2ResultSet.next(AbstractJdbc2ResultSet.java:1821)
2014/02/21 17:14:04 - Table input.0 - at org.pentaho.di.core.database.Database.getRow(Database.java:2290)
2014/02/21 17:14:04 - Table input.0 - ... 4 more
2014/02/21 17:14:04 - Table input.0 - Finished reading query, closing connection.
2014/02/21 17:14:04 - Table input.0 - Finished processing (I=20000, O=0, R=0, W=19999, U=0, E=1)
2014/02/21 17:14:04 - gridfeeinvoice - ERROR (version 5.0.1-stable, build 1 from 2013-11-15_16-08-58 by buildguy) : Errors detected!
2014/02/21 17:14:04 - Spoon - The transformation has finished!!
2014/02/21 17:14:04 - gridfeeinvoice - ERROR (version 5.0.1-stable, build 1 from 2013-11-15_16-08-58 by buildguy) : Errors detected!
2014/02/21 17:14:04 - gridfeeinvoice - ERROR (version 5.0.1-stable, build 1 from 2013-11-15_16-08-58 by buildguy) : Errors detected!
2014/02/21 17:14:04 - gridfeeinvoice - gridfeeinvoice
2014/02/21 17:14:04 - gridfeeinvoice - gridfeeinvoice
Thank you.

Pentaho reports basic questions

Hi Guys..
I just joined the Pentaho community and want to congratulate the team on such a wonderful product.
I have a question regarding sending reports to email IDs:
Can we send the chart directly in the email itself instead of as a PDF?
Can we send a report to a list of people (using a CSV import)?

Please suggest some tutorials for beginners.

How to use a formula in Metadata Editor

I want to use a variable in a Metadata Editor formula. If anyone knows the way, please suggest it.

Facing trouble launching spoon.sh on Amazon EC2 Linux

I am quite new to Linux and Amazon EC2.
I configured JAVA_HOME by following the two links below:
How to know JAVA_HOME_Variable
bash_profile
So the current entries in my bash_profile are:
export JAVA_HOME=/usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/java
export PATH=$PATH:/usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin

Now when I try to launch ./spoon.sh, it gives me the following error:
Caused by: java.lang.UnsatisfiedLinkError: Could not load SWT library. Reasons:

no swt-pi-gtk-4234 in java.library.path

no swt-pi-gtk in java.library.path

/home/nifty/.swt/lib/linux/x86/libswt-pi-gtk-4234.so: libgtk-x11-2.0.so.0: cannot open shared object file: No such file or directory

Can't load library: /home/nifty/.swt/lib/linux/x86/libswt-pi-gtk.so

Can somebody suggest what is wrong?

URL for Toromiro, FileVault or other JCR browsers?

Pentaho 5 is great, but we'd really like to be able to access the contents of "pentaho-solutions" from outside Pentaho, to take advantage of JCR versioning and other goodies.

FileVault looks really nice, and Toromiro would be useful for debugging, but as far as we know they both need a URL to the JCR API.

Does Pentaho offer the usual Jackrabbit service API somewhere? We've looked around and we could only find the /api/repository/... API, which does not seem to be the API that Toromiro or FileVault are looking for.

difference ce-5.0.1 / ce-5.0.1.A

Can Pentaho Reporting do bursting...?

Hi guys, can we do report bursting in Pentaho? I am very confused here: some say it is possible, some say it is only possible in the Enterprise version. Please, someone clear up my doubt. I am using Pentaho 5.0.1.
Thanks
Prabal

IndexOutOfBoundsException when using a parent-child hierarchy

Hi all,

I am using Mondrian 3.6.1 as a basic XMLA server, and I get the following error when I send an MDSCHEMA_HIERARCHIES discover request. I get the same error when I use WebPivotTable or another XMLA client to access the cube data.

Code:

mondrian.xmla.XmlaException: Mondrian Error:XMLA Discover unparse results error
    at mondrian.xmla.XmlaHandler.discover(XmlaHandler.java:2872)
    at mondrian.xmla.XmlaHandler.process(XmlaHandler.java:670)
    at mondrian.xmla.impl.DefaultXmlaServlet.handleSoapBody(DefaultXmlaServlet.java:506)
    at mondrian.xmla.XmlaServlet.doPost(XmlaServlet.java:317)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:641)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:722)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:305)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:224)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:169)
    at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:472)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:168)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:98)
    at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:927)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)
    at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:987)
    at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:579)
    at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:309)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:722)
Caused by: java.lang.IndexOutOfBoundsException: Index: 1, Size: 1
    at java.util.ArrayList.rangeCheck(ArrayList.java:604)
    at java.util.ArrayList.get(ArrayList.java:382)
    at mondrian.olap4j.MondrianOlap4jExtra.isHierarchyParentChild(MondrianOlap4jExtra.java:183)
    at mondrian.xmla.RowsetDefinition$MdschemaHierarchiesRowset.populateHierarchy(RowsetDefinition.java:4568)
    at mondrian.xmla.RowsetDefinition$MdschemaHierarchiesRowset.populateDimension(RowsetDefinition.java:4490)
    at mondrian.xmla.RowsetDefinition$MdschemaHierarchiesRowset.populateCube(RowsetDefinition.java:4470)
    at mondrian.xmla.RowsetDefinition$MdschemaHierarchiesRowset.populateCatalog(RowsetDefinition.java:4452)
    at mondrian.xmla.RowsetDefinition$MdschemaHierarchiesRowset.populateImpl(RowsetDefinition.java:4440)
    at mondrian.xmla.Rowset.populate(Rowset.java:221)
    at mondrian.xmla.Rowset.unparse(Rowset.java:193)
    at mondrian.xmla.XmlaHandler.discover(XmlaHandler.java:2866)
    ... 21 more

I realized that the error appears when I use a parent-child hierarchy. This is the example schema (I followed the "writing a schema" section of the Mondrian documentation, but using my own example database):

Code:

<?xml version="1.0" encoding="iso-8859-1"?>
<Schema name="esquemavalor">
    <Cube name="valor">
        <Table name="valor"/>
        <Dimension name="persona" foreignKey="idnodopersona">
            <Hierarchy hasAll="true" allMemberName="todasLasPersonas" primaryKey="idnodopersona">
                <Table name="nodopersona"/>
                <Level name="persona" uniqueMembers="true" type="Numeric" column="idnodopersona" nameColumn="nombre" parentColumn="idnodopersonapadre" nullParentValue="0">
                    <Closure parentColumn="idnodopersonaancestro" childColumn="idnodopersona">
                        <Table name="cierrenodopersona"/>
                    </Closure>
                    <Property name="codigo" column="codigo"/>
                </Level>
            </Hierarchy>
        </Dimension>
        <Dimension name="oficina" foreignKey="idoficina">
            <Hierarchy hasAll="true" allMemberName="todasLasOficinas" primaryKey="idoficina">
                <Table name="oficina"/>
                <Level name="oficina" column="idoficina" uniqueMembers="true" nameColumn="nombre"/>
            </Hierarchy>
        </Dimension>
        <Dimension name="fecha" foreignKey="idfecha" type="TimeDimension">
            <Hierarchy hasAll="true" allMemberName="todasLasFechas" primaryKey="idfecha">
                <Table name="fecha"/>
                <Level name="Anio" column="anio" uniqueMembers="true" levelType="TimeYears" type="Numeric"/>
                <Level name="Mes" column="mes" uniqueMembers="false" levelType="TimeMonths" type="Numeric"/>
                <Level name="Dia" column="dia" uniqueMembers="false" levelType="TimeDays" type="Numeric"/>
            </Hierarchy>
        </Dimension>
        <Measure name="Ventas" column="ventas" aggregator="sum"/>
        <Measure name="Gastos" column="gastos" aggregator="sum"/>
    </Cube>
</Schema>

The closure table ("cierrenodopersona" in my case) is filled as the documentation describes. I have tested this on both Mondrian 3.5.0 and 3.6.1, with the same result: the IndexOutOfBoundsException at the same method.


Any help would be appreciated. Thanks in advance.

Can we use Distribute rows to improve performance of data transfer?

Hi,

I created a transformation in which I insert data from an Oracle table into MySQL.

My steps are:

Table input --> Insert/Update

But when I change the same transformation by adding multiple Insert/Update steps fed from the single Table input, with "Distribute rows" enabled, I get better performance, roughly as the sketch below illustrates.

Are there any problems with using steps like this?
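For what it's worth, the speedup makes sense to me as plain round-robin parallelism: the single Table input deals rows out to the copies in turn, and each copy does its own upserts. A minimal sketch of that pattern (ordinary Java threads and queues, not PDI internals):

Code:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class DistributeRowsSketch {
    public static void main(String[] args) throws Exception {
        int copies = 4; // number of Insert/Update copies
        @SuppressWarnings("unchecked")
        BlockingQueue<String>[] hops = new BlockingQueue[copies];
        for (int i = 0; i < copies; i++) {
            hops[i] = new ArrayBlockingQueue<>(1000);
            final BlockingQueue<String> hop = hops[i];
            new Thread(() -> {
                try {
                    String row;
                    while (!(row = hop.take()).equals("<done>")) {
                        // each copy would upsert its rows over its own DB connection
                        System.out.println(Thread.currentThread().getName() + " -> " + row);
                    }
                } catch (InterruptedException ignored) {
                }
            }, "insert-update-" + i).start();
        }
        // "Distribute rows" = deal rows out round-robin, one hop per copy.
        for (int r = 0; r < 20; r++) {
            hops[r % copies].put("row-" + r);
        }
        for (BlockingQueue<String> hop : hops) {
            hop.put("<done>"); // poison pill so each copy shuts down
        }
    }
}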

can we create dashboards with community edition of Pentaho (BI Version 4.5)

Hi All,

I am new to Pentaho and need to create drill-down dashboards with the Pentaho BI Community Edition. Can we do this with Pentaho BI 4.5 CE?

Regards,
S.A. Mateen