Channel: Pentaho Community Forums

How to access the dashboard plugin-samples?

I have CE 5.4 on Windows, and in pentaho-solutions/ there is a plugin-samples directory which appears to contain some good starting points for dashboards.
The problem is that I don't know how to use them. When I browse in the web console, I can't see them, and they don't seem to come up as templates when I create a new dashboard, because the available templates are just layouts with no elements.

What am I missing?

Pentaho Jackrabbit: ERROR [RepositoryImpl] Failed to initialize workspace 'default'

Hi,

I am using Pentaho biserver-ce 5 and have recently encountered the error described in the following post (I used Google Translate, since the article is in Spanish):

http://www.fernandoaparicio.net/pent...do-inesperado/

The recommendation there is basically to delete the index of the JCR repository. I did that, but now I have another error about hibuser and pentaho_user not being found.

Please see the attachment (catalina.txt) for the stack trace. When I try to log in to the user console, a Pentaho Initialization Exception occurs.

How do I fix this?

Thanks in advance.

Error with Jackrabbit DB while trying to configure Pentaho CE 6.1 with MySQL

Hi all,
I'm a newbie here, and I have installed Pentaho CE BI Server (6.1.0.1-196) on my PC (Ubuntu 14.04). Now I'm trying to use a MySQL database for the Pentaho repositories, so I'm mainly following these instructions, just ignoring the part about datamarket: https://help.pentaho.com/Documentati...F0/0P0/030/020
I've been able to configure the quartz and hibernate databases, but I haven't been able to configure the Jackrabbit repository. When I change the repository.xml file and start the BI Server, I get this error:

Code:

Caused by: org.pentaho.platform.api.engine.PentahoSystemException: PentahoSystem.ERROR_0014 - Error while trying to execute startup sequence for org.pentaho.platform.engine.services.connection.datasource.dbcp.DynamicallyPooledDatasourceSystemListener
    at org.pentaho.platform.engine.core.system.PentahoSystem$2.call(PentahoSystem.java:455)
    at org.pentaho.platform.engine.core.system.PentahoSystem$2.call(PentahoSystem.java:437)
    at org.pentaho.platform.engine.core.system.PentahoSystem.runAsSystem(PentahoSystem.java:416)
    at org.pentaho.platform.engine.core.system.PentahoSystem.notifySystemListenersOfStartup(PentahoSystem.java:437)
    at org.pentaho.platform.engine.core.system.PentahoSystem.access$000(PentahoSystem.java:89)
    at org.pentaho.platform.engine.core.system.PentahoSystem$1.call(PentahoSystem.java:368)
    at org.pentaho.platform.engine.core.system.PentahoSystem$1.call(PentahoSystem.java:365)
    at org.pentaho.platform.engine.core.system.PentahoSystem.runAsSystem(PentahoSystem.java:416)
    at org.pentaho.platform.engine.core.system.PentahoSystem.notifySystemListenersOfStartup(PentahoSystem.java:365)
    ... 16 more
Caused by: java.lang.NullPointerException
    at org.pentaho.platform.repository.JcrBackedDatasourceMgmtService.init(JcrBackedDatasourceMgmtService.java:67)
    at org.pentaho.platform.engine.core.system.objfac.AbstractSpringPentahoObjectFactory.retreiveObject(AbstractSpringPentahoObjectFactory.java:266)
    at org.pentaho.platform.engine.core.system.objfac.AbstractSpringPentahoObjectFactory.get(AbstractSpringPentahoObjectFactory.java:82)
    at org.pentaho.platform.engine.core.system.objfac.AggregateObjectFactory.get(AggregateObjectFactory.java:273)
    at org.pentaho.platform.engine.core.system.objfac.AggregateObjectFactory.get(AggregateObjectFactory.java:137)
    at org.pentaho.platform.engine.services.connection.datasource.dbcp.NonPooledDatasourceSystemListener.getListOfDatabaseConnections(NonPooledDatasourceSystemListener.java:136)
    at org.pentaho.platform.engine.services.connection.datasource.dbcp.NonPooledDatasourceSystemListener.startup(NonPooledDatasourceSystemListener.java:53)
    at org.pentaho.platform.engine.core.system.PentahoSystem$2.call(PentahoSystem.java:446)
    ... 24 more

I've been searching the forum and Google for a clue, but I haven't found anything that works. I've commented out the startup of the HSQLDB databases in <BISERVER_HOME>/tomcat/webapps/pentaho/WEB-INF/web.xml, which wasn't mentioned in the Pentaho documentation (although that seems to apply only to the hibernate and quartz databases). I've also deleted these caches:
<BISERVER_HOME>/tomcat/work/Catalina/*
<BISERVER_HOME>/tomcat/conf/Catalina/localhost/pentaho.xml (this file doesn't exist in my installation)
<BISERVER_HOME>/tomcat/temp/*
<BISERVER_HOME>/pentaho-solutions/system/osgi/cache/ (this directory doesn't exist in my installation)
<BISERVER_HOME>/pentaho-solutions/system/jackrabbit/repository

I've checked the database connection, the database is available and I'm able to connect.
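
For reference, the connectivity check I mention above is just a plain JDBC connection; a minimal sketch, where the URL, schema name and credentials are placeholders and the MySQL Connector/J jar is assumed to be on the classpath:

Code:

import java.sql.Connection;
import java.sql.DriverManager;

public class MySqlCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder URL and credentials; use whatever repository.xml points at.
        String url = "jdbc:mysql://localhost:3306/jackrabbit";

        try (Connection conn = DriverManager.getConnection(url, "jcr_user", "password")) {
            System.out.println("Connected to: " + conn.getMetaData().getURL());
        }
    }
}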

I'm attaching the repository.xml and catalina.out files, in case someone is able to see what I'm doing wrong.
Regards

Installing the Pentaho Enterprise version via the terminal

Hi.

I would like to install Pentaho Enterprise via the terminal on a server. The OS was installed without a desktop. I have downloaded the Pentaho .bin installer file, but when I launch it, it requires a desktop in order to follow the installation steps.

I would like to know if there's a way I can install Pentaho without using a desktop, i.e. using only a command in the terminal?

Thanks in advance.

pika

Combining 2 or more CSV files by appending into another

I have been trying to combine 2 CSV files into a master file containing sku, quantity and status, as follows:

file1.csv contains 12 columns and 500 records.
Using Scripting > Formula, I am pulling sku, quantity and status.
Using Output > Microsoft Excel Writer, I am writing the 3 columns (sku, quantity and status) to master.xls.

file2.csv contains 5 columns and 750 records.
Using Scripting > Formula, I am pulling sku, quantity and status.
Using Output > Microsoft Excel Writer, I am writing the 3 columns (sku, quantity and status) to master.xls.

Both Output > Microsoft Excel Writer steps have the settings shown in the screenshots below.
How can I combine the CSVs into one Excel file?

Capture1.jpg

Capture2.jpg

Attribute Selection

Hello,

I have several different datasets, each composed of around 1 million numerical attributes, a numerical class and 200 instances.
My objective is to filter out irrelevant or redundant attributes for each set, so as to obtain a smaller number of attributes correlated with the class, in order to afterwards build causal graphs/trees.
For the first step (the filtering out) I wanted to use the correlation-based feature selection (CFS) algorithm, and since I guess the memory will not be enough, I'm using the Linux command line.

I'm running the following command at the moment:
java -Xmx1024M -cp /software/weka-3.6.12/weka.jar weka.attributeSelection.CfsSubsetEval -s "weka.attributeSelection.BestFirst -D 1 -N 5 -Z " -i <the arff input file>

However, I have three questions:

1. I would like to obtain some correlation coefficient to the class for the attributes that remain after the filtering; that's why I included the -Z option in the command line. However, I'm not sure the output I obtain is the one I want. I get some Group:... and Merit:... lines, and I don't really understand what either of them means. What do they mean, and how could I calculate the correlation coefficient I need from that output?

2. I'm starting to get a bit confused, because I just read that there is also an attribute selection filter. So I tried both commands, but I'm not sure what the difference between them is, and which one I should use for my problem (see the API sketch after these questions):

The same as at the beginning of this message:
java -Xmx1024M -cp /software/weka-3.6.12/weka.jar weka.attributeSelection.CfsSubsetEval -s "weka.attributeSelection.BestFirst -D 1 -N 5 -Z " -i <the arff input file>

Or:
java -Xmx1024M -cp /software/weka-3.6.12/weka.jar weka.filters.supervised.attribute.AttributeSelection -S "weka.attributeSelection.BestFirst -D 1 -N 5 -Z " -E "weka.attributeSelection.CfsSubsetEval" -i <the arff input file> -o <the arff output file>

3. With small ARFF input files, like the ones given as examples with Weka, I manage to obtain some results. However, my input files are around 2 GB in size, so I can't stop the job from crashing with "Exception in thread "main" java.lang.OutOfMemoryError: Java heap space". I already tried to increase the memory with -Xms1024m and -Xmx1024m, but it still crashes; even with a smaller input file of about 700 MB it crashes. I looked on my machine, and the Initial Heap Size = 756 MB and the Maximum Heap Size = 12100 MB. So even adapting to these values, the job crashes.
How could I run the attribute selection with such big files?
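
For reference, here is roughly the same CFS + BestFirst selection driven from the Weka Java API instead of the command line; a minimal sketch where the class name and ARFF path are placeholders and the search mirrors the -D 1 -N 5 options above:

Code:

import weka.attributeSelection.AttributeSelection;
import weka.attributeSelection.BestFirst;
import weka.attributeSelection.CfsSubsetEval;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class CfsSelection {
    public static void main(String[] args) throws Exception {
        // Load the ARFF file; the class attribute is assumed to be the last one.
        Instances data = new DataSource("input.arff").getInstances();
        data.setClassIndex(data.numAttributes() - 1);

        // CFS subset evaluation with best-first search (-D 1 -N 5).
        CfsSubsetEval evaluator = new CfsSubsetEval();
        BestFirst search = new BestFirst();
        search.setOptions(new String[] {"-D", "1", "-N", "5"});

        AttributeSelection selection = new AttributeSelection();
        selection.setEvaluator(evaluator);
        selection.setSearch(search);
        selection.SelectAttributes(data);

        // Indices of the attributes kept by the search (plus the class index).
        System.out.println(java.util.Arrays.toString(selection.selectedAttributes()));
    }
}

As far as I understand, the weka.filters.supervised.attribute.AttributeSelection filter wraps the same evaluator and search, but outputs a reduced dataset rather than just reporting the selected subset.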

Thank you so much in advance for your answer,
Ana Castro

Pentaho EE User Login not Functioning

Hi.

I am using Pentaho Enterprise Edition. I wanted to access the user console.

I started the BA server by using
./ctlscript.sh start baserver

When I reached the login page (http://localhost:8080/pentaho/Login), I was asked to fill in the username and password. However, username=admin and password=password does not let me log in; it just shows a login error.

I tried looking for an answer in this community forum, but couldn't find one. Can anyone help?

Thanks in advance.

pikaCHEW

Comparing two streams and returning unmatched values?

Hi all,

I have what is hopefully a pretty straightforward task; I'm still very new to Pentaho. I have two queries set up right now: one pulls the unique records of one field on a table, and the other pulls the unique records of another field on a different table. What I need to do is compare the two lists and return the values that appear in one but not the other. Perhaps a clearer way of explaining it: if this were Excel, I'd do a VLOOKUP on each value in one of the query results and then filter, so I could see which ones appeared in the first query but not in the second. I've tried a Stream Lookup but it didn't seem to do what I was looking for. Can anyone provide some assistance?
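
To be concrete, this is the logic I'm after, sketched in plain Java outside of PDI (the key lists are made-up examples):

Code:

import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class UnmatchedValues {
    public static void main(String[] args) {
        // Made-up keys standing in for the two query results.
        List<String> queryOne = Arrays.asList("A001", "A002", "A003");
        List<String> queryTwo = Arrays.asList("A002", "A004");

        // Values present in the first result but missing from the second.
        Set<String> unmatched = new HashSet<>(queryOne);
        unmatched.removeAll(queryTwo);

        System.out.println("In query one but not in query two: " + unmatched);
    }
}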

Making a DB connection different for different environments (e.g. test/prod) errors out

Hi Everyone,

To make a DB connection using a parameter that can be used in different environments, I followed the exact steps posted in the link below.

http://wiki.pentaho.com/display/EAI/...oduction%29%3F


I edited kettle.properties and added a line for DB_HOSTNAME. Please see the screenshot below for reference.

kettle_properties.jpg

I opened shared.xml and could see that the server information has 'DB_HOSTNAME' as its value. Please see the screenshot below for reference.

shared_xml.jpg

When I run the transformation, I am getting the following error. Could you please help?

dynamic_db_connection_error.jpg


Thanks,
Raji.

Filtering specifics and whole values

Hello guys.

I have an issue that I can't solve with my limited knowledge of PDI.

So, I have a sheet with some values that I need to import, but I need to filter on specific, whole values:

filter.jpg

For example, in the image above, the rows with no fill weren't filtered, the rows filled in yellow were filtered and were expected, but the row filled in blue was filtered and wasn't expected.

I used a Filter Rows step, and with the "CONTAINS" function the unexpected value "SPARTAN" was also filtered because it contains "SPA":

filter2.jpg

By default, the content I need is a 3-character code (BMA, CAC, CPQ, CGO, LAG, REG, RJA, SPA, VIX).
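
To show what I mean, here is a small Java sketch of the difference between a CONTAINS-style substring match and an exact match against the 3-character codes (the sample values are made up):

Code:

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class WholeValueFilter {
    public static void main(String[] args) {
        // The 3-character codes that should pass the filter.
        Set<String> codes = new HashSet<>(Arrays.asList(
                "BMA", "CAC", "CPQ", "CGO", "LAG", "REG", "RJA", "SPA", "VIX"));

        for (String value : new String[] {"SPA", "SPARTAN", "REG"}) {
            boolean substringMatch = value.contains("SPA"); // like CONTAINS: "SPARTAN" passes
            boolean exactMatch = codes.contains(value);     // whole-value match: "SPARTAN" fails
            System.out.println(value + " substring=" + substringMatch + " exact=" + exactMatch);
        }
    }
}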

Can someone help me with another solution?

Sorry for my English. :p

Oracle wallet ?

Is there an example of how to use a connection string, instead of a username/password, when connecting to an Oracle DB in a Kettle transformation connection setup?

PDI and Windows 10

Hi,

I have just upgraded to Windows 10 (given the push by MS) and now PDI does not work correctly. I cannot join components.

Is there a fix on the way or a recommendation (other than going back to Windows 7/8)?

It seems there is a JIRA open but with no ETA.

Thanks,

Dashboard does not show charts when browser cache cleared

Hi,
I have a Pentaho Dashboard with multiple charts on it.
The dashboard does not show the charts whenever I clear the browser cache. However, when I refresh the page, I can see all the charts.
I checked the cause and found that the .js files are not getting loaded the first time after the cache is cleared.
The JS files are:
1)scripts-bootstrap.js?version=03855771d41198264aa9e2bd11490144
2)CDF.js?v=119ecb029dcff48f759df0ae07b751b7

How can I solve this issue?
Please help!!!

Thank You,
Ajinkya

How can I make a Pentaho workspace in Linux?

I want to know how I can make a workspace in Linux, so I can bring in my xlsx data or transformations just by writing the file name.

Supported Cassandra version

Can someone tell me which version of Cassandra is supported by Pentaho?
I'm trying to use Cassandra version 2.6 as a Cassandra Input in Pentaho 6.1.

Star model plugin in Pentaho Data Integration

Hi,

There was a star model plugin in PDI in one of the 4.x series versions.
I downloaded it and installed it in the latest version, i.e. PDI 6.0. It gives the following error:

org.pentaho.metadata.model.LogicalRelationship.<init>(Lorg/pentaho/metadata/model/LogicalTable;Lorg/pentaho/metadata/model/LogicalTable;Lorg/pentaho/metadata/model/LogicalColumn;Lorg/pentaho/metadata/model/LogicalColumn;)V
java.lang.NoSuchMethodError: org.pentaho.metadata.model.LogicalRelationship.<init>(Lorg/pentaho/metadata/model/LogicalTable;Lorg/pentaho/metadata/model/LogicalTable;Lorg/pentaho/metadata/model/LogicalColumn;Lorg/pentaho/metadata/model/LogicalColumn;)V
at org.pentaho.di.starmodeler.StarModelDialog.getRelationshipsFromFact(StarModelDialog.java:591)...

After a while, Spoon crashes.

Is anyone using the star model plugin? Can you please help me out with this?

JNDI Oracle Datasource

So I am trying to set up a JNDI connection for an Oracle database. Currently, when I connect to it through SQL Developer, I use a tnsnames.ora file. My question is: how do I figure out what driver I need to use and how to format my connection URL? I know the host, port, and service name; I just need to know what driver type I can use.

Code:

Example/type=javax.sql.DataSource
Example/driver=oracle.jdbc.OracleDriver
Example/user=user
Example/password=pass123
Example/url=jdbc:oracle:thin:@//<hostName>:<portNumber>/<serviceName> (if you have an Oracle service name)
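
For context, this is the thin URL format I'm aiming for, sanity-checked with plain JDBC outside of JNDI; host, port, service name and credentials below are placeholders, and I'm assuming the standard oracle.jdbc.OracleDriver class with the ojdbc jar on the classpath:

Code:

import java.sql.Connection;
import java.sql.DriverManager;

public class OracleThinCheck {
    public static void main(String[] args) throws Exception {
        // Thin-driver URL with a service name: jdbc:oracle:thin:@//host:port/serviceName
        String url = "jdbc:oracle:thin:@//dbhost.example.com:1521/ORCLPDB1";

        // The driver class name is the same value that would go into Example/driver.
        Class.forName("oracle.jdbc.OracleDriver");

        try (Connection conn = DriverManager.getConnection(url, "user", "pass123")) {
            System.out.println("Connected: " + conn.getMetaData().getDatabaseProductVersion());
        }
    }
}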

Thanks for the help.

Use Parameters in SQL Query

How can I use the parameters I am passing from a transformation to the report template in the SQL query I created in my JDBC connection? I have looked at various examples and cannot get the parameters to work. Can someone please show an example where values passed to a report are used in the report's SQL?

So, for example, let's say I am passing a parameter called "Date" into the report, and I want to use it in this SQL script:

Code:

SELECT Name, Address, PhoneNum from People
WHERE CreateDate = ${Date}

Is this possible?

Difference between "Percent_correct" and "Weighted_avg_true_positive_rate" metrics

I'm using the Weka Experimenter to evaluate the performance of a number of learning algorithms on a single dataset using 10 times 10-fold cross-validation. When I extract the results from the experiment, I get identical results for both the "Percent_correct" (i.e. accuracy) and the "Weighted_avg_true_positive_rate" (i.e. recall or sensitivity) metric. Even the standard deviation is identical (see the attached table).

Is this really correct? Accuracy and recall are calculated totally differently, so I have a hard time understanding how they can be identical for all 16 algorithms that I evaluate. I have verified these results on different versions of Weka (3.6.14 and 3.7.7) and also directly with my own Java program relying on the Weka API.
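
For reference, a minimal sketch of the kind of API check I mean (the classifier, dataset path and seed here are placeholders, not my actual setup):

Code:

import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class MetricComparison {
    public static void main(String[] args) throws Exception {
        // Placeholder dataset; the class attribute is assumed to be the last one.
        Instances data = new DataSource("dataset.arff").getInstances();
        data.setClassIndex(data.numAttributes() - 1);

        // One 10-fold cross-validation run.
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(new J48(), data, 10, new Random(1));

        // Percent_correct corresponds to pctCorrect(); Weighted_avg_true_positive_rate
        // corresponds to weightedTruePositiveRate() (scaled by 100 here for comparison).
        System.out.println("Percent correct:           " + eval.pctCorrect());
        System.out.println("Weighted avg TP rate x100: " + eval.weightedTruePositiveRate() * 100);
    }
}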

The data in the dataset consists of 4 ordinal class variables (see confusion matrix figure for more info).

I hope someone can shed some light on this. Thanks in advance.

/M

Skärmavbild 2016-06-07 kl. 22.12.18.jpg
Skärmavbild 2016-06-07 kl. 22.11.58.jpg

Legends in Heat Grid Chart

Hi, I'd like to know if there's a way to add a color legend to a Heat Grid chart. I'm using CDE 5.3.

Thanks for your help!