
Big doubt: Pentaho underlying DB on PostgreSQL, but source data on Oracle...?

Hi all and many thanks for this forum!

I'm quite new to Pentaho and I'm facing an issue using a Schema Workbench schema from Saiku Analytics.

I searched this forum for help, but unfortunately without success...

This is my environment:
- Linux Red Hat
- Pentaho BI Server CE 5.3
- Schema Workbench CE 3.9
- Saiku Analytics CE 3.1

Note: the Pentaho underlying DB users and tables (hibernate, quartz, ...) live in PostgreSQL, whereas my source data (fact and dimension tables) live in Oracle.

I successfully created a cube with Schema Workbench, reading the data structure from two Oracle schemas (using the Oracle connection).
Then I published it to Pentaho using the Postgres connection (JNDI).
Finally, after some tricky configuration, I can see it in Saiku!

But, unfortunately, when I try to drag and drop tables to build the analysis I receive the following error:

PSQLException: ERROR: relation "..." does not exist Position: ...

In other posts, someone said that's because of uppercase table names in the .xml schema file, but I don't think that's the case here.

My actual doubt is this: is it caused by having the Pentaho system tables in PostgreSQL and the cube data in Oracle? I suspect so, because the exception is raised by Postgres (PSQLException), not by Oracle as I would expect. It seems that Saiku is searching for the tables in Postgres rather than in Oracle.
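For reference, a Mondrian schema runs its fact and dimension SQL through whatever connection it was published with, so publishing against the Postgres JNDI name sends the cube's queries to PostgreSQL. A minimal sketch of a dedicated Oracle JNDI resource in Tomcat's context.xml (the jdbc/OracleDS name, host, SID and credentials are hypothetical):

Code:

<Resource name="jdbc/OracleDS" auth="Container"
          type="javax.sql.DataSource"
          driverClassName="oracle.jdbc.OracleDriver"
          url="jdbc:oracle:thin:@dbhost:1521:ORCL"
          username="source_user" password="source_pwd"
          maxActive="20" validationQuery="select 1 from dual"/>

Re-publishing the schema against a JNDI name like this, instead of the Postgres one, should route the cube's SQL to Oracle while the hibernate/quartz tables stay in PostgreSQL.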

Can you please help me?

Thank you very much.
alessandro

MDX query behaving erratically with degenerate dimension on CDE dashboard

Hi
Following is my query:

WITH
SET [sp] AS
  ([time.fin].[day].[${parDate}] : [time.fin].[day].[${partoDate}])

SET [factory] AS
  {[organization].[org].[Fact1], [organization].[org].[Fact2], [organization].[org].[Fact3]}

MEMBER [btype].[b] AS
  AGGREGATE(IIF('${param}' = 'All',
      [btype].[type].Members,
      [btype].[type].[${param}]
  ))

SELECT
  NON EMPTY {[factory]} ON COLUMNS,
  NON EMPTY {[sp]} ON ROWS
FROM [cube1]
WHERE ([btype].[b], [Measures].[qty])

In this query, [btype] is the degenerate dimension. When I execute it from CDE I sometimes get a java.lang.NullPointerException; the behaviour is very random. Often it returns a result, and on the default load it always succeeds, but when the date range changes I randomly get the exception.

My fact table has 5 regular dimensions and 3 degenerate ones.

Meanwhile, I have also observed that if I add more grain to the query, the exception no longer appears. But adding it doesn't produce the result I need.
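One pattern worth trying (a sketch, not a confirmed fix): replace the direct member lookup [btype].[type].[${param}] with a Filter over the level's members, so a parameter value that has no matching member in the degenerate dimension yields an empty set instead of a null member inside AGGREGATE:

Code:

MEMBER [btype].[b] AS
  AGGREGATE(IIF('${param}' = 'All',
      [btype].[type].Members,
      Filter([btype].[type].Members,
             [btype].CurrentMember.Name = '${param}')
  ))

If the NullPointerException disappears with this form, that would point at certain date-range/parameter combinations resolving to non-existent members.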

Extract meta-data information at runtime

Hi All,

I want to read the meta-data information from the source database and then apply filters based on the columns.
The input is dynamic and can come in any form (table input, XLS, CSV, etc.).

The problem is that the column names are not known in advance; they are determined at runtime. :confused:

Can someone help me with how to extract the meta-data info, parse it, and then, based on the result set, populate the destination DB?
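For the relational case, a minimal sketch of reading column metadata at runtime with plain JDBC (the connection URL and table name are hypothetical; in a PDI step the same information is available as the incoming row metadata):

Code:

import java.sql.*;

public class TableMetadata {
    public static void main(String[] args) throws SQLException {
        // Hypothetical source connection; substitute your own URL and credentials.
        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/source", "user", "pwd")) {
            DatabaseMetaData md = con.getMetaData();
            // One result row per column of my_table: name, JDBC type, etc.
            try (ResultSet rs = md.getColumns(null, null, "my_table", "%")) {
                while (rs.next()) {
                    System.out.println(rs.getString("COLUMN_NAME")
                            + " : " + rs.getString("TYPE_NAME"));
                }
            }
        }
    }
}

The same result set also carries COLUMN_SIZE and IS_NULLABLE, which is usually enough to build filters and the destination DDL dynamically.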

Any help is appreciated.

Thanks,
Aparna

[CCC2] Represent null values on a chart?

Hello,

I am trying to show null values (absence of value) on some line/bar charts. I want to do something similar to this example:
http://www.webdetails.pt/ctools/ccc/...me-series-line

On that chart, not every series has a value in every category. When there is no value, nothing is drawn for it. Just what I need ;D!

I can get that behaviour simply by removing the entries with null values from the chart's datasource, as long as another series has a value for the category I am removing.

The problems I am facing are:
- If every series value for a given category is null, that category is no longer shown. I need all categories to appear on the axis, just with no value plotted.
- Taking this to its extreme, if all values for all series/categories are null, the chart isn't drawn at all. I would like to see at least the chart's axes :S

Have any of you encountered this problem before? Any tips will be highly appreciated.

Until now I have been casting null values to 0, but as you have probably all encountered before when working with data, a null (empty, absence of value) is not the same as a 0 value for a measurement.
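In CCC2 this behaviour is steered by chart options; a sketch of the two that look relevant here (option names as documented for CCC, so worth verifying against your version):

Code:

// In a CDE chart component's preExecution, or in a plain CCC chart definition:
function() {
    // Keep null datums, so their categories still occupy a slot on the axis.
    this.chartDefinition.ignoreNulls = false;
    // Leave gaps where data is null instead of interpolating values.
    this.chartDefinition.nullInterpolationMode = 'none';
}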

Best Regards.

Embedding CDE Dashboard in 5.3

I tried to follow Pedro Alves' blog http://pedroalves-bi.blogspot.com/20...taho-with.html on embedding CDE dashboards in your own page. However, these instructions seem to apply to the 5.0 version of Pentaho, not 5.3.

Can anyone point me to a more up-to-date resource on the subject? I'm particularly interested in the Div Integration method as opposed to the iframe one.

Cheers.

Aggregation Tables Not Working

I've been struggling to get aggregation tables to work. Here is what my fact table looks like:

Code:

employment_date_id
dimension1_id
dimension2_id
dimension3_id
dimension4
dimension5
measure1
measure2
measure3



I'm collapsing employment_date_id from year, quarter, and month down to just the year, but every other column is included as is from the fact table. This is what my aggregation table looks like:

Code:

yearquartermonth_year
dimension1_id
dimension2_id
dimension3_id
dimension4
dimension5
measure1
measure2
measure3
fact_count

I'm only collapsing the year portion of the date; the remaining fields are left as is. Here is my configuration:

Code:

<AggFactCount column="FACT_COUNT"/>
<AggForeignKey factColumn="dimension1_id" aggColumn="dimension1_id"/>
<AggForeignKey factColumn="dimension2_id" aggColumn="dimension2_id"/>
<AggForeignKey factColumn="dimension3_id" aggColumn="dimension3_id"/>

<AggMeasure name="[Measures].[measure1]" column="measure1"/>
<AggMeasure name="[Measures].[measure2]" column="measure2"/>
<AggMeasure name="[Measures].[measure3]" column="measure3"/>

<AggLevel name="[dimension4].[dimension4]" column="dimension4"/>
<AggLevel name="[dimension5].[dimension5]" column="dimension5"/>
<AggLevel name="[EmploymentDate.yearQuarterMonth].[Year]" column="yearquartermonth_year"/>

I'm mostly copying the second aggregation-table example from the documentation. Most of my columns are not collapsed into the table and are foreign keys to the dimension tables.
The query I'm trying to execute is something like:

Code:

select {[Measures].[measure1]} on COLUMNS, {[EmploymentDate.yearQuarterMonth].[Year]} on ROWS from Cube1

The problem is that when I debug it and turn on the logging, I see bit keys that look like this:

Code:

AggStar:agg_year_employment
bk=0x00000000000000000000000000000000000000000000000111111111101111100000000000000000000000000000000000000000000000000000000000000000
fbk=0x00000000000000000000000000000000000000000000000000000001101111100000000000000000000000000000000000000000000000000000000000000000
mbk=0x00000000000000000000000000000000000000000000000111111110000000000000000000000000000000000000000000000000000000000000000000000000

And my query's bit pattern is:

Code:

Foreign columns bit key=0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001
Measure bit key=        0x00000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000000000000000

And so my aggregation table is skipped, even though these are exactly the columns folded into the table; the bit positions simply don't line up between the query's keys and the aggregation table's. The other thing I find strange is that some of the columns are collapsed into the table, yet not all of the AggForeignKeys are included as bits. Does that mean a query touching those columns will skip this aggregation table? That's counter to what I had planned: as long as a query is on year boundaries, it should use this aggregation table.

I don't understand why this isn't working or why the bit keys aren't built properly. I've tried debugging the Mondrian code, but figuring out which column maps to which position in the bit keys is not obvious. I feel like this shouldn't be this hard, but nothing out there really explains it well, and this aggregation-table architecture is really easy to break.
What am I doing wrong, and why doesn't my solution work?
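For debugging matches like this, two things usually help (a sketch based on the Mondrian documentation; adjust names to your version): make sure aggregate recognition is enabled in mondrian.properties, and raise the log level of the aggregate matcher so it prints why each AggStar is accepted or skipped:

Code:

# mondrian.properties: aggregate tables must be both read and used
mondrian.rolap.aggregates.Read=true
mondrian.rolap.aggregates.Use=true

# log4j configuration: trace the agg matcher's decisions, including bit keys
log4j.logger.mondrian.rolap.aggmatcher=DEBUG

Note that bit positions reflect Mondrian's internal star column ordering rather than the order in the schema XML, which is part of what makes manual comparison of the keys so confusing.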

BTable in IFrame takes forever to load and never displays

Hi,

I've created a dashboard in 5.3 that includes charts and a BTable at the bottom. When I view the dashboard inside Pentaho, the BTable shows up nicely without issues. However, when I embed the dashboard in my application via an iframe, the charts show up but the BTable never does; a spinner shows that it's loading, but it never finishes.

Even if I remove the BTable from the layout, as long as it is still declared as a component in the dashboard it behaves the same. The issue is only resolved if I delete it from the layout and remove it as a component.

Any ideas on what could be going on with including the BTable in an IFrame?

Why can't I refresh the repository cache?

I'm new to Pentaho. I want to refresh the repository cache, but it doesn't work. The BI Platform version I'm using is 3.5.0. What can I do to make refreshing the repository cache work?

Creating an Excel report from Pentaho

I need a report in the following format:

Heading of the report in row 1

Then the data, with data headers, from row 2 onwards

How to transform the following SQL into a Pentaho transformation

How can I turn the following SQL into a Pentaho transformation?

CREATE TABLE IF NOT EXISTS table
(
`col1` VARCHAR(7) DEFAULT NULL,
`col2` VARCHAR(200) DEFAULT NULL,
`col3` VARCHAR(50) DEFAULT NULL,
`col4` DATE,
`col5` VARCHAR(100)
-- `col6` VARCHAR(100)
);

Shallow copy of Instances

hi,

I'm a bit confused about using add with Instances and hope someone can clarify. The API says that the add method in Instances
"Adds one instance to the end of the set. Shallow copies instance before it is added."
I understand a shallow copy to mean that it just copies the reference to the original Instance, but it seems I am wrong, or the API uses a different meaning of shallow copy than I do. If I run the code below, for example, all the class values in second are set to -1, but not in first. Have I misunderstood the API, or am I misusing it? It would explain a lot if deep copies (or what I mean by deep copies) are in fact being made.

Instances first = ClassifierTools.loadData(DataSets.uciPath + "vowel\\vowel");
// Create an empty set of instances with the header of first
Instances second = new Instances(first, 0);
// I want to make a shallow copy of first into second, i.e. copy references only
for (Instance ins : first) {
    second.add(ins);
}
// Change the class label in second; this should change the labels in first if it's a shallow copy
for (Instance ins : second)
    ins.setClassValue(-1);
// Compare class labels. They are unchanged in first
for (int i = 0; i < first.numInstances(); i++)
    System.out.println(" Class of first " + first.instance(i).classValue()
            + " class of second = " + second.instance(i).classValue());

Sorry if this is well known; I have looked but have not found any mention of it.
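For what it's worth, a reading of the Weka source (worth verifying against your version) suggests what's happening: add copies the Instance object itself, and although that copy initially shares the underlying value array with the original, setters such as setClassValue clone the array before writing (copy-on-write). So the behaviour you observe is expected; second never aliases first's values once you modify them. A sketch that makes this visible:

// second.add(ins) stores a copy of ins, not the reference itself.
Instance original = first.instance(0);
second.add(original);
Instance stored = second.instance(second.numInstances() - 1);
System.out.println(stored == original);     // false: a distinct Instance object
stored.setClassValue(-1);                   // copy-on-write clones the values first
System.out.println(original.classValue());  // unchanged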

Pentaho User Console - Pull data from Excel

Hello Experts,

I am a newbie to the Pentaho User Console, and I am trying to pull data from Excel.

I can see only CSV in the data source options; is there any way to pull data from Excel using the Pentaho User Console?

Thanks for your guidance in advance.

Thank you!
Sangeetha

Prediction - Naive - J48 classifier

I am doing a college assignment. I have weather data sets from various nearby stations; using station X and station Y, I must predict the values of station Z.
I have the following CSV files.


StationX.csv is as follows:


Date time,Station name,Temp
22/04/2015 11:10,X,12.5
22/04/2015 11:20,X,12.6
22/04/2015 11:30,X,12.4
22/04/2015 11:40,X,12.5
22/04/2015 11:50,X,12.2
22/04/2015 12:00,X,12.1
22/04/2015 12:10,X,12.2
22/04/2015 12:20,X,12.8
22/04/2015 12:30,X,12.9
22/04/2015 12:40,X,12.2


StationY.csv is as follows:


Date time,Station name,Temp
22/04/2015 11:10,Y,12.1
22/04/2015 11:20,Y,12.4
22/04/2015 11:30,Y,12.2
22/04/2015 11:40,Y,12.5
22/04/2015 11:50,Y,12.2
22/04/2015 12:00,Y,12.3
22/04/2015 12:10,Y,12.4
22/04/2015 12:20,Y,12.6
22/04/2015 12:30,Y,12.3
22/04/2015 12:40,Y,12.4


StationZ.csv is as follows:


Date time,Station name,Temp
22/04/2015 11:10,Z,
22/04/2015 11:20,Z,
22/04/2015 11:30,Z,
22/04/2015 11:40,Z,
22/04/2015 11:50,Z,
22/04/2015 12:00,Z,
22/04/2015 12:10,Z,
22/04/2015 12:20,Z,
22/04/2015 12:30,Z,
22/04/2015 12:40,z,




How do I approach solving this problem using Weka (Naive Bayes and J48)?
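A minimal Weka sketch for getting these files into one training table (file names from the post; everything else is an assumption, and note that Naive Bayes and J48 need a nominal class, so station Z's numeric Temp would have to be discretized first, or a regression scheme used instead):

import java.io.File;
import weka.core.Instances;
import weka.core.converters.CSVLoader;

public class LoadStations {
    public static void main(String[] args) throws Exception {
        Instances x = load("StationX.csv");
        Instances y = load("StationY.csv");
        // The rows are already aligned by timestamp, so a side-by-side
        // merge yields one instance per ten-minute slot.
        Instances merged = Instances.mergeInstances(x, y);
        // Station Z's Temp, merged in the same way, would become the class.
        merged.setClassIndex(merged.numAttributes() - 1);
        System.out.println(merged);
    }

    private static Instances load(String name) throws Exception {
        CSVLoader loader = new CSVLoader();
        loader.setSource(new File(name));
        return loader.getDataSet();
    }
}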




Thanks,
Pavan

Running BI Server on Mac

Hello everyone.

Does anyone know where I can find documentation on installing/running the BI Server (community version) on a Mac?

Many thanks

Where can I find the pdi-knowledgeflow-plugin-ee-deploy.zip plugin?

I'm not able to see the plugins tab in the Knowledge Flow editor in the KnowledgeFlow step. Where can I find the pdi-knowledgeflow-plugin-ee-deploy.zip plugin mentioned in the "Using the Knowledge Flow Plugin" wiki page?

Thanks.

Integrating CAS with Pentaho

I need to integrate CAS with Pentaho, but I'm not sure exactly how to go about it. I am using Pentaho Community Edition biserver-ce, and I have cas.war unpacked in my webapps directory. But (very newbie question) I'm not really sure how the actual linking of the Pentaho webapp with CAS occurs, such as which files I need to change so that single sign-on redirects from the Pentaho login to the CAS login screen.

Which JavaScript packages are available in Kettle?

My knowledge of JavaScript is mostly limited to what I've had to put together in JavaScript transformation steps. Recently I had to make use of some code I found on the web involving httpClient, using something like:

method = new Packages.org.apache.commons.httpclient.methods.GetMethod(url);

I'm wondering what these 'Packages' are. Is there a list of packages that are already included in Kettle? Where can I find more documentation on them?
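For context, and hedged as general Rhino behaviour rather than Kettle-specific documentation: the JavaScript step runs on Mozilla Rhino, and Packages is Rhino's root object for reaching any Java class on the JVM classpath. So there is no fixed list; whatever jars sit in Kettle's lib/ directory (commons-httpclient among them, in your version) are reachable this way. A sketch:

// Any class on the classpath can be instantiated through Packages.
var buffer = new Packages.java.lang.StringBuffer();
buffer.append("built through Rhino's Java bridge");

// Classes under java.* are also reachable without the Packages prefix.
var list = new java.util.ArrayList();
list.add(buffer.toString());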

Thanks,
Sterling

Line chart start position

Hi ,


[Attached image: ccclinechart.jpg]

The CCC line chart doesn't start drawing at the axis lines.

Please let me know how to make the line chart start at the axis lines rather than part-way in.
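If the gap is the padding CCC reserves between the axes and the first/last points, the axisOffset chart option may be what you want (a sketch; option name from the CCC chart options, so verify it for your version):

// In a CDE chart component's preExecution, or in the chart definition:
function() {
    // 0 removes the offset, so the line starts right at the axis.
    this.chartDefinition.axisOffset = 0;
}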

Thanks