Pentaho Community Forums

Issues with PDF export

Hello everyone

I'm having trouble with the PDF export option of the Pentaho Report Designer.

The report I'm working on consists of a main report with a subreport placed in the Details section. I have also placed a Page of Pages function in the master report's Page Footer section.

While the "Print Preview" option works flawlessly (everything is displayed correctly), the exported PDF shows neither the page-of-pages value nor the contents of the subreport (the subreport's pages are rendered without content, apart from what comes from the master report).

Are there any known issues with PDF export?
Do I have to configure certain attributes/styles on these elements for them to render correctly in PDF?

I'm using Pentaho Report Designer 5.0.1 CE and Acrobat Reader 11.0.07.

Thanks in advance!

LDAP configuration issue in Pentaho 5.0.5

I am new to Pentaho.

I tried to configure LDAP from the Authentication page in Pentaho 5.0.5 and got the error below when saving the configuration.

Error : We couldnt find the Admin user {0}. Please try a different Admin user. BA Server message: {1}



I checked the Firebug console and found the error below:

PUT http://localhost:8380/pentaho/api/ld...=1401278808776 500 Internal Server Error 247ms bootstrap.js (line 1343)
"NetworkError: 500 Internal Server Error - http://localhost:8380/pentaho/api/ldap/config/setAttributeValues?ts=1401278808776"

Error: Unable to load ../../../../ldap/config/setAttributeValues?ts=1401278808776 status:500

Can you please let me know what the issue could be and how I can resolve it?

Thanks
Nrusingh

On Shared Connections

Hi!

I'm studying the behaviour of shared connections. Can somebody confirm whether the following is true?

- When a connection is shared, each job or transformation that uses it makes a local copy of it in its own file (a sketch of what that copied block looks like follows below)
- If the shared connection is deleted, those jobs/transformations can still use the same settings thanks to that copy
- If the shared connection is redefined (given the same name), how is that case handled? Is there any synchronisation?
- When does Kettle sync the local copy? If I run a scheduled .ktr with Pan, and that .ktr uses a shared connection that I am modifying from the GUI in another file, will the scheduled transformation see the change?
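For reference, this is roughly what a shared connection looks like in $HOME/.kettle/shared.xml; I'm reproducing the element names from memory, so treat it as a sketch rather than a spec. The local copy that a job/transformation keeps is essentially the same <connection> block embedded in the .kjb/.ktr file:

<?xml version="1.0" encoding="UTF-8"?>
<sharedobjects>
  <connection>
    <name>my_shared_db</name>  <!-- the name the .ktr/.kjb refers to -->
    <server>localhost</server>
    <type>POSTGRESQL</type>
    <access>Native</access>
    <database>dwh</database>
    <port>5432</port>
    <username>etl_user</username>
    <password/>
  </connection>
</sharedobjects>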

[D3Component Library] can't be mapped to a valid class when creating a new component

Hi there,

I'm trying to create a new D3Component based on Pedro Alves's post from last week.

I still get the "CDF: Object type d3DataMap can't be mapped to a valid class" error in the page log, and nothing in the server log.
Is there any documentation about this error?

Thanks.
NK

Report Designer Template

hello,

I have looked online for examples and tutorials about how to create report templates, but all I can find is information about creating reports themselves.

I have created a template and published it to the BI server to use it for interactive reporting, but I have several questions.

For example: can I pre-define content such as titles, subtitles or a date on the report template? Pentaho generates information such as the number of pages and the report date, but I want to control that from the template. Is that possible?

One other important question: we are generating reports automatically using wget. Can interactive reports be generated the same way?
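For context, this is the kind of call we use today for ordinary .prpt reports on the 5.x BA Server (the repository path and credentials here are made up, and I don't know whether interactive .prpti reports expose an equivalent endpoint):

wget --user=admin --password=password -O monthly_sales.pdf \
  "http://localhost:8080/pentaho/api/repos/:public:Sales:monthly_sales.prpt/generatedContent?output-target=pageable/pdf"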

Does anyone have a link to a good tutorial or documentation?

rgds,

uwe

Best way for batch conversion of Excel files with Java code

Hello, this is my first post on the forum - I've only been using Spoon for a few weeks. I'm an experienced Java developer working in Germany.

My question: I have a job that performs ETL on Excel sheets, extracting data and saving it to a PostgreSQL DB. So far so good. I have some Excel sheets that need extensive transformations first, such as deleting columns and empty rows, renaming and inserting header fields, filling in some missing values. This was formerly done via an Excel macro; I've written a Java class that uses Apache POI to do the job (much faster than the macro).

What's the best way of integrating this with my job? The Java code needs to read in the whole file to process it, so I'm not sure if a Step is the right way to go. Or should I write a custom plugin for the job?

My envisioned workflow would be to give the job/step a directory, where it then reads all the .xlsx/.xls files and performs the heavy transformations before moving on to the "easier" ETL steps. I've read as much as I could find about the User Defined Java Class (UDJC) step, but I'm really unsure whether that's what I need, and if so, how to implement processRow when I need to read in all the rows first.
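To make the question concrete, this is roughly the shape of the POI cleanup I already have, reduced to a sketch (the class and method names are mine, and the logic is simplified to just dropping empty rows); what I'm unsure about is where to hook something like this into the job, e.g. inside a UDJC step or up front via a Shell job entry that calls a small jar:

import java.io.FileInputStream;
import java.io.FileOutputStream;

import org.apache.poi.ss.usermodel.Cell;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.ss.usermodel.WorkbookFactory;

public class ExcelPreClean {

    /** Removes completely empty rows from the first sheet and saves the file in place. */
    public static void clean(String path) throws Exception {
        FileInputStream in = new FileInputStream(path);
        Workbook wb = WorkbookFactory.create(in);   // handles both .xls and .xlsx
        in.close();

        Sheet sheet = wb.getSheetAt(0);
        // Walk bottom-up so shifting rows does not invalidate the loop index.
        for (int r = sheet.getLastRowNum(); r >= 0; r--) {
            Row row = sheet.getRow(r);
            if (row == null || isEmpty(row)) {
                if (row != null) {
                    sheet.removeRow(row);
                }
                if (r < sheet.getLastRowNum()) {
                    // Close the gap left by the removed row.
                    sheet.shiftRows(r + 1, sheet.getLastRowNum(), -1);
                }
            }
        }

        FileOutputStream out = new FileOutputStream(path);
        wb.write(out);
        out.close();
    }

    /** A row counts as empty when every defined cell is blank (empty strings not handled here). */
    private static boolean isEmpty(Row row) {
        for (Cell cell : row) {
            if (cell.getCellType() != Cell.CELL_TYPE_BLANK) {
                return false;
            }
        }
        return true;
    }
}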

Thanks in advance,

John

How to convert seconds to time in the format hh:mm:ss

This example doesn't work:

<Measure name="Seconds" column="seconds" aggregator="sum">
  <CellFormatter>
    <Script>
      var date = new Date(value * 1000);
      return date.getUTCHours() + ":" + date.getUTCMinutes() + ":" + date.getSeconds();
    </Script>
  </CellFormatter>
</Measure>
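For comparison, here is a variant I would expect to behave better, assuming the aggregated cell value is exposed to the script as 'value' (as in the example above); it avoids the Date object and zero-pads each part, but I haven't verified it against every Mondrian version:

<Measure name="Seconds" column="seconds" aggregator="sum">
  <CellFormatter>
    <Script language="JavaScript">
      <![CDATA[
        var total = Math.floor(value);
        var h = Math.floor(total / 3600);        // hours, not wrapped at 24
        var m = Math.floor((total % 3600) / 60);
        var s = total % 60;
        function pad(n) { return (n < 10 ? "0" : "") + n; }
        return pad(h) + ":" + pad(m) + ":" + pad(s);
      ]]>
    </Script>
  </CellFormatter>
</Measure>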

How to work with Pentaho Analyzer

Hi,
Currently I am using JPivot as my OLAP tool.
In JPivot I see this message:
"JPivot is a community plug-in that has been provided for your convenience. If you are a Pentaho customer we encourage you to transition current Analysis Views to Pentaho Analyzer."

Are there any other OLAP front ends besides JPivot, Pivot4J and Saiku? Is the Pentaho analysis server another such tool? If it is, how do I use it? Can you please give me the steps to get it working in the Community Edition?

regards
Sanjoy

Issue running reports on the BI Server with the Cassandra plugin

Hi,

After looking around on the internet I was unable to find a way to enable the Cassandra connection on the BI Server.

Basically I generated a Pentaho Data Integration transformation (test_cassandra.ktr) in Kettle and used it in Report Designer on my local machine.

I then deployed it to the BI Server, and when I try to run it I get the following errors in the log:

..
..Caused by: org.pentaho.platform.api.repository2.unified.UnifiedRepositoryException: exception while getting file with path "/public/C:\Users\david.reis\Desktop\test_cassandra.ktr"
..15:20:56,420 ERROR [MetadataDatasourceService] Error import metadata: MetadataImportHandler.ERROR_0001 - !MetadataImportHandler.ERROR_0001_IMPORTING_METADATA! status = 1
15:20:56,420 ERROR [MetadataDatasourceService] Root cause: null
...

How can I enable the Cassandra data source on the BI Server?

I'm a beginner with Pentaho, so I apologize if this is a trivial problem :)

Thanks,
David

Saiku: conditional formatting, how to do it?

Hi there, we are using Pentaho 5.0.1 CE.

Is there any documentation (or a code snippet) for enabling conditional formatting on a Saiku table?
Thanks

Mondrian seems to ignore FORMAT_STRING on time dimension calculated members

Hi,

I have a calculated member on the time dimension to show a progression rate between two years. Naturally, this rate should be expressed as a percentage.
But it seems my FORMAT_STRING is simply ignored, and the format of the measure the calculated member is crossed with is used instead (viewing in JPivot).

Here is my calc member definition:
with member [Temps.Temps par mois].[TAUX_ATTEINT] as
  '([Temps.Temps par mois].[2013] / [Temps.Temps par mois].[2012])',
  FORMAT_STRING = "Percent",
  MEMBER_CAPTION = "Ratio"

When crossed with a measure that has an integer format, the ratio is displayed as an integer (so mostly 0).

If I create a calculated member on the Measures dimension, then the format is taken into account just fine.

Is this a known bug/limitation? Is there a workaround?
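One thing I still have to try, mentioned in various Mondrian threads (so treat it as a guess rather than a confirmed fix), is giving the calculated member an explicit SOLVE_ORDER higher than the measures', so that its FORMAT_STRING wins when the cell format is resolved:

with member [Temps.Temps par mois].[TAUX_ATTEINT] as
  '([Temps.Temps par mois].[2013] / [Temps.Temps par mois].[2012])',
  FORMAT_STRING = "Percent",
  SOLVE_ORDER = 10,
  MEMBER_CAPTION = "Ratio"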

Thanks in advance,
Franck

HL7 Output

Hi All,

Is there such a thing as an HL7 Output step? I only ask because I would like to use Pentaho as a hub for my HL7 messages. I would like to import my messages, modify them, and then send them on to the waiting application.

Thanks,
Ian.

Measures based on Dimension Column

Ok, I'm feeling REALLY n00b-ish here, because I can't seem to think my way out of this.

Assume a fact table (with degenerate dimensions):

Order#  Placed      Shipped     PaidBy  Client
1       2014-05-01  2014-05-03  Cash    A
2       2014-05-01  2014-05-07  MC      B
3       2014-05-02  2014-05-03  Visa    A
4       2014-05-03  2014-05-09  Cash    C

And that you want to be able to produce this output:

Date        OrdersPlaced  OrdersShipped
2014-05-01  2             0
2014-05-02  1             0
2014-05-03  1             2
etc.

But also:

               Client
               A  B  C
OrdersPlaced   2  1  1
OrdersShipped  2  1  1

And also:

        Client
Method  A  B  C
Cash    1     1
Visa    1
MC         1

How do I structure my schema?

I thought about converting it to a snowflake, but then I'd end up with two fact tables - Payments and OrdersByTime

Am I missing something really simple, or do we actually have to do that restructuring and then use two cubes and a virtual cube?
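To make the question concrete, this is the single-cube layout I had in mind, written as a rough Mondrian 3.x sketch with invented table and column names (a shared Date dimension used twice for the Placed/Shipped roles, with Client and payment method kept as degenerate dimensions on the fact table):

<Cube name="Orders">
  <Table name="orders_fact"/>

  <!-- two usages of the same shared Date dimension (role-playing) -->
  <DimensionUsage name="Placed Date"  source="Date" foreignKey="placed_date_key"/>
  <DimensionUsage name="Shipped Date" source="Date" foreignKey="shipped_date_key"/>

  <!-- degenerate dimensions read straight from the fact table -->
  <Dimension name="Client">
    <Hierarchy hasAll="true">
      <Level name="Client" column="client" uniqueMembers="true"/>
    </Hierarchy>
  </Dimension>
  <Dimension name="Payment Method">
    <Hierarchy hasAll="true">
      <Level name="Method" column="paid_by" uniqueMembers="true"/>
    </Hierarchy>
  </Dimension>

  <Measure name="Orders" column="order_nr" aggregator="count"/>
</Cube>

The cross-tabs by Client and by payment method fall out of this directly; it's the first output (OrdersPlaced and OrdersShipped side by side against a single Date axis) that seems to force either two cubes plus a virtual cube or a restructuring of the fact table, which is what prompted the question.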

PDI 4.4 not utilizing CPU on live node.

Hi,

I am running a transformation that has 10 copies of a UDJC step, with 2 million input records, on a big server (24-core CPU with 64 GB RAM). It is very, very slow compared to the execution on my laptop, which has a 4-core CPU and 8 GB of RAM.

mpstat -P ALL says all the cores are 98% idle while the transformation runs on the big machine.

Big machine configuration:
PDI version: 4.4
Java version: 1.7
Number of copies of the UDJC step: 10
Drive: physical and local (not a network-mapped drive)

OS:
Linux ZHQSDP2A 2.6.32-358.28.1.el6.x86_64 #1 SMP Fri Nov 15 11:45:55 EST 2013 x86_64 x86_64 x86_64 GNU/Linux

Execution time: 45 minutes for 2 million rows


My laptop:
PDI version: 4.4
Java version: 1.6
Number of copies of the UDJC step: 3

OS:
Linux J0678R1 3.2.0-54-generic #82-Ubuntu SMP Tue Sep 10 20:08:42 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

Execution time: 20 minutes for 2 million rows

Please suggest what I should verify and how to track this down.

The transformation is just (Read File ---> UDJC ---> Write to file).

The UDJC step does an XSLT transformation using Saxon 9.0.

Can an entry in a fact table point to two entries in the same dimension?

For example, consider this case:
A meeting M1 can have multiple participants P1, P2, P3.
How can I represent this in a star schema when I have a meeting fact table, a participants dimension and a meeting dimension?

I can only think of a fact table like this:

M1 key, P1 key
M1 key, P2 key
M1 key, P3 key

This means that if a meeting has N participants, there will be N entries in the fact table.
Is this the right design?
Is there a better approach?


Thanks


How to convert seconds to time in the format hh:mm:ss in Mondrian

I have an Integer measure (it contains seconds) and I would like to convert those seconds to the format hh:mm:ss.
It works with the SEC_TO_TIME function in a MySQL query, but how do I do it in MDX?

Using Pentaho for reporting on varied Metadata sources?

Hello,

Can Pentaho be used to report on the metadata residing in an Informatica repository and in the models (native and custom) of Informatica Metadata Manager?
If it is possible, please advise where to dig next to explore the option further.

Thank you
V

Data Integration job with composite keys

All,

We have an HBase job running correctly with Data Integration, and it is extracting data from HBase into our MySQL tables. We now want to create another job that will extract from another HBase table that has the following column family with a composite key. The structure in HBase looks like:

TQ:1401216565661:1 column=d:vut, timestamp=1401173398283, value=DATA1
column=r:1:lqgbfp, timestamp=1401173398283, value=VALUE1

So most of our data is in column family d, and we can get to it by mapping column family d. But when we use the HBase Data Integration step and set the column to r under the d mapping, we don't get any data back into MySQL.

How to alter order of fields?

This must be one of those stupid questions with an incredibly easy answer... but I cannot find how to specify the order of the fields that I need to use in a Table Input step.

I know that the number of fields must match the number of '?' placeholders in my SQL statement. And of course I can create a new field by copying an existing one just to get the required order... but that seems somewhat illogical to me, even if it's what I have been doing to work around the problem.

Today I was tired of creating fields and thought someone must have a better idea :D :D