Channel: Pentaho Community Forums

Only show step execution messages on the console.

Question from a newbie:

When I execute a job interactively, it shows the logs in the console.
The problem is that it shows a lot... and I need the detailed log in a file.

What I want to do is:
get the log file using Kitchen's log parameter,
and in the console show a message for each step that is running...

For example:
Executing extraction...
Executing calculations...
Executing inserts....

and so on.

Is there a way to accomplish that? Any components?

Thanks

Daniel

End Of Life-ing CDB!

I'm a huge fan of trying a lot of things: throw ideas out to the ecosystem and let popularity decide which ones should survive and which shouldn't.



CDB, the Community Data Browser, ended up in the second category. While the idea is still very, very valid (building up a central query repository), the execution wasn't... quite there.

We converted it to 5.x, but a recent upgrade in Saiku made CDB stop working, and it's just not worthwhile for us to maintain it. We're EOL'ing it and have removed it from the Marketplace. Old builds are still available on the homepage.

If anyone wants to pick up the project... the code's available on GitHub, just let me know!


-pedro




Locale specific currency

I'm having trouble getting a report to use locale-specific currency formatting in a report created with Report Designer (3.9.1). I'd like it to show $45.00 for the US but ¥45 for Japan. Specifically, I have a field named Revenue and am struggling with what to put in the Report Designer attributes to get this behavior.
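For reference, this is the locale-dependent behavior I mean, shown with plain Java (java.text.NumberFormat). It only illustrates the output I'm after, not the Report Designer attribute configuration itself, and the value 45.0 is just a placeholder for the Revenue field:

Code:

import java.text.NumberFormat;
import java.util.Locale;

public class CurrencyDemo {
    public static void main(String[] args) {
        double revenue = 45.0; // placeholder value standing in for the Revenue field

        // US locale prints the amount as "$45.00"
        System.out.println(NumberFormat.getCurrencyInstance(Locale.US).format(revenue));

        // Japanese locale prints the yen amount with no fraction digits, e.g. ¥45
        System.out.println(NumberFormat.getCurrencyInstance(Locale.JAPAN).format(revenue));
    }
}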

Thanks in advance for any help you can give.

Designing sub-reports using the Java Method Invocation datasource

Hi,

I am using a Java class as a custom data source and am able to invoke a Java method from a simple report. But when designing sub-reports, I try to invoke a Java method from the sub-report by passing the method's required parameters from the parent report, and the method always receives the parameter values as null.

So, is it possible to design sub-reports using the Java Method Invocation custom data source?
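To make the setup concrete, here is a simplified sketch of the kind of static method I am invoking. The class and method names are made up for illustration; the assumption is that the sub-report's parameter mapping supplies regionId, but in my case it always arrives as null:

Code:

import javax.swing.table.DefaultTableModel;
import javax.swing.table.TableModel;

public class SubReportData {

    // Hypothetical method invoked through the Java Method Invocation data source.
    // The regionId argument is expected to come from a parameter passed down by
    // the parent report, but it always arrives as null.
    public static TableModel getRevenueByRegion(String regionId) {
        DefaultTableModel model =
                new DefaultTableModel(new Object[]{"region", "revenue"}, 0);
        if (regionId == null) {
            // Returning an empty model instead of throwing makes the broken
            // parameter mapping visible in the preview.
            return model;
        }
        model.addRow(new Object[]{regionId, 1234.56});
        return model;
    }
}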

Property in Level element

Hi,
I'm new to Mondrian, so I'm trying to understand how it works and its potential.
Reading the Pentaho manual about Mondrian, I found the use of the Property tag in the schema XML.

I'd like to know when the Property tag is used, and how.

In my case, I have to create a USER dimension that does not have a hierarchy but only a list of attributes; in fact the table is this:

TABLE USER:

USNAME,NAME,SURNAME

The table has no primary key set on the DB, but obviously USNAME is unique in the table.
So what I expect is to be able to select one or more User attributes in the same pivot table.

How can I define my dimension to reach my target?

I'm creating the User dimension with a User level and, for this level, I add Name and Surname as properties.

Code:

<Dimension type="StandardDimension" visible="true" name="User">
    <Hierarchy name="User" visible="true" hasAll="true" allMemberName="All" allMemberCaption="All" allLevelName="tot" primaryKey="USNAME" description="User">
      <Table name="users">
      </Table>
      <Level name="user" visible="true" table="users" column="USNAME" nameColumn="USNAME" type="String" uniqueMembers="true" levelType="Regular" description="Utente">
        <Property name="name" column="NAME" type="String" description="NAME">
        </Property>
        <Property name="surname" column="SURNAME" type="String" description="SURNAME">
        </Property>
      </Level>
    </Hierarchy>
  </Dimension>

In this way, I can't select either of them; if I use the User dimension, the Mondrian cube shows only the USNAME, so I suspect this is not the right way to define my schema.

Do you have any suggestion?
Regards
Alessandro

Set a JavaScript variable with a compound statement including an if

I'm using Pentaho to move some CSV data into XML data and I need a little JavaScript help.
I'm setting up groups of elements that will get compiled under larger elements later. One element has name information, but I want to leave out the middle-name element if the CSV file is null in that field.

Code:

var ind ='<ind><name><last>'+LNAME+'</last><first>'+FNAME+'</first>'+if(ISUBJ_MNAME==null){'<middle>'+ISUBJ_MNAME+'</middle>';}+'</name> </ind>'

Is there something glaringly wrong with this? I feel like it should work, but this is literally my first exposure to JavaScript.



Thank you

Brand new to AS/400 and Pentaho, can someone point me in the right direction?

Hello everyone,
These tools look amazing, and I can't wait to get started using them. I just started with a new company and want to show them some great things. However, I have never used AS/400 before; is there any sort of tutorial or step-by-step guide I can follow on how to connect to an AS/400 and pull some data?
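From what I have read so far, AS/400 access from Java tools goes through the IBM jt400 JDBC driver, so I assume PDI needs that jar on its classpath (e.g. in data-integration/lib). A minimal plain-JDBC sketch of what I think the connection looks like; the host, library, table, and credentials are placeholders:

Code:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class As400Probe {
    public static void main(String[] args) throws Exception {
        // jt400 (IBM Toolbox for Java) JDBC driver; requires jt400.jar on the classpath
        Class.forName("com.ibm.as400.access.AS400JDBCDriver");

        // Placeholder host, library, and credentials
        try (Connection con = DriverManager.getConnection(
                "jdbc:as400://my-as400-host;libraries=MYLIB", "user", "password");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(
                     "SELECT * FROM MYLIB.MYTABLE FETCH FIRST 10 ROWS ONLY")) {
            while (rs.next()) {
                // Print the first column of each of the sample rows
                System.out.println(rs.getString(1));
            }
        }
    }
}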

Thanks a ton!

Another UI/UX Analyzer bug in IE9

I have noticed another bug in the Analyzer in IE9. In the themes I have tested (Crystal, Onyx), the 'available fields' column cannot be scrolled to the bottom of the list. See attached: analyzer-IE9-column-scrolling-bug.jpg
This behavior occurs when the application is opened within the Pentaho application frame, and not when opened as a separate window.

Rolling of customer data one year ago

Hello members,

I have a requirement where I am supposed to generate a report from the Postgres orders table, and this report involves rolling customer metrics over the previous 1 year or 6 months.

Below is the schema:

Code:

create table ord    (
      cust_id VARCHAR(30),
      ord_id VARCHAR(30),
      item_qty int,
      item_iml_ship_qty int,
      item_extended_cost_amt numeric(18,2),
      item_extended_actual_price_amt numeric(18,2),
      ord_submitted_date date
      );
    INSERT INTO ord(cust_id, ord_id, item_qty,item_iml_ship_qty, item_extended_cost_amt, item_extended_actual_price_amt, ord_submitted_date)
    SELECT 'abcd1234', 'ord12034', 1, 1, 40, 100, '2011-01-01'::DATE
    UNION
    SELECT 'abcd1234', 'ord123457', 4, 4, 50, 100, '2009-10-12'::DATE
    UNION
    SELECT 'abcd1235', 'ord123458', 1, 1, 50, 120, '2010-10-01'::DATE
    UNION
    SELECT 'abcd1235', 'ord123459', 4, 4, 50, 100, '2010-12-31'::DATE
    UNION
    SELECT 'abcd1235', 'ord123467', 5, 5, 20, 130, '2012-01-01'::DATE
    UNION
    SELECT 'abcd1239', 'ord123487', 4, 4, 50, 100, '2013-07-01'::DATE
    UNION
    SELECT 'abcd1239', 'ord123454', 3, 3, 50, 80, '2014-01-01'::DATE
    UNION
    SELECT 'abcd1239', 'ord123456', 2, 2, 30, 60, '2014-06-01'::DATE
    UNION
    SELECT 'abcd1234', 'ord123456', 1, 1, 50, 100, '2014-08-01'::DATE;


This is the query that I have generated:

Code:

    WITH ord_cte AS
    (
    SELECT EXTRACT(YEAR FROM o.ord_submitted_date) AS ord_yy,
           o.cust_id,
           o.ord_id,
           count(DISTINCT o.cust_id) AS unique_pkey_customer_count,
           COALESCE(sum(
               CASE
                   WHEN COALESCE(o.item_qty, 0) >= COALESCE(o.item_iml_ship_qty, 0) THEN COALESCE(o.item_qty, 0)
                   ELSE COALESCE(o.item_iml_ship_qty, 0)
               END), 0::bigint) AS "Unit_Sales_Count",
           COALESCE(sum(o.item_extended_cost_amt), 0::numeric) AS "Lifetime COS",
           COALESCE(sum(o.item_extended_actual_price_amt), 0::numeric)::numeric(18,2) AS "Gross_Revenue_Amt",
           COALESCE(sum(o.item_extended_actual_price_amt) - sum(o.item_extended_cost_amt), 0::numeric)::numeric(18,2) AS "Gross_Profit_Amt",
           count(DISTINCT o.ord_id) AS "Total_LifeTime_No_Of_Orders",
           COALESCE(count(DISTINCT o.ord_id)::double precision / count(DISTINCT o.cust_id)::double precision, 0::double precision) AS "Lifetime_Ord_Per_Cust"
      FROM ord o
      LEFT JOIN ord d ON d.ord_submitted_date BETWEEN o.ord_submitted_date - INTERVAL '365 days' AND o.ord_submitted_date
     GROUP BY o.ord_submitted_date, o.cust_id, o.ord_id
    )
    SELECT * FROM ord_cte;


My code might look complicated, but I am required to put this in Kettle, which wouldn't be a problem; what I need is some guidance on how to calculate the average one-year spend and the average six-month spend per customer.

I'd really appreciate your help.

Thanks,

Ron

expanded sub-table search box loses focus after each character typed

When a table is the target of an expansion in the datatable component, the search box in the 'subtable' loses focus after each character typed.

Can someone help me verify this?

To replicate:

1. Create a dashboard.
2. Create a table.
3. Make the table expandable.
4. Create another table with search enabled.
5. Click on a row to expand and show the second table.

Every time you type a character in the search box of the second table, the box loses focus as the search completes.

Obviously more is involved than the steps I listed, but creating a tutorial on expanding tables has been done before.

fuzzy and exact match combined?

I have two lists (one from a MySQL database); both have a country code and a string field (say, organisation). I want to match 'same' organisations. An organisation is the same if the country code is the same and the names are almost the same. So I want to perform a fuzzy match that takes the country into consideration. Any suggestions on how to do it? (I tried a fuzzy match with 'Get closer value' unchecked, but that did not result in multiple rows; I have already filed a JIRA.)

Not Able To Upload XMI file To BI-Server

Hello Xpertz....

I have already posted this thread in the Metadata Editor forum, but I was not able to figure out which forum I should post to, so I am posting it here again.

I have created a domain file in Pentaho Metadata Editor and have published that file to the server successfully, but I am facing an error while uploading the XMI file to the server.

Import File Level Message
/home/ord.xmi ERROR org.pentaho.platform.plugin.services.importer.PlatformImportException: Bundle missing required domain-id property


Please provide me with a solution, as I am stuck at this point.

Thanks in advance

Sparkl error handling

Hello

I've been trying the new Sparkl plugin creator. I've successfully created and consumed a Kettle endpoint, and I'm wondering how a transformation exception is handled by the Sparkl plugin; upon a runtime error, the transformation URL endpoint shows a blank page in the browser.
Is there any way to get the Kettle error log on an exception, something like CentralLogStore.getAppender().getBuffer(trans.getLogChannelId(), false).toString(), as the plugin output on the error event? Alternatively, is there any way to show the error's root cause to the Sparkl plugin's end user?
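For context, this is roughly what I have in mind using the plain Kettle API, outside of Sparkl (a sketch only; "endpoint.ktr" is a placeholder path, and I don't know where Sparkl would let me hook this in):

Code:

import org.pentaho.di.core.KettleEnvironment;
import org.pentaho.di.core.logging.CentralLogStore;
import org.pentaho.di.trans.Trans;
import org.pentaho.di.trans.TransMeta;

public class RunWithErrorLog {
    public static void main(String[] args) throws Exception {
        KettleEnvironment.init();

        // Placeholder transformation path standing in for the Sparkl endpoint's .ktr
        TransMeta meta = new TransMeta("endpoint.ktr");
        Trans trans = new Trans(meta);

        trans.execute(null);
        trans.waitUntilFinished();

        if (trans.getErrors() > 0) {
            // Pull the buffered log lines for this transformation's log channel;
            // this is what I'd like to return to the browser instead of a blank page.
            String log = CentralLogStore.getAppender()
                    .getBuffer(trans.getLogChannelId(), false).toString();
            System.err.println(log);
        }
    }
}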

Thanks

Not able to set max memory to more than 1400M for Weka (3.6 or 3.7), 64-bit Windows 8

Hello,

I am trying to allot more memory to Weka 3.6.11 or 3.7.11. Both are installed as 64-bit on Windows 8 64-bit. The system has 16 GB of RAM and an 8-core processor.

For that, I am editing the RunWeka.ini file in the Weka installation folder. In that file, the default value was #maxheap=1024M.

I tried to change it to 4024M, but Weka won't start.

After a lot of trial and error, gradually reducing the value, the maximum I can set is #maxheap=1400M for Weka to start.

Can somebody help me understand the above behavior?
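In case it is relevant: a ceiling of about 1.4 GB is what I would expect from a 32-bit JVM rather than a 64-bit one, so here is a tiny diagnostic I can run with the same java.exe that RunWeka.ini points to (just a sketch to check which JVM is actually being used):

Code:

public class JvmCheck {
    public static void main(String[] args) {
        // 32 vs. 64-bit data model of the running JVM (Sun/Oracle JVMs)
        System.out.println("data model: " + System.getProperty("sun.arch.data.model"));
        System.out.println("os.arch:    " + System.getProperty("os.arch"));
        // Maximum heap the running JVM will actually allow, in MB
        System.out.println("max heap:   "
                + Runtime.getRuntime().maxMemory() / (1024 * 1024) + " MB");
    }
}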

Thanks,

Ajit Singh

MongoDB Query Aggregation pipeline Feature

Hi,

On Pentaho PDI 5.1.0, we are using a MongoDB Input step connecting to MongoDB 2.6.4 and using the MongoDB query aggregation pipeline feature with $unwind, but it throws this error:

MongoDB Input.0 - { "serverUsed" : "localhost/127.0.0.1:****" , "errmsg" : "exception: aggregation result exceeds maximum document size (16MB)" , "code" : 16389 , "ok" : 0.0}

Has the MongoDB Input step been fixed so that it is not subject to the 16MB aggregation result limit?

Thanks ..

Weka J48 Stopping Criteria

Hello,

I am a beginner WEKA user. I have been searching for a solution to my problem for a week but could not find any answer.

Here is my question:
Is there any way to set accuracy as the stopping criterion when building the J48 tree?
I want WEKA to stop building the J48 tree when it reaches, for instance, 80% accuracy. Is this possible?
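For reference, the closest controls I have found are J48's pruning options (confidence factor and minimum instances per leaf) rather than an accuracy threshold; the only thing I can do so far is measure accuracy after building, roughly like this sketch with the Weka Java API ("data.arff" is a placeholder file name):

Code:

import java.util.Random;

import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class J48Accuracy {
    public static void main(String[] args) throws Exception {
        // Placeholder dataset; the last attribute is assumed to be the class
        Instances data = DataSource.read("data.arff");
        data.setClassIndex(data.numAttributes() - 1);

        J48 tree = new J48();
        tree.setConfidenceFactor(0.25f); // pruning confidence, not an accuracy target
        tree.setMinNumObj(2);            // minimum instances per leaf

        // Estimate accuracy for this parameter setting with 10-fold cross-validation
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(tree, data, 10, new Random(1));
        System.out.printf("accuracy: %.2f%%%n", eval.pctCorrect());
    }
}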

I would appreciate it if you could help me with this; I really need an urgent answer.

Thank you a lot in advance.
Asli

How to send detailed logging information by email when an error occurs during job scheduling

In Pentaho PDI (Kettle) 5.1, after scheduling a job, I have set up a Mail entry in the job so that if the job fails, an email with the logging information is sent, and if the job succeeds, the success logging information is sent. I want to see the detailed log information of what happens during scheduling. Instead of attaching the log file, the information should be sent in the email itself. How do I set that up? Please help me with this issue.

E:\PENTAHO\data-integration\Kitchen.bat /file:E:\PENTAHO\rml_app_profile.kjb ./rml_app_profile.kjb.log


Database Connection Test

I am getting the error:

Error occured while trying to connect to the database


Driver class 'oracle.jdbc.driver.OracleDriver' could not be found, make sure the 'Oracle' driver (jar file) is installed.
oracle.jdbc.driver.OracleDriver


I am on a Mac running Yosemite. I am running SQLDeveloper, which works fine. I tried following an older link to get the JDBC driver from Oracle, but it's outdated.

How do I get the driver to fix this issue?
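If it helps anyone diagnose this, a minimal standalone check that the Oracle thin driver jar is actually on the classpath would look roughly like this (host, SID, and credentials are placeholders; the ojdbc jar itself has to be downloaded from Oracle and, for PDI, typically dropped into data-integration/lib):

Code:

import java.sql.Connection;
import java.sql.DriverManager;

public class OracleDriverCheck {
    public static void main(String[] args) throws Exception {
        // Fails with ClassNotFoundException if the ojdbc jar is not on the classpath
        Class.forName("oracle.jdbc.driver.OracleDriver");

        // Placeholder host, SID, and credentials
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@localhost:1521:XE", "user", "password")) {
            System.out.println("Connected: " + !con.isClosed());
        }
    }
}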

Thanks!
Bill

Unable to Query Hive db

Hi,

I've downloaded the AWS Hive JDBC driver following these instructions.
http://docs.aws.amazon.com/ElasticMa...DBCDriver.html

Hadoop distribution:Amazon 2.4.0
Applications:Hive 0.13.1, Pig 0.12.0, Hue
Hive version: 0.13.1

I've copied the three JDBC jar files into report-designer\lib\jdbc.

If I run the Pentaho Report Designer wizard, set up a data source (using an SSH tunnel and local port forwarding as per the AWS instructions), and test the connection, I get:
Connection to database is OK

However, when I try to use the data source by previewing a static query, I get an exception:
org.pentaho.reporting.engine.classic.core.ReportDataFactoryException: Failed at query: select * from my_table
Caused by: java.sql.SQLException: Driver Manager returned no connection. Your java-implementation is weird.

Any ideas?

Mike

New GENERIC database connection add error

Hi.
Hi.
I've been using PDI since 4.3, always with a Pervasive DB (10.30) connected via JDBC as a Generic database.
I've upgraded my PDI to 5.2.
In the last few days I tried to connect to my repository from another PC and had problems with it: http://forums.pentaho.com/showthread...-to-repository
I decided to find where the problem is, so I started with a new repository (on PostgreSQL, as before). On the new repository I have another problem: when I try to add a new database connection to my Pervasive DB using JDBC and the Generic database type, it doesn't save it to the repository database. Everything works when creating the connection and testing it, but after I close the connection properties and open them again, the connection URL and driver class name are empty.

Zrzut_20141124_105730.jpg

After reload:
Zrzut_20141124_105748.jpg