Channel: Pentaho Community Forums
Viewing all 16689 articles

Reporting bug on second print

Hi to all,
there's a strange bug in the latest BA-SERVER platform (tested on 7.1 and 8.1).

Steps to reproduce:
- publish a report with a parameter and set the default output type to Excel (the same happens with the other file output types: "application/rtf", "application/vnd.ms-excel", "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet", "text/csv", "mime-message/text/html")
- access the BA server, but not through localhost:8080 (for example 10.232.1.49:8080/pentaho)
- open the report and set the parameter without touching the output format
- click View Report to download the XLS file
- change the parameter and click View Report again
The result is a white page and no file is downloaded; you have to refresh the page to get back to the Pentaho home.

Strangely enough, if you access Pentaho through localhost it works with the latest version of Firefox, but not with Chrome.
If you change the output type to a non-file output (for example paginated HTML) and click View Report, then change back to Excel and click View Report again, it works every time.

The problem is entirely in the HTML/JS page, because the server does not log anything: you can clearly see that the report is fetched through some GET requests plus a final POST that starts the file download.
On the second run the report is requested correctly, but the final POST is never sent by the client (the web page).

I think the error is in reportviewer-app.js, maybe around where "_isDownloadableFormat" is called and the timeout is checked...
In BA-SERVER 6.1 everything works well; reportviewer-app.js was completely refactored in version 7, which is when the problem was introduced.

Any hints for fixing this?

Thanks to all

How to remove extra pipes in text sources

Hello,

I hope someone can help me.

I have a text source file, let's say with this structure, separated by pipes (|):

ID|COLOR|MODEL|VEHICLE

I need to save the data to a table, but sometimes the data comes with extra pipes, which shifts the data by one or several places, for example:

Code:

1|RED|2014|VW

2||RED|2013|GM

3|BLUE||2017||||GM

4|GOLD||2016|||GM|

5|GOLD|  |2016| ||GM|

Is there a way to remove those extra pipes so the data lines up correctly?
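A minimal sketch of the cleanup logic in Python (the same idea would work in a scripting step such as Modified Java Script Value: read each line as one field and re-split it). It assumes a legitimate field is never empty or blank; if blank fields can be valid data, this approach would silently drop them:

```python
def clean_row(line):
    """Split a pipe-delimited line and drop empty or whitespace-only
    fields introduced by stray extra pipes.

    Assumption: a real field is never empty/blank, so every empty or
    whitespace-only field can be treated as an extra pipe artifact.
    """
    return [field for field in line.split("|") if field.strip()]
```

For example, `clean_row("3|BLUE||2017||||GM")` yields `["3", "BLUE", "2017", "GM"]`, which lines up with the ID|COLOR|MODEL|VEHICLE layout again.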

Any help would be appreciated!

Thanks!

java.net.SocketException: Connection reset error when trying to connect to Salesforce

Hello
My organisation currently uses Pentaho Kettle 5.0.1 to upload and map data into Salesforce.
We have been using the TLS 1.2 encryption protocol since last November, and the Java version is up to date.
Since the end of April we have been unable to connect to Salesforce, getting the error “java.net.SocketException: Connection reset”.
Any ideas on how to resolve this issue?
Thanks
Jeremy

ERROR: Kitchen can't continue because the job couldn't be loaded

Hi everybody,
I have an issue running Kitchen on a Unix machine.

I currently have the following folder structure (very simplistic, but that's the core of it):
/data-integration
  kitchen.sh
  /.kettle
    repositories.xml
  /repo
    Job1.kjb
The repository is configured as follows:
<repository>
  <id>FileRepo</id>
  <name>my-repo</name>
  <description>File repository</description>
  <is_default>true</is_default>
  <base_directory>/data-integration/repo</base_directory>
  <read_only>N</read_only>
  <hides_hidden_files>Y</hides_hidden_files>
</repository>

I'm trying to run the Job with Kitchen as following:
sh kitchen.sh -repo="my-repo" -job="Job1.kjb" -dir="/repo"

This all seems correct according to the user guide on the website, but I receive the error message in the title.

Any suggestion?

Many thanks in advance!!!

How much Memory required to generate report from 2 Billion records

Hello

I'd like to know how much memory is required to generate a report from 2 billion records.

I have 2 billion records, but the MDX query doesn't complete within a few minutes. Is there any performance tuning I can do?

I am using a Postgres DB with work_mem set to 12MB.

Dynamic Update

I have a file containing 150,000 UPDATE statements. What is the best way to execute it, considering performance and other factors?

I tried the Execute SQL Script step, which is pretty slow: 30,000 records took 75 minutes.

Sample as below

update table1 set f1='n',f2='Harris',f3='Newyork' where id='ID1';
update table1 set f2='n',f3='US',f4='Newyork' where id='ID2';
update table1 set f1='n',f3='23456',f6='atlanta' where id='ID3';
update table1 set f3='n',f6='citizen',f10='florida' where id='ID4';
.
.
.
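The usual advice here is to commit in batches rather than once per statement, since per-statement commits are often what makes bulk updates this slow. As an illustration of the idea (not Pentaho code), here is a minimal Python DB-API sketch; sqlite3 is used only as a stand-in for the real database:

```python
import sqlite3  # stand-in; any DB-API 2.0 connection works the same way

def run_updates_batched(conn, statements, batch_size=1000):
    """Execute a list of UPDATE statements, committing once per batch.

    Committing per batch instead of per statement is typically the
    biggest performance win when a tool autocommits every row.
    """
    cur = conn.cursor()
    for start in range(0, len(statements), batch_size):
        for stmt in statements[start:start + batch_size]:
            cur.execute(stmt)
        conn.commit()  # one commit per batch, not per statement
```

In PDI terms the equivalent lever is the commit size on the database step; another option worth considering is rewriting the file into parameterized updates so the database can reuse one prepared statement.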
Please advise.

How to change default admin password on BI Server?

Hi there.

I am trying to update the admin password through the PUC Administration area, but the change has no effect. I am using version 8.0 and couldn't find an answer on Google for the newer versions, only for the 3.* versions.

Thanks in advance.

Hadoop shim ConfigurationException when running Kitchen

Hi everybody,
I'm trying to run Kitchen on a Unix server. I have a file repository which seems to be recognized correctly, but I can't run the job because of this error:

java.lang.NoClassDefFoundError: org/pentaho/hadoop/shim/ConfigurationException
org/pentaho/hadoop/shim/ConfigurationException
ERROR: Kitchen can't continue because the job couldn't be loaded.

For reference, I developed the jobs and transformations with Kettle v8, while the server has Kettle v7; I strongly hope this is not the real issue.

Does anyone have any clue about this?

Thanks in advance!

Protovis Component + RequireJS + version 8.0 CE = Broken

When using the Protovis Component with RequireJS in version 8.0 CE, the component is rendered but the spinning wait icon overlay never goes away and dashboard building halts. I had this Protovis Component's priority set to execute first, and none of the subsequent components executed (as shown in the console log). In fact, the Post Execution function for the Protovis Component is never executed. The same dashboard worked fine in version 7.1 CE.

If RequireJS support is removed, the Protovis Component and the dashboard work properly.

Responsive Bar Chart

Hi,
I am trying to plot a responsive bar chart in PRD.
I am able to plot the data on the chart,
but as my data is very large I am not able to see it properly.
I need a zoom-in / zoom-out option in this case.
Kindly support.
Have a good day.


Thanks

SalesForceInput

Hi,

I have a question about the timeout of the Salesforce Input step.

My jobs are failing with the error below.
com.sforce.ws.ConnectionException: Request to https://xxxxxxxxxxxxxxxx/services/Soap/u/24.0/xxxxxxxxxx timed out. TimeTaken=60134 ConnectionTimeout=60000 ReadTimeout=60000

Could you please let me know whether the default timeout (60000 ms) in the step's Additional Fields applies only to establishing the connection with Salesforce, or to both establishing the connection and retrieving the data from Salesforce?

This is what the Help description says:
Time out
Configure the timeout interval in milliseconds before the step times out.
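I can't say for certain what the step does internally (that would need its source), but the error message itself shows both ConnectionTimeout=60000 and ReadTimeout=60000, which suggests the configured value is applied to both phases. The general distinction between the two timeouts, sketched in Python (illustration only, not Pentaho code):

```python
import socket

def read_with_timeouts(host, port, connect_timeout, read_timeout):
    """Illustrates the two distinct client timeouts:
    - the connect timeout bounds establishing the TCP connection;
    - the read timeout bounds each subsequent wait for response data.
    Returns the received bytes, or None if the server connected but
    sent nothing within read_timeout.
    """
    sock = socket.create_connection((host, port), timeout=connect_timeout)
    sock.settimeout(read_timeout)
    try:
        return sock.recv(1024)  # raises socket.timeout if no data arrives in time
    except socket.timeout:
        return None
    finally:
        sock.close()
```

In your stack trace TimeTaken=60134 slightly exceeds ReadTimeout=60000, so the failure looks like a slow response after connecting rather than a failure to connect.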

Pentaho 6.1
Please advise.

Pentaho Enterprise Version Comparisons

Hi Team,

I've been working with Pentaho 6.3 for a while now. As the latest version is 8.0, I would like to know what enhancements, updates, or new steps the latest version provides in comparison. Can someone please shed some light?

Thanks In Advance,
Santosh

Mail step Attached files not working

Hi everybody,

For logging purposes, I need to send some log files by mail. In the Mail step used when errors occur, I've checked the "Specify logfile" option with a log level selected.

The mail is sent correctly, but no file is attached.

Is anybody else facing the same issue? I'm using PDI 7.1.

Thank you in advance.

Group By - What is best way to Concatenate strings separated by , - but UNIQUE vals?

All -

I use Group By and have many aggregate fields that contain "duplicate" values. Since I have multiple columns that aggregate arrays, I cannot simply sort and use a Unique Rows step.

Example:

hostname,column1, column2,column3
host123,"a,b,b,c,d,d,","e,e,f,f,g","x,y,y,z"
...

I would like to have the following:

hostname,column1, column2,column3
host123,"a,b,c,d,","e,f,g","x,y,z"
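The core cleanup for each aggregate field, sketched in Python (the same logic could live in a scripting step applied per column; note it also normalizes away the empty entry left by a trailing comma):

```python
def dedupe_csv(value):
    """Collapse duplicates in a comma-separated aggregate, keeping
    first-seen order and dropping empty entries from trailing commas."""
    seen = []
    for item in value.split(","):
        item = item.strip()
        if item and item not in seen:
            seen.append(item)
    return ",".join(seen)
```

For example, `dedupe_csv("a,b,b,c,d,d,")` gives `"a,b,c,d"`, and the function can be applied to column1, column2, and column3 independently after the Group By has concatenated them.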

Thanks in advance for suggestions!

KP

How to debug a set variable in spoon without error (parent job not available)

I have a transformation that does some validation. If the result is true, a row is sent to a Set Variables step. That step sets a constant field 'TRUE' with the value 'Y' into a variable. If the input does not validate, the step receives no rows and the variable keeps its default value 'N'.

This works fine in normal execution, but whenever this individual transformation is debugged in Spoon, the Set Variables step crashes with:

Can't set variable [VariableName] on parent job: the parent job is not available

Any good solution to this?

Carte Explanation

Hi Guys,
I have a little question that I couldn't find answered on the web.
I need to understand how Carte works technically.
Can you help me with this?

(Really, I'm confused about whether Carte works like an MPP system or not.)

Regards

Hadoop Shim & User Home Issues

Hi All,
I'm facing errors while accessing the Hadoop file system. When I test the shim, spoon.log shows the following.

2018/06/06 07:49:13 - Named cluster: quickstart - Since no port is set, we assume that High Availability has been enabled for quickstart.cloudera:8020.
2018/06/06 07:49:13 - Named cluster: quickstart - User Home Directory Access: FATAL We couldn't run test User Home Directory Access.
2018/06/06 07:49:13 - Named cluster: quickstart - null
2018/06/06 07:49:13 - Named cluster: quickstart - java.lang.reflect.UndeclaredThrowableException
2018/06/06 07:49:13 - Named cluster: quickstart - at com.sun.proxy.$Proxy68.getFileSystem(Unknown Source)
2018/06/06 07:49:13 - Named cluster: quickstart - at com.pentaho.big.data.bundles.impl.shim.hdfs.HadoopFileSystemFactoryImpl$1.getFileSystem(HadoopFileSystemFactoryImpl.java:76)
2018/06/06 07:49:13 - Named cluster: quickstart - at com.pentaho.big.data.bundles.impl.shim.hdfs.HadoopFileSystemImpl.getFileSystem(HadoopFileSystemImpl.java:219)
2018/06/06 07:49:13 - Named cluster: quickstart - at com.pentaho.big.data.bundles.impl.shim.hdfs.HadoopFileSystemImpl.getHomeDirectory(HadoopFileSystemImpl.java:145)
2018/06/06 07:49:13 - Named cluster: quickstart - at org.pentaho.big.data.impl.cluster.tests.hdfs.ListDirectoryTest.runTest(ListDirectoryTest.java:98)
2018/06/06 07:49:13 - Named cluster: quickstart - at Proxyf676ed56_bc69_4044_8ecd_a061ddb7c077.runTest(Unknown Source)
2018/06/06 07:49:13 - Named cluster: quickstart - at Proxy90a9e46e_d4b3_40d3_b6d0_c7b4df36b92d.runTest(Unknown Source)
2018/06/06 07:49:13 - Named cluster: quickstart - at org.pentaho.runtime.test.test.impl.RuntimeTestDelegateWithMoreDependencies.runTest(RuntimeTestDelegateWithMoreDependencies.java:71)
2018/06/06 07:49:13 - Named cluster: quickstart - at org.pentaho.runtime.test.impl.RuntimeTestRunner.runTest(RuntimeTestRunner.java:180)
2018/06/06 07:49:13 - Named cluster: quickstart - at org.pentaho.runtime.test.impl.RuntimeTestRunner.access$000(RuntimeTestRunner.java:49)
2018/06/06 07:49:13 - Named cluster: quickstart - at org.pentaho.runtime.test.impl.RuntimeTestRunner$1.run(RuntimeTestRunner.java:237)
2018/06/06 07:49:13 - Named cluster: quickstart - at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
2018/06/06 07:49:13 - Named cluster: quickstart - at java.util.concurrent.FutureTask.run(Unknown Source)
2018/06/06 07:49:13 - Named cluster: quickstart - at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
2018/06/06 07:49:13 - Named cluster: quickstart - at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
2018/06/06 07:49:13 - Named cluster: quickstart - at java.lang.Thread.run(Unknown Source)
2018/06/06 07:49:13 - Named cluster: quickstart - Caused by: java.lang.reflect.InvocationTargetException
2018/06/06 07:49:13 - Named cluster: quickstart - at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2018/06/06 07:49:13 - Named cluster: quickstart - at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
2018/06/06 07:49:13 - Named cluster: quickstart - at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
2018/06/06 07:49:13 - Named cluster: quickstart - at java.lang.reflect.Method.invoke(Unknown Source)
2018/06/06 07:49:13 - Named cluster: quickstart - at org.pentaho.hadoop.shim.HadoopConfiguration$1.invoke(HadoopConfiguration.java:146)
2018/06/06 07:49:13 - Named cluster: quickstart - ... 16 more
2018/06/06 07:49:13 - Named cluster: quickstart - Caused by: java.lang.NullPointerException
2018/06/06 07:49:13 - Named cluster: quickstart - at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367)
2018/06/06 07:49:13 - Named cluster: quickstart - at org.apache.hadoop.fs.FileSystem$1.run(FileSystem.java:176)
2018/06/06 07:49:13 - Named cluster: quickstart - at org.apache.hadoop.fs.FileSystem$1.run(FileSystem.java:173)
2018/06/06 07:49:13 - Named cluster: quickstart - at java.security.AccessController.doPrivileged(Native Method)
2018/06/06 07:49:13 - Named cluster: quickstart - at javax.security.auth.Subject.doAs(Unknown Source)
2018/06/06 07:49:13 - Named cluster: quickstart - at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
2018/06/06 07:49:13 - Named cluster: quickstart - at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:173)
2018/06/06 07:49:13 - Named cluster: quickstart - at org.pentaho.hadoop.shim.common.CommonHadoopShim.getFileSystem(CommonHadoopShim.java:228)
2018/06/06 07:49:13 - Named cluster: quickstart - at org.pentaho.hadoop.shim.common.delegating.DelegatingHadoopShim.getFileSystem(DelegatingHadoopShim.java:101)
2018/06/06 07:49:13 - Named cluster: quickstart - ... 21 more
2018/06/06 07:49:13 - Named cluster: quickstart - Root Directory Access: FATAL We couldn't run test Root Directory Access.
2018/06/06 07:49:13 - Named cluster: quickstart - null
2018/06/06 07:49:13 - Named cluster: quickstart - java.lang.reflect.UndeclaredThrowableException
2018/06/06 07:49:13 - Named cluster: quickstart - at com.sun.proxy.$Proxy68.getFileSystem(Unknown Source)
2018/06/06 07:49:13 - Named cluster: quickstart - at com.pentaho.big.data.bundles.impl.shim.hdfs.HadoopFileSystemFactoryImpl$1.getFileSystem(HadoopFileSystemFactoryImpl.java:76)
2018/06/06 07:49:13 - Named cluster: quickstart - at com.pentaho.big.data.bundles.impl.shim.hdfs.HadoopFileSystemImpl.getFileSystem(HadoopFileSystemImpl.java:219)
2018/06/06 07:49:13 - Named cluster: quickstart - at com.pentaho.big.data.bundles.impl.shim.hdfs.HadoopFileSystemImpl$9.call(HadoopFileSystemImpl.java:126)
2018/06/06 07:49:13 - Named cluster: quickstart - at com.pentaho.big.data.bundles.impl.shim.hdfs.HadoopFileSystemImpl$9.call(HadoopFileSystemImpl.java:124)
2018/06/06 07:49:13 - Named cluster: quickstart - at com.pentaho.big.data.bundles.impl.shim.hdfs.HadoopFileSystemImpl.callAndWrapExceptions(HadoopFileSystemImpl.java:208)
2018/06/06 07:49:13 - Named cluster: quickstart - at com.pentaho.big.data.bundles.impl.shim.hdfs.HadoopFileSystemImpl.listStatus(HadoopFileSystemImpl.java:124)
2018/06/06 07:49:13 - Named cluster: quickstart - at org.pentaho.big.data.impl.cluster.tests.hdfs.ListDirectoryTest.runTest(ListDirectoryTest.java:103)
2018/06/06 07:49:13 - Named cluster: quickstart - at Proxyf676ed56_bc69_4044_8ecd_a061ddb7c077.runTest(Unknown Source)
2018/06/06 07:49:13 - Named cluster: quickstart - at Proxy90a9e46e_d4b3_40d3_b6d0_c7b4df36b92d.runTest(Unknown Source)
2018/06/06 07:49:13 - Named cluster: quickstart - at org.pentaho.runtime.test.test.impl.RuntimeTestDelegateWithMoreDependencies.runTest(RuntimeTestDelegateWithMoreDependencies.java:71)
2018/06/06 07:49:13 - Named cluster: quickstart - at org.pentaho.runtime.test.impl.RuntimeTestRunner.runTest(RuntimeTestRunner.java:180)
2018/06/06 07:49:13 - Named cluster: quickstart - at org.pentaho.runtime.test.impl.RuntimeTestRunner.access$000(RuntimeTestRunner.java:49)
2018/06/06 07:49:13 - Named cluster: quickstart - at org.pentaho.runtime.test.impl.RuntimeTestRunner$1.run(RuntimeTestRunner.java:237)
2018/06/06 07:49:13 - Named cluster: quickstart - at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
2018/06/06 07:49:13 - Named cluster: quickstart - at java.util.concurrent.FutureTask.run(Unknown Source)
2018/06/06 07:49:13 - Named cluster: quickstart - at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
2018/06/06 07:49:13 - Named cluster: quickstart - at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
2018/06/06 07:49:13 - Named cluster: quickstart - at java.lang.Thread.run(Unknown Source)
2018/06/06 07:49:13 - Named cluster: quickstart - Caused by: java.lang.reflect.InvocationTargetException
2018/06/06 07:49:13 - Named cluster: quickstart - at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2018/06/06 07:49:13 - Named cluster: quickstart - at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
2018/06/06 07:49:13 - Named cluster: quickstart - at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
2018/06/06 07:49:13 - Named cluster: quickstart - at java.lang.reflect.Method.invoke(Unknown Source)
2018/06/06 07:49:13 - Named cluster: quickstart - at org.pentaho.hadoop.shim.HadoopConfiguration$1.invoke(HadoopConfiguration.java:146)
2018/06/06 07:49:13 - Named cluster: quickstart - ... 19 more
2018/06/06 07:49:13 - Named cluster: quickstart - Caused by: java.lang.NullPointerException
2018/06/06 07:49:13 - Named cluster: quickstart - at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367)
2018/06/06 07:49:13 - Named cluster: quickstart - at org.apache.hadoop.fs.FileSystem$1.run(FileSystem.java:176)
2018/06/06 07:49:13 - Named cluster: quickstart - at org.apache.hadoop.fs.FileSystem$1.run(FileSystem.java:173)
2018/06/06 07:49:13 - Named cluster: quickstart - at java.security.AccessController.doPrivileged(Native Method)
2018/06/06 07:49:13 - Named cluster: quickstart - at javax.security.auth.Subject.doAs(Unknown Source)
2018/06/06 07:49:13 - Named cluster: quickstart - at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
2018/06/06 07:49:13 - Named cluster: quickstart - at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:173)
2018/06/06 07:49:13 - Named cluster: quickstart - at org.pentaho.hadoop.shim.common.CommonHadoopShim.getFileSystem(CommonHadoopShim.java:228)
2018/06/06 07:49:13 - Named cluster: quickstart - at org.pentaho.hadoop.shim.common.delegating.DelegatingHadoopShim.getFileSystem(DelegatingHadoopShim.java:101)
2018/06/06 07:49:13 - Named cluster: quickstart - ... 24 more
2018/06/06 07:49:13 - Named cluster: quickstart - Verify User Home Permissions: SKIPPED This test was skipped because User Home Directory Access was not successful.
2018/06/06 07:49:13 - Named cluster: quickstart - The Verify User Home Permissions test was skipped because test [hadoopFileSystemListHomeDirectoryTest] was not successful.
2018/06/06 07:49:13 - Named cluster: quickstart - Cluster Test: Map Reduce
2018/06/06 07:49:13 - Named cluster: quickstart - Ping Job Tracker / Resource Manager: FATAL Hostname is required.



I'm using the Cloudera QuickStart VM for testing.
OS: Windows
Pentaho: 8 Community Edition

Any suggestions will be helpful.


Regards,
G.Sujay.


Pentaho : BUG admin UI wont display users list

I replicated a Pentaho installation on new virtual machines (one Tomcat server with Pentaho 5.2 and one PostgreSQL 9.3, exactly the same OS...).

Everything works: we can log in, work, etc. The exception is the admin UI, which gets an empty XML list back when it requests the user list. The API URL called by the UI is https://domain.name/pentaho/api/userroledao/users
Can you help me find the next step to resolve the problem? Thanks

(A screenshot of the XML API result returned to the admin UI was attached here.)

Dashboards - SqlOverJndi parameter error with MariaDB

I get an execution error on a CDE dashboard, using a demo of Pentaho CE 8.1 modified to work with MariaDB. The sql over jndi datasource runs OK without parameters, but adding one results in an error.

It can be reproduced by running the CDA sample test (/public/plugins-samples/CDA/cda-test).

- Pentaho 8.1 CE modified to use MySQL
- MariaDB 10.1.29
- Ubuntu 18.04 64 bits

Does anyone have a suggestion to fix this?

(Thanks for your time reading this)

- Javier

