UnresolvedAddressException with MLlib/Spark - java

When I run MLlib on files with more than one partition in our cluster, I get the following exception:
16/08/14 12:43:23 WARN TaskSetManager: Lost task 2.0 in stage 2.1 (TID 49, da06.qcri.org): FetchFailed(BlockManagerId(3, da08.qcri.org, 33322), shuffleId=0, mapId=5, reduceId=2, message=
org.apache.spark.shuffle.FetchFailedException: Failed to connect to da08.qcri.org:33322
    at org.apache.spark.storage.ShuffleBlockFetcherIterator.throwFetchFailedException(ShuffleBlockFetcherIterator.scala:323)
    at org.apache.spark.storage.ShuffleBlockFetcherIterator.next(ShuffleBlockFetcherIterator.scala:300)
    at org.apache.spark.storage.ShuffleBlockFetcherIterator.next(ShuffleBlockFetcherIterator.scala:51)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:32)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
    at org.apache.spark.util.collection.ExternalAppendOnlyMap.insertAll(ExternalAppendOnlyMap.scala:152)
    at org.apache.spark.Aggregator.combineCombinersByKey(Aggregator.scala:58)
    at org.apache.spark.shuffle.BlockStoreShuffleReader.read(BlockStoreShuffleReader.scala:83)
    at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:98)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Failed to connect to ***.org:33322
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:216)
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:167)
    at org.apache.spark.network.netty.NettyBlockTransferService$$anon$1.createAndStart(NettyBlockTransferService.scala:90)
    at org.apache.spark.network.shuffle.RetryingBlockFetcher.fetchAllOutstanding(RetryingBlockFetcher.java:140)
    at org.apache.spark.network.shuffle.RetryingBlockFetcher.access$200(RetryingBlockFetcher.java:43)
    at org.apache.spark.network.shuffle.RetryingBlockFetcher$1.run(RetryingBlockFetcher.java:170)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    ... 3 more
Caused by: java.nio.channels.UnresolvedAddressException
    at sun.nio.ch.Net.checkAddress(Net.java:123)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:621)
    at io.netty.channel.socket.nio.NioSocketChannel.doConnect(NioSocketChannel.java:209)
    at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.connect(AbstractNioChannel.java:207)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.connect(DefaultChannelPipeline.java:1097)
    at io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:471)
    at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:456)
    at io.netty.channel.ChannelOutboundHandlerAdapter.connect(ChannelOutboundHandlerAdapter.java:47)
    at io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:471)
    at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:456)
    at io.netty.channel.ChannelDuplexHandler.connect(ChannelDuplexHandler.java:50)
    at io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:471)
    at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:456)
    at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:438)
    at io.netty.channel.DefaultChannelPipeline.connect(DefaultChannelPipeline.java:908)
    at io.netty.channel.AbstractChannel.connect(AbstractChannel.java:203)
    at io.netty.bootstrap.Bootstrap$2.run(Bootstrap.java:166)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
    ... 1 more
In the slaves configuration file, I have the nodes listed by IP rather than by hostname. In addition, pinging the machines from the master node by hostname does not show any issue.
Has anyone had a similar issue, or does anyone have an idea how to solve it?
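For what it's worth, an UnresolvedAddressException thrown from sun.nio.ch.Net.checkAddress means the connecting side could not resolve the target hostname at all; here that is an executor resolving da08.qcri.org while fetching shuffle blocks, so a successful ping from the master alone does not rule out DNS or /etc/hosts problems on the workers. A minimal sketch of a resolution check that could be run on each node (the hostnames are taken from the trace above; everything else is illustrative):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

/**
 * Quick DNS/hosts-file check: run this on every node in the cluster.
 * If a hostname fails to resolve on a worker, that worker will hit
 * UnresolvedAddressException when fetching shuffle blocks from it.
 */
public class ResolveCheck {
    public static void main(String[] args) {
        // Hostnames taken from the stack trace; replace with your node list.
        String[] hosts = {"da06.qcri.org", "da08.qcri.org"};
        for (String host : hosts) {
            try {
                InetAddress addr = InetAddress.getByName(host);
                System.out.println(host + " -> " + addr.getHostAddress());
            } catch (UnknownHostException e) {
                System.out.println(host + " -> UNRESOLVED (this node cannot look it up)");
            }
        }
    }
}
```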

Related

Spark sqlContext Exception while doing action on Dataframe

I am getting the error below when doing a count on a Parquet file:
java.lang.NoSuchMethodError: org.apache.parquet.schema.Types$MessageTypeBuilder.addFields([Lorg/apache/parquet/schema/Type;)Lorg/apache/parquet/schema/Types$GroupBuilder;
at org.apache.spark.sql.execution.datasources.parquet.CatalystReadSupport$.clipParquetSchema(CatalystReadSupport.scala:117)
The detailed error is below:
ERROR org.apache.spark.util.SparkUncaughtExceptionHandler - Uncaught exception in thread Thread[Executor task launch worker-4,5,main]
java.lang.NoSuchMethodError: org.apache.parquet.schema.Types$MessageTypeBuilder.addFields([Lorg/apache/parquet/schema/Type;)Lorg/apache/parquet/schema/Types$GroupBuilder;
at org.apache.spark.sql.execution.datasources.parquet.CatalystReadSupport$.clipParquetSchema(CatalystReadSupport.scala:117)
at org.apache.spark.sql.execution.datasources.parquet.CatalystReadSupport.init(CatalystReadSupport.scala:70)
at org.apache.spark.sql.execution.datasources.parquet.SpecificParquetRecordReaderBase.initialize(SpecificParquetRecordReaderBase.java:135)
at org.apache.spark.sql.execution.datasources.parquet.UnsafeRowParquetRecordReader.initialize(UnsafeRowParquetRecordReader.java:130)
at org.apache.spark.sql.execution.datasources.parquet.UnsafeRowParquetRecordReader.tryInitialize(UnsafeRowParquetRecordReader.java:117)
at org.apache.spark.rdd.SqlNewHadoopRDD$$anon$1.<init>(SqlNewHadoopRDD.scala:169)
at org.apache.spark.rdd.SqlNewHadoopRDD.compute(SqlNewHadoopRDD.scala:126)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
I couldn't figure out the issue; the DataFrame has the schema as expected. Can anyone point me in the right direction?
The Parquet-related jars are not present under the lib folder in Spark 1.6.1.
Download the jar from here, unzip the downloaded archive, and place the jar inside the lib folder within the Spark installation, i.e. $SPARK_HOME/lib.
This should solve your problem.
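A NoSuchMethodError on Types$MessageTypeBuilder.addFields usually means an older Parquet jar is shadowing the one Spark expects, so after placing the new jar it can help to confirm which jar the class is actually loaded from. A minimal sketch (only the class name comes from the error above; the rest is illustrative):

```java
import java.security.CodeSource;

// Prints the jar that org.apache.parquet.schema.Types is loaded from,
// showing whether the jar placed in $SPARK_HOME/lib is the one winning.
public class WhichParquet {
    public static void main(String[] args) throws ClassNotFoundException {
        Class<?> c = Class.forName("org.apache.parquet.schema.Types");
        CodeSource src = c.getProtectionDomain().getCodeSource();
        System.out.println(src != null ? src.getLocation() : "bootstrap classpath");
    }
}
```

Run it with the same classpath the executors use so the output reflects what the tasks actually see.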

Spring web project failed to start in Tomcat server environment

I have found other Stack Overflow questions similar to this one and tried to solve my problem from their answers, but nothing worked in my case, so I'm posting a question here.
I have compiled a web project (WAR) using Spring, ZK, and Java. The project workspace is on my laptop, where I use the Eclipse IDE and Tomcat as the web server.
Before deploying the WAR to the server environment, I always try to deploy and run it on my local Eclipse and local Tomcat first.
Everything runs well on my laptop, but in the server environment Tomcat fails to start the application.
My laptop runs Windows 10 with:
**Tomcat**
version 8.5.61
**Java** :
java version "1.8.0_271"
Java(TM) SE Runtime Environment (build 1.8.0_271-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.271-b09, mixed mode)
My server:
**Tomcat**
Version 8.5.29
**Java**
java version "1.8.0_51"
Java(TM) SE Runtime Environment (build 1.8.0_51-b16)
Java HotSpot(TM) 64-Bit Server VM (build 25.51-b03, mixed mode)
**OS**
Distributor ID: SUSE LINUX
Description: SUSE Linux Enterprise Server 11 (x86_64)
Release: 11
Codename: n/a
This is the error when the application is deployed in the server environment:
2021-08-05 13:03:08.116 ERROR 18908 --- [ost-startStop-1] o.s.boot.SpringApplication : Application run failed
java.lang.IllegalStateException: Logback configuration error detected:
ERROR in ch.qos.logback.core.rolling.RollingFileAppender[FILE] - openFile(logs/spring.log,true) call failed. java.io.FileNotFoundException: logs/spring.log (Permission denied)
at org.springframework.boot.logging.logback.LogbackLoggingSystem.loadConfiguration(LogbackLoggingSystem.java:169)
at org.springframework.boot.logging.AbstractLoggingSystem.initializeWithConventions(AbstractLoggingSystem.java:82)
at org.springframework.boot.logging.AbstractLoggingSystem.initialize(AbstractLoggingSystem.java:60)
at org.springframework.boot.logging.logback.LogbackLoggingSystem.initialize(LogbackLoggingSystem.java:117)
at org.springframework.boot.context.logging.LoggingApplicationListener.initializeSystem(LoggingApplicationListener.java:290)
at org.springframework.boot.context.logging.LoggingApplicationListener.initialize(LoggingApplicationListener.java:263)
at org.springframework.boot.context.logging.LoggingApplicationListener.onApplicationEnvironmentPreparedEvent(LoggingApplicationListener.java:226)
at org.springframework.boot.context.logging.LoggingApplicationListener.onApplicationEvent(LoggingApplicationListener.java:199)
at org.springframework.context.event.SimpleApplicationEventMulticaster.doInvokeListener(SimpleApplicationEventMulticaster.java:172)
at org.springframework.context.event.SimpleApplicationEventMulticaster.invokeListener(SimpleApplicationEventMulticaster.java:165)
at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:139)
at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:127)
at org.springframework.boot.context.event.EventPublishingRunListener.environmentPrepared(EventPublishingRunListener.java:75)
at org.springframework.boot.SpringApplicationRunListeners.environmentPrepared(SpringApplicationRunListeners.java:54)
at org.springframework.boot.SpringApplication.prepareEnvironment(SpringApplication.java:347)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:306)
at org.springframework.boot.web.servlet.support.SpringBootServletInitializer.run(SpringBootServletInitializer.java:157)
at org.springframework.boot.web.servlet.support.SpringBootServletInitializer.createRootApplicationContext(SpringBootServletInitializer.java:137)
at org.springframework.boot.web.servlet.support.SpringBootServletInitializer.onStartup(SpringBootServletInitializer.java:91)
at org.springframework.web.SpringServletContainerInitializer.onStartup(SpringServletContainerInitializer.java:171)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5204)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:754)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:730)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:734)
at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:986)
at org.apache.catalina.startup.HostConfig$DeployWar.run(HostConfig.java:1857)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
05-Aug-2021 13:03:08.123 SEVERE [localhost-startStop-1] org.apache.catalina.core.ContainerBase.addChildInternal ContainerBase.addChild: start:
org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Catalina].StandardHost[localhost].StandardContext[/my-aceh]]
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:167)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:754)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:730)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:734)
at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:986)
at org.apache.catalina.startup.HostConfig$DeployWar.run(HostConfig.java:1857)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalStateException: Logback configuration error detected:
ERROR in ch.qos.logback.core.rolling.RollingFileAppender[FILE] - openFile(logs/spring.log,true) call failed. java.io.FileNotFoundException: logs/spring.log (Permission denied)
at org.springframework.boot.logging.logback.LogbackLoggingSystem.loadConfiguration(LogbackLoggingSystem.java:169)
at org.springframework.boot.logging.AbstractLoggingSystem.initializeWithConventions(AbstractLoggingSystem.java:82)
at org.springframework.boot.logging.AbstractLoggingSystem.initialize(AbstractLoggingSystem.java:60)
at org.springframework.boot.logging.logback.LogbackLoggingSystem.initialize(LogbackLoggingSystem.java:117)
at org.springframework.boot.context.logging.LoggingApplicationListener.initializeSystem(LoggingApplicationListener.java:290)
at org.springframework.boot.context.logging.LoggingApplicationListener.initialize(LoggingApplicationListener.java:263)
at org.springframework.boot.context.logging.LoggingApplicationListener.onApplicationEnvironmentPreparedEvent(LoggingApplicationListener.java:226)
at org.springframework.boot.context.logging.LoggingApplicationListener.onApplicationEvent(LoggingApplicationListener.java:199)
at org.springframework.context.event.SimpleApplicationEventMulticaster.doInvokeListener(SimpleApplicationEventMulticaster.java:172)
at org.springframework.context.event.SimpleApplicationEventMulticaster.invokeListener(SimpleApplicationEventMulticaster.java:165)
at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:139)
at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:127)
at org.springframework.boot.context.event.EventPublishingRunListener.environmentPrepared(EventPublishingRunListener.java:75)
at org.springframework.boot.SpringApplicationRunListeners.environmentPrepared(SpringApplicationRunListeners.java:54)
at org.springframework.boot.SpringApplication.prepareEnvironment(SpringApplication.java:347)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:306)
at org.springframework.boot.web.servlet.support.SpringBootServletInitializer.run(SpringBootServletInitializer.java:157)
at org.springframework.boot.web.servlet.support.SpringBootServletInitializer.createRootApplicationContext(SpringBootServletInitializer.java:137)
at org.springframework.boot.web.servlet.support.SpringBootServletInitializer.onStartup(SpringBootServletInitializer.java:91)
at org.springframework.web.SpringServletContainerInitializer.onStartup(SpringServletContainerInitializer.java:171)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5204)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
... 10 more
05-Aug-2021 13:03:08.125 SEVERE [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployWAR Error deploying web application archive [/opt/apache-tomcat_61010/webapps/my-aceh.war]
java.lang.IllegalStateException: ContainerBase.addChild: start: org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Catalina].StandardHost[localhost].StandardContext[/my-aceh]]
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:758)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:730)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:734)
at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:986)
at org.apache.catalina.startup.HostConfig$DeployWar.run(HostConfig.java:1857)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
05-Aug-2021 13:03:08.126 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployWAR Deployment of web application archive [/opt/apache-tomcat_61010/webapps/my-aceh.war] has finished in [7,127] ms
UPDATE 06/08/2021
After setting permissions on spring.log, the "failed to start component" error still comes up.
06-Aug-2021 22:33:41.112 SEVERE [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployWAR Error deploying web application archive [/opt/apache-tomcat_61010/webapps/my-aceh.war]
java.lang.IllegalStateException: ContainerBase.addChild: start: org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Catalina].StandardHost[localhost].StandardContext[/my-aceh]]
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:758)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:730)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:734)
at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:986)
at org.apache.catalina.startup.HostConfig$DeployWar.run(HostConfig.java:1857)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
06-Aug-2021 22:33:41.113 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployWAR Deployment of web application archive [/opt/apache-tomcat_61010/webapps/my-aceh.war] has finished in [7,127] ms
Thank you for your support.
As you can see in your log files, Spring cannot create the file logs/spring.log (relative to the server's working directory). On a usual Tomcat installation (unless your system administrator changed the permissions):
$CATALINA_BASE/logs is writable by Tomcat, but
there is no guarantee that the OS working directory of the server is $CATALINA_BASE, although it often is.
To be on the safe side and not depend on Tomcat's working directory, use the Java system property catalina.base to define the path of the log file:
logging.file.name=${catalina.base:.}/logs/spring.log
The fallback . is used when you start your application in an embedded Tomcat.
Remark: since you use the default spring.log file name, you can also delete the logging.file.name property and configure just a path:
logging.file.path=${catalina.base:.}/logs
From the stack trace it seems that Tomcat is trying to create a file at logs/spring.log inside the Tomcat directory, which the user running Tomcat does not have access to; note the (Permission denied) in the stack trace.
You could create this file and grant write access to the user running Tomcat.
A few possible causes:
Check whether you have permission to read the spring.log file. This is the most common reason for a java.io.FileNotFoundException: (Access is denied) exception, so grant read, write, and execute permissions on the log file and try again.
Also check whether some other process is holding that file open; this is another possible cause of java.io.FileNotFoundException: (Access is denied).
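If it helps, the permission situation can be verified from inside the JVM rather than guessed from the shell: resolve the same path Spring Boot will use and check whether it is writable. A minimal sketch, assuming the catalina.base-relative layout suggested above:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Resolves the directory Spring Boot would write spring.log into and
// reports whether the user the JVM runs as can actually write there.
public class LogDirCheck {
    public static void main(String[] args) {
        // Fall back to "." the same way the ${catalina.base:.} placeholder does.
        String base = System.getProperty("catalina.base", ".");
        Path logDir = Paths.get(base, "logs");
        System.out.println("Log dir:  " + logDir.toAbsolutePath());
        System.out.println("Exists:   " + Files.exists(logDir));
        System.out.println("Writable: " + Files.isWritable(logDir));
    }
}
```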

How to solve exception [Error loading application.groovy due to [java.lang.ArrayIndexOutOfBoundsException]: 0] when deploying war?

How can I solve this exception, which occurs when deploying the WAR in production mode?
Development is in Grails 3.1.8 with Java 8; deployment is on a Linux CentOS 7 server with Tomcat 8.5.9.
Please find the error log from catalina.out below:
18-Mar-2018 20:29:21.953 SEVERE [localhost-startStop-1] org.apache.catalina.core.ContainerBase.addChildInternal ContainerBase.addChild: start:
org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Catalina].StandardHost[localhost].StandardContext[/application]]
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:153)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:725)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:701)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:717)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.grails.core.exceptions.GrailsConfigurationException: Error loading application.groovy due to [java.lang.ArrayIndexOutOfBoundsException]: 0
at org.grails.core.cfg.GroovyConfigPropertySourceLoader.load(GroovyConfigPropertySourceLoader.groovy:72)
at org.grails.core.cfg.GroovyConfigPropertySourceLoader.load(GroovyConfigPropertySourceLoader.groovy:45)
at org.springframework.boot.env.PropertySourcesLoader.load(PropertySourcesLoader.java:127)
at org.springframework.boot.context.config.ConfigFileApplicationListener$Loader.loadIntoGroup(ConfigFileApplicationListener.java:462)
at org.springframework.boot.context.config.ConfigFileApplicationListener$Loader.load(ConfigFileApplicationListener.java:449)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:307)
at grails.boot.GrailsApp.run(GrailsApp.groovy:55)
at org.springframework.boot.context.web.SpringBootServletInitializer.run(SpringBootServletInitializer.java:149)
at org.grails.boot.context.web.GrailsAppServletInitializer.createRootApplicationContext(GrailsAppServletInitializer.groovy:57)
at org.springframework.boot.context.web.SpringBootServletInitializer.onStartup(SpringBootServletInitializer.java:85)
at org.springframework.web.SpringServletContainerInitializer.onStartup(SpringServletContainerInitializer.java:169)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5292)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:147)
... 10 more
Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
at sun.font.CompositeFont.getSlotFont(CompositeFont.java:333)
at sun.font.CompositeGlyphMapper.initMapper(CompositeGlyphMapper.java:81)
at sun.font.CompositeGlyphMapper.<init>(CompositeGlyphMapper.java:62)
at sun.font.CompositeFont.getMapper(CompositeFont.java:391)
at sun.font.CompositeFont.canDisplay(CompositeFont.java:417)
at java.awt.Font.canDisplay(Font.java:1980)
at com.octo.captcha.component.image.fontgenerator.RandomFontGenerator.checkFontCanDisplayCharacters(RandomFontGenerator.java:207)
at com.octo.captcha.component.image.fontgenerator.RandomFontGenerator.cleanFontList(RandomFontGenerator.java:163)
at com.octo.captcha.component.image.fontgenerator.RandomFontGenerator.initializeFonts(RandomFontGenerator.java:79)
at com.octo.captcha.component.image.fontgenerator.RandomFontGenerator.<init>(RandomFontGenerator.java:69)
at org.grails.core.cfg.GroovyConfigPropertySourceLoader.load(GroovyConfigPropertySourceLoader.groovy:61)
... 34 more
This looks like some fonts are missing, or perhaps none are installed at all. Compare the output of the fc-list command between your dev environment and your prod environment; the command may not be present by default (it ships with fontconfig), so you may need to install that too.
This post might help, as we don't know which distro you are running.
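As a quick cross-check from the JVM itself (an ArrayIndexOutOfBoundsException: 0 in CompositeFont.getSlotFont is consistent with the JVM finding no physical fonts), here is a minimal sketch that lists every font family the JVM can see; an empty or near-empty list on the server would confirm the diagnosis:

```java
import java.awt.GraphicsEnvironment;

// Lists every font family visible to the JVM. A near-empty list on the
// server matches the CompositeFont failure in the stack trace above.
public class FontCheck {
    public static void main(String[] args) {
        // Headless mode avoids needing an X display on the server.
        System.setProperty("java.awt.headless", "true");
        String[] families = GraphicsEnvironment
                .getLocalGraphicsEnvironment()
                .getAvailableFontFamilyNames();
        System.out.println(families.length + " font families found:");
        for (String family : families) {
            System.out.println("  " + family);
        }
    }
}
```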

Apache Nifi: PutHiveStreaming is not connecting

I have a simple process flow that follows the example: https://community.hortonworks.com/articles/52856/stream-data-into-hive-like-a-king-using-nifi.html
The flow looks like this: [screenshot of the flow]
The flow files go through the entire process but then fail to write to the Hive DB in the "Stream CSV to Hive" processor.
When I look at the nifi-app.log, I get the following exception:
2018-03-03 00:51:33,942 ERROR [Timer-Driven Process Thread-8] o.a.n.processors.hive.PutHiveStreaming PutHiveStreaming[id=e88d5c4e-0161-1000-1713-79d402d400b2] Error connecting to Hive endpoint: table olympics at thrift://master:9083
2018-03-03 00:51:33,947 ERROR [Timer-Driven Process Thread-8] o.a.n.processors.hive.PutHiveStreaming PutHiveStreaming[id=e88d5c4e-0161-1000-1713-79d402d400b2] Hive Streaming connect/write error, flow file will be penalized and routed to retry. org.apache.nifi.util.hive.HiveWriter$ConnectFailure: Failed connecting to EndPoint {metaStoreUri='thrift://master:9083', database='default', table='olympics', partitionVals=[] }: org.apache.nifi.processors.hive.PutHiveStreaming$ShouldRetryException: Hive Streaming connect/write error, flow file will be penalized and routed to retry. org.apache.nifi.util.hive.HiveWriter$ConnectFailure: Failed connecting to EndPoint {metaStoreUri='thrift://master:9083', database='default', table='olympics', partitionVals=[] }
org.apache.nifi.processors.hive.PutHiveStreaming$ShouldRetryException: Hive Streaming connect/write error, flow file will be penalized and routed to retry. org.apache.nifi.util.hive.HiveWriter$ConnectFailure: Failed connecting to EndPoint {metaStoreUri='thrift://master:9083', database='default', table='olympics', partitionVals=[] }
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$onHiveRecordsError$1(PutHiveStreaming.java:527)
at org.apache.nifi.processor.util.pattern.ExceptionHandler$OnError.lambda$andThen$0(ExceptionHandler.java:54)
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$onHiveRecordError$2(PutHiveStreaming.java:545)
at org.apache.nifi.processor.util.pattern.ExceptionHandler.execute(ExceptionHandler.java:148)
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$onTrigger$12(PutHiveStreaming.java:677)
at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2174)
at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2144)
at org.apache.nifi.processors.hive.PutHiveStreaming.onTrigger(PutHiveStreaming.java:631)
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$onTrigger$4(PutHiveStreaming.java:555)
at org.apache.nifi.processor.util.pattern.PartialFunctions.onTrigger(PartialFunctions.java:114)
at org.apache.nifi.processor.util.pattern.RollbackOnFailure.onTrigger(RollbackOnFailure.java:184)
at org.apache.nifi.processors.hive.PutHiveStreaming.onTrigger(PutHiveStreaming.java:555)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1119)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.nifi.util.hive.HiveWriter$ConnectFailure: Failed connecting to EndPoint {metaStoreUri='thrift://master:9083', database='default', table='olympics', partitionVals=[] }
at org.apache.nifi.util.hive.HiveWriter.<init>(HiveWriter.java:79)
at org.apache.nifi.util.hive.HiveUtils.makeHiveWriter(HiveUtils.java:46)
at org.apache.nifi.processors.hive.PutHiveStreaming.makeHiveWriter(PutHiveStreaming.java:968)
at org.apache.nifi.processors.hive.PutHiveStreaming.getOrCreateWriter(PutHiveStreaming.java:879)
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$null$8(PutHiveStreaming.java:680)
at org.apache.nifi.processor.util.pattern.ExceptionHandler.execute(ExceptionHandler.java:127)
... 19 common frames omitted
Caused by: org.apache.nifi.util.hive.HiveWriter$TxnBatchFailure: Failed acquiring Transaction Batch from EndPoint: {metaStoreUri='thrift://master:9083', database='default', table='olympics', partitionVals=[] }
at org.apache.nifi.util.hive.HiveWriter.nextTxnBatch(HiveWriter.java:264)
at org.apache.nifi.util.hive.HiveWriter.<init>(HiveWriter.java:73)
... 24 common frames omitted
Caused by: org.apache.hive.hcatalog.streaming.TransactionError: Unable to acquire lock on {metaStoreUri='thrift://master:9083', database='default', table='olympics', partitionVals=[] }
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.beginNextTransactionImpl(HiveEndPoint.java:578)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.beginNextTransaction(HiveEndPoint.java:547)
at org.apache.nifi.util.hive.HiveWriter.nextTxnBatch(HiveWriter.java:261)
... 25 common frames omitted
Caused by: org.apache.thrift.transport.TTransportException: null
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429)
at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318)
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_lock(ThriftHiveMetastore.java:3906)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.lock(ThriftHiveMetastore.java:3893)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.lock(HiveMetaStoreClient.java:1863)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:152)
at com.sun.proxy.$Proxy85.lock(Unknown Source)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.beginNextTransactionImpl(HiveEndPoint.java:573)
... 27 common frames omitted
I have the Hive metastore daemon running, the default port 9083 is open, and I can connect to it with telnet. Hive is running on the same machine as NiFi and has an entry in /etc/hosts naming it master.
I'm getting the following exception:
org.apache.thrift.transport.TTransportException: null
which means a null message was passed to the parent Exception class's constructor.
I've also tried using older versions of Apache NiFi and Apache Hive. I just tried NiFi version 1.4 and Hive version 1.2.2, and I'm now getting a different error:
2018-03-13 19:17:20,126 INFO [put-hive-streaming-0] hive.metastore Trying to connect to metastore with URI thrift://master:9083
2018-03-13 19:17:20,220 INFO [put-hive-streaming-0] hive.metastore Connected to metastore.
2018-03-13 19:17:20,524 INFO [Timer-Driven Process Thread-6] hive.metastore Trying to connect to metastore with URI thrift://master:9083
2018-03-13 19:17:20,525 INFO [Timer-Driven Process Thread-6] hive.metastore Connected to metastore.
2018-03-13 19:17:21,204 WARN [put-hive-streaming-0] o.a.h.h.m.RetryingMetaStoreClient MetaStoreClient lost connection. Attempting to reconnect.
org.apache.thrift.TApplicationException: Internal error processing open_txns
at org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:71)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_open_txns(ThriftHiveMetastore.java:3834)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.open_txns(ThriftHiveMetastore.java:3821)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.openTxns(HiveMetaStoreClient.java:1841)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:152)
at com.sun.proxy.$Proxy122.openTxns(Unknown Source)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.openTxnImpl(HiveEndPoint.java:520)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.<init>(HiveEndPoint.java:504)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.<init>(HiveEndPoint.java:461)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.fetchTransactionBatchImpl(HiveEndPoint.java:345)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.fetchTransactionBatch(HiveEndPoint.java:325)
at org.apache.nifi.util.hive.HiveWriter.lambda$nextTxnBatch$2(HiveWriter.java:259)
at org.apache.nifi.util.hive.HiveWriter.lambda$callWithTimeout$4(HiveWriter.java:365)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2018-03-13 19:17:22,205 INFO [put-hive-streaming-0] hive.metastore Trying to connect to metastore with URI thrift://master:9083
2018-03-13 19:17:22,207 INFO [put-hive-streaming-0] hive.metastore Connected to metastore.
2018-03-13 19:17:22,214 ERROR [Timer-Driven Process Thread-6] o.a.n.processors.hive.PutHiveStreaming PutHiveStreaming[id=e88d5c4e-0161-1000-1713-79d402d400b2] Failed to create HiveWriter for endpoint: {metaStoreUri='thrift://master:9083', database='default', table='olympics', partitionVals=[] }: org.apache.nifi.util.hive.HiveWriter$ConnectFailure: Failed connecting to EndPoint {metaStoreUri='thrift://master:9083', database='default', table='olympics', partitionVals=[] }
org.apache.nifi.util.hive.HiveWriter$ConnectFailure: Failed connecting to EndPoint {metaStoreUri='thrift://master:9083', database='default', table='olympics', partitionVals=[] }
at org.apache.nifi.util.hive.HiveWriter.<init>(HiveWriter.java:79)
at org.apache.nifi.util.hive.HiveUtils.makeHiveWriter(HiveUtils.java:46)
at org.apache.nifi.processors.hive.PutHiveStreaming.makeHiveWriter(PutHiveStreaming.java:968)
at org.apache.nifi.processors.hive.PutHiveStreaming.getOrCreateWriter(PutHiveStreaming.java:879)
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$null$8(PutHiveStreaming.java:680)
at org.apache.nifi.processor.util.pattern.ExceptionHandler.execute(ExceptionHandler.java:127)
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$onTrigger$12(PutHiveStreaming.java:677)
at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2174)
at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2144)
at org.apache.nifi.processors.hive.PutHiveStreaming.onTrigger(PutHiveStreaming.java:631)
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$onTrigger$4(PutHiveStreaming.java:555)
at org.apache.nifi.processor.util.pattern.PartialFunctions.onTrigger(PartialFunctions.java:114)
at org.apache.nifi.processor.util.pattern.RollbackOnFailure.onTrigger(RollbackOnFailure.java:184)
at org.apache.nifi.processors.hive.PutHiveStreaming.onTrigger(PutHiveStreaming.java:555)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1119)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.nifi.util.hive.HiveWriter$TxnBatchFailure: Failed acquiring Transaction Batch from EndPoint: {metaStoreUri='thrift://master:9083', database='default', table='olympics', partitionVals=[] }
at org.apache.nifi.util.hive.HiveWriter.nextTxnBatch(HiveWriter.java:264)
at org.apache.nifi.util.hive.HiveWriter.<init>(HiveWriter.java:73)
... 24 common frames omitted
Caused by: org.apache.hive.hcatalog.streaming.TransactionBatchUnAvailable: Unable to acquire transaction batch on end point: {metaStoreUri='thrift://master:9083', database='default', table='olympics', partitionVals=[] }
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.<init>(HiveEndPoint.java:511)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.<init>(HiveEndPoint.java:461)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.fetchTransactionBatchImpl(HiveEndPoint.java:345)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.fetchTransactionBatch(HiveEndPoint.java:325)
at org.apache.nifi.util.hive.HiveWriter.lambda$nextTxnBatch$2(HiveWriter.java:259)
at org.apache.nifi.util.hive.HiveWriter.lambda$callWithTimeout$4(HiveWriter.java:365)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
... 3 common frames omitted
Caused by: org.apache.thrift.TApplicationException: Internal error processing open_txns
at org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:71)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_open_txns(ThriftHiveMetastore.java:3834)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.open_txns(ThriftHiveMetastore.java:3821)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.openTxns(HiveMetaStoreClient.java:1841)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:152)
at com.sun.proxy.$Proxy122.openTxns(Unknown Source)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.openTxnImpl(HiveEndPoint.java:520)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.<init>(HiveEndPoint.java:504)
... 9 common frames omitted
What version of NiFi/HDF are you using, and what kind/version of Hadoop cluster are you connecting to (HDP, e.g.)? It is a known issue that Apache NiFi will not work with HDP Hive starting with HDP 2.5 (or maybe 2.4); you would have to download the NiFi-only distribution of Hortonworks Data Flow (HDF) to make it work.

PHP/Java bridge with Tomcat 7 on Solaris 11.2 SPARC

I have installed PHP 7.0.9, copied the JavaBridge.war file to the tomcat7/webapps folder and the .jar files to the tomcat7/lib folder, and then restarted the Tomcat server. I found this error:
Fatal Error: Failed to start PHP [php-cgi, -v], reason: java.io.IOException: Cannot run program "php-cgi" (in directory "/export/home/tomcat"): error=2, No such file or directory
Could not start FCGI server: java.io.IOException: PHP not found. Please install php-cgi. PHP test command was: [php-cgi, -v]
Timeout waiting for PHP FastCGI daemon
php.java.bridge.http.FCGIConnectException: Could not connect to server
at php.java.bridge.http.SocketChannelFactory.test(SocketChannelFactory.java:58)
at php.java.bridge.http.FCGIConnectionPool.<init>(FCGIConnectionPool.java:175)
at php.java.bridge.http.FCGIConnectionPool.<init>(FCGIConnectionPool.java:189)
at php.java.servlet.ContextLoaderListener.createConnectionPool(ContextLoaderListener.java:541)
at php.java.servlet.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:185)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4939)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5434)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:633)
at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1113)
at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1671)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.io.IOException: java.net.ConnectException: Connection refused
at php.java.bridge.http.FCGIConnectException.<init>(FCGIConnectException.java:37)
... 18 more
Caused by: java.io.IOException: PHP not found. Please install php-cgi. PHP test command was: [php-cgi, -v]
at php.java.bridge.Util$Process.start(Util.java:1145)
at php.java.servlet.fastcgi.FCGIProcess.start(FCGIProcess.java:68)
at php.java.bridge.http.SocketChannelFactory.doBind(SocketChannelFactory.java:125)
at php.java.bridge.http.FCGIConnectionFactory.runFcgi(FCGIConnectionFactory.java:88)
at php.java.bridge.http.FCGIConnectionFactory$1.run(FCGIConnectionFactory.java:109)
I am not able to run my PHP.
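The messages above say the bridge launches the test command [php-cgi, -v] from the directory /export/home/tomcat and gets error=2, No such file or directory, i.e. php-cgi is not on the PATH visible to the Tomcat JVM. A minimal sketch that reproduces the bridge's own test from Java, useful for confirming whether the JVM can find php-cgi at all (the command and directory are taken from the error messages; nothing else is assumed):

```java
import java.io.BufferedReader;
import java.io.File;
import java.io.IOException;
import java.io.InputStreamReader;

// Re-runs the same test command the PHP/Java bridge uses: [php-cgi, -v].
// An IOException with error=2 here means the JVM's PATH contains no php-cgi.
public class PhpCgiCheck {
    public static void main(String[] args) {
        ProcessBuilder pb = new ProcessBuilder("php-cgi", "-v");
        pb.directory(new File("/export/home/tomcat")); // directory from the error message
        pb.redirectErrorStream(true);
        try {
            Process p = pb.start();
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(p.getInputStream()))) {
                String line;
                while ((line = r.readLine()) != null) {
                    System.out.println(line);
                }
            }
        } catch (IOException e) {
            System.out.println("php-cgi not found from this JVM: " + e.getMessage());
        }
    }
}
```

If it fails, install php-cgi or make sure the directory containing your PHP 7.0.9 php-cgi binary is on the PATH of the user that starts Tomcat.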
