I know this is probably a dumb question, but I found no results on SO and no relevant results on Google (I searched for 'what is jartmp').
I found many jartmp files in my folder and don't know why they exist:
-rw-rw-r-- 1 0 Jun 11 14:28 jartmp1089103248955132063.tmp
-rw-rw-r-- 1 54935 Jun 6 03:21 jartmp1258300977464933918.tmp
-rw-rw-r-- 1 118685 Jun 26 22:47 jartmp1388010323455694859.tmp
-rw-rw-r-- 1 15643 May 29 16:45 jartmp1819063406633422416.tmp
-rw-rw-r-- 1 0 Jun 11 16:03 jartmp2142600141373219701.tmp
-rw-rw-r-- 1 197964 Jun 6 03:19 jartmp3480763606864988668.tmp
-rw-rw-r-- 1 126386 Jun 26 22:47 jartmp3533722093029133854.tmp
-rw-rw-r-- 1 7830 Jun 6 03:19 jartmp3713382469327367468.tmp
-rw-rw-r-- 1 55872 Jun 21 15:14 jartmp3950308579438275722.tmp
-rw-rw-r-- 1 39716 Jun 11 16:03 jartmp4311759817318348544.tmp
-rw-rw-r-- 1 0 Jun 11 14:31 jartmp499113526131437419.tmp
I tried head *.tmp to see the contents of the .tmp files, but they all appear to be binary. Why are these files generated, and is it safe to delete them?
My Java version is 1.6.0_24, for reference:
java version "1.6.0_24"
Java(TM) SE Runtime Environment (build 1.6.0_24-b07)
Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02, mixed mode)
I can't imagine a circumstance where a file with that name would contain valuable data.
But if you are really worried, try using the file command to identify the files' types based on their contents.
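For example, the following should tell you whether they are (partial) JAR/ZIP archives:
$ file jartmp*.tmp
A zero-byte file will be reported as empty; a partially written JAR typically shows up as Zip archive data.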
How are they generated?
Well, the most likely culprit would be some command that creates or unpacks JAR files. Try correlating the files' timestamps with what you were doing at the time; that should help you narrow it down a bit.
This can be the result of a jar -uvf operation that tries to update a JAR with a file that is not present at the given path. This is just one cause I have noticed: the shell/cmd output shows ': no such file or directory' and jar ends up leaving behind a tmp (binary) file, sometimes of considerable size.
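A rough way to reproduce this (a sketch; the file names are made up):
$ jar cf demo.jar Existing.class      # create a JAR from an existing file
$ jar uvf demo.jar no-such-file.txt   # fails with 'no such file or directory'
$ ls jartmp*.tmp                      # a jartmp working file may be left behind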
I started running into this issue with the Java 8 jar.exe when supplying files to update in an archive where one or more of the files didn't exist. For example,
%ProgramFiles%\Java\jdk1.8.0_121\bin\jar.exe uvf0 ..\Desktop.jar Desktop\images\*.gif Desktop\reports\*.rep
where no file that matches the pattern Desktop\reports\*.rep exists, even though gif files in the other path do. In this case, the "jartmp" file remains and the jar file intended to be updated is left untouched. This is a change from Java 7, where jar.exe would happily replace the files it found and ignore the others.
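On the Unix side, if you just want to clean them up, something like this lists and then removes jartmp files older than a week (a sketch; review the matches before deleting):
$ find . -maxdepth 1 -name 'jartmp*.tmp' -mtime +7 -print
$ find . -maxdepth 1 -name 'jartmp*.tmp' -mtime +7 -delete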
Has anyone successfully got the HTTP/2 connector working in Tomcat 9 on AIX (e.g. powerpc-ibm-aix7.2.5.0)?
I followed the instructions here to build the tcnative module (using tomcat-native-1.2.24-src that comes with Tomcat 9.0.37, APR 1.5.2, OpenSSL 1.0.2, IBM Java 1.8.0_261) i.e.
$ ./configure --with-apr=/opt/freeware/bin/apr-1-config --with-java-home=/app/java8_64/ --with-ssl=yes --prefix=/app/tomcat
followed by
make && make install
This creates the expected entries in /app/tomcat/lib, i.e.
-rw-r--r-- 1 usrxxx grpxxxx 3440287 Mar 03 16:47 libtcnative-1.a
-rwxr-xr-x 1 usrxxx grpxxxx 1057 Mar 03 16:47 libtcnative-1.la
lrwxrwxrwx 1 usrxxx grpxxxx 23 Mar 03 16:47 libtcnative-1.so -> libtcnative-1.so.0.2.24
lrwxrwxrwx 1 usrxxx grpxxxx 23 Mar 03 16:47 libtcnative-1.so.0 -> libtcnative-1.so.0.2.24
-rwxr-xr-x 1 usrxxx grpxxxx 1372146 Mar 03 16:47 libtcnative-1.so.0.2.24
but when Tomcat starts I get
04-Mar-2021 15:30:00.752 WARNING [main] org.apache.catalina.core.AprLifecycleListener.init The Apache Tomcat Native library failed to load. The error reported was [tcnative-1 (Not found in java.library.path)]
java.lang.UnsatisfiedLinkError: tcnative-1 (Not found in java.library.path)
at java.lang.ClassLoader.loadLibraryWithPath(ClassLoader.java:1462)
at java.lang.ClassLoader.loadLibraryWithClassLoader(ClassLoader.java:1414)
at java.lang.System.loadLibrary(System.java:584)
at org.apache.tomcat.jni.Library.<init>(Library.java:69)
at org.apache.tomcat.jni.Library.initialize(Library.java:206)
at org.apache.catalina.core.AprLifecycleListener.init(AprLifecycleListener.java:198)
...
04-Mar-2021 15:30:01.096 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.library.path=/app/java8_64/jre/lib/ppc64/compressedrefs:/app/java8_64/jre/lib/ppc64:/app/java8_64/jre/lib/ppc64/j9vm:/app/java8_64/jre/lib/ppc64:/app/java8_64/jre/../lib/ppc64:/app/java8_64/jre/lib/icc:/opt/freeware/lib:/opt/freeware/lib64:/usr/lib:/usr/lib64:/app/tomcat/lib:/usr/lib64:/usr/lib
...
04-Mar-2021 15:30:02.233 SEVERE [main] org.apache.catalina.util.LifecycleBase.handleSubClassException Failed to initialize component [Connector[org.apache.coyote.http11.Http11AprProtocol-8443]] org.apache.catalina.LifecycleException: The configured protocol [org.apache.coyote.http11.Http11AprProtocol] requires the APR/native library which is not available
at org.apache.catalina.connector.Connector.initInternal(Connector.java:1024)
at org.apache.catalina.util.LifecycleBase.init(LifecycleBase.java:136)
at org.apache.catalina.core.StandardService.initInternal(StandardService.java:533)
at org.apache.catalina.util.LifecycleBase.init(LifecycleBase.java:136)
at org.apache.catalina.core.StandardServer.initInternal(StandardServer.java:1057)
Edit - based on suggestions from Piotr and Lorinczy:
Tried adding tcnative-1.so as a symlink - same error
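Roughly like this, using the lib directory from above:
$ cd /app/tomcat/lib
$ ln -s libtcnative-1.so.0.2.24 tcnative-1.so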
Copied libtcnative-1.* to the bin folder. Still failing, but with a new error (progress!?). Possibly a 32-bit vs 64-bit issue?
09-Mar-2021 10:10:07.116 WARNING [main] org.apache.catalina.core.AprLifecycleListener.init The Apache Tomcat Native library failed to load. The error reported was [/app/apache-tomcat-9.0.37/bin/libtcnative-1.a ( 0509-022 Cannot load module /app/apache-tomcat-9.0.37/bin/libtcnative-1.a.
0509-026 System error: Cannot run a file that does not have a valid format.)]
java.lang.UnsatisfiedLinkError: /app/apache-tomcat-9.0.37/bin/libtcnative-1.a ( 0509-022 Cannot load module /app/apache-tomcat-9.0.37/bin/libtcnative-1.a.
0509-026 System error: Cannot run a file that does not have a valid format.)
at java.lang.ClassLoader.loadLibraryWithPath(ClassLoader.java:1462)
...
All the dependencies seem to be available:
$ ldd libtcnative-1.so.0.2.24
libtcnative-1.so.0.2.24 needs:
/usr/lib/libssl.so
/usr/lib/libcrypto.so
/opt/freeware/lib/libapr-1.so
/usr/lib/libpthread.a(shr_xpg5.o)
/usr/lib/libc.a(shr.o)
/opt/freeware/lib/libgcc_s.a(shr.o)
/usr/lib/libcrypto.a(libcrypto.so.1.0.2)
/unix
/usr/lib/libpthreads.a(shr_comm.o)
/usr/lib/libcrypt.a(shr.o)
but the dump command seems to support the 32-bit vs 64-bit theory:
$ dump -H -X64 libtcnative-1.so.0.2.24
libtcnative-1.so.0.2.24:
dump: libtcnative-1.so.0.2.24: 0654-108 file is not valid in the current object file mode.
Use the -X option to specify the desired object mode.
$ dump -H -X32 libtcnative-1.so.0.2.24
libtcnative-1.so.0.2.24:
***Loader Section***
Loader Header Information
VERSION# #SYMtableENT #RELOCent LENidSTR
0x00000001 0x00000364 0x00000771 0x00000084
#IMPfilID OFFidSTR LENstrTBL OFFstrTBL
0x00000007 0x0000aacc 0x00006406 0x0000ab50
***Import File Strings***
INDEX PATH BASE MEMBER
0 /opt/freeware/lib:/usr/lib:/lib
1 libssl.so
2 libcrypto.so
3 libapr-1.so
4 libpthread.a shr_xpg5.o
5 libc.a shr.o
6 libgcc_s.a shr.o
It also looks like only the 32-bit version of APR is currently available on the server. I will update once I can get the 64-bit version installed.
Further Updates
The commands I am trying now are:
$ export CFLAGS=-maix64
$ export OBJECT_MODE=64
$ ./configure --with-apr=/opt/freeware/bin/apr-1-config_64 --with-java-home=/app/java8_64/ --with-ssl=/usr/include/openssl --prefix=/app/tomcat
$ make && make install
No errors, but the same outcome. This doesn't seem to build a 64-bit version of the Tomcat Native module (if that is the issue).
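To check whether the rebuilt module actually came out 64-bit, the dump test from above should now pass in -X64 mode, e.g.:
$ dump -H -X64 /app/tomcat/lib/libtcnative-1.so.0.2.24
If it still reports the 0654-108 error, the build is still producing a 32-bit object.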
I have been using STS for a long time, and the only feature that annoys me is that it creates a new Pivotal tc Server instance in every workspace I create, and I never use Pivotal tc Server.
Can anyone tell me how to completely remove Pivotal tc Server from the STS installation?
I have tried updating the artifacts.xml file and bundles.info (Equinox), and even bluntly deleting the Pivotal tc features, but I never got a clean result.
Once you unzip the distribution, it should contain a folder with tc-server in its name. Simply delete that entire folder and STS will no longer try to create a tc-server install in your workspace.
For example in my installation it looks like this:
$ ls -la sts-bundle/
total 20
drwxrwxr-x 5 kdvolder kdvolder 4096 Oct 12 05:47 .
drwxr-xr-x 23 kdvolder kdvolder 4096 Nov 6 14:43 ..
drwxr-xr-x 2 kdvolder kdvolder 4096 Oct 11 04:03 legal
drwxr-xr-x 11 kdvolder kdvolder 4096 Oct 13 07:56 pivotal-tc-server-developer-3.2.8.RELEASE
drwxr-xr-x 9 kdvolder kdvolder 4096 Nov 3 15:41 sts-3.9.1.RELEASE
So simply delete the folder pivotal-tc-server-developer-3.2.8.RELEASE.
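For example:
$ rm -rf sts-bundle/pivotal-tc-server-developer-3.2.8.RELEASE
STS itself lives in the sibling sts-3.9.1.RELEASE folder and is unaffected.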
I am currently running a Java Spark application in Tomcat and receiving the following exception:
Caused by: java.io.IOException: Mkdirs failed to create file:/opt/folder/tmp/file.json/_temporary/0/_temporary/attempt_201603031703_0001_m_000000_5
on the line
text.saveAsTextFile("/opt/folder/tmp/file.json") //where text is a JavaRDD<String>
The issue is that /opt/folder/tmp/ already exists, and the job successfully creates everything up to /opt/folder/tmp/file.json/_temporary/0/; it then runs into what looks like a permission issue with the remaining part of the path, _temporary/attempt_201603031703_0001_m_000000_5. But I gave the tomcat user permissions on the tmp/ directory (chown -R tomcat:tomcat tmp/ and chmod -R 755 tmp/). Does anyone know what could be happening?
Thanks
Edit for @javadba:
[root@ip tmp]# ls -lrta
total 12
drwxr-xr-x 4 tomcat tomcat 4096 Mar 3 16:44 ..
drwxr-xr-x 3 tomcat tomcat 4096 Mar 7 20:01 file.json
drwxrwxrwx 3 tomcat tomcat 4096 Mar 7 20:01 .
[root@ip tmp]# cd file.json/
[root@ip file.json]# ls -lrta
total 12
drwxr-xr-x 3 tomcat tomcat 4096 Mar 7 20:01 _temporary
drwxrwxrwx 3 tomcat tomcat 4096 Mar 7 20:01 ..
drwxr-xr-x 3 tomcat tomcat 4096 Mar 7 20:01 .
[root@ip file.json]# cd _temporary/
[root@ip _temporary]# ls -lrta
total 12
drwxr-xr-x 2 tomcat tomcat 4096 Mar 7 20:01 0
drwxr-xr-x 3 tomcat tomcat 4096 Mar 7 20:01 ..
drwxr-xr-x 3 tomcat tomcat 4096 Mar 7 20:01 .
[root@ip _temporary]# cd 0/
[root@ip 0]# ls -lrta
total 8
drwxr-xr-x 3 tomcat tomcat 4096 Mar 7 20:01 ..
drwxr-xr-x 2 tomcat tomcat 4096 Mar 7 20:01 .
The exception in catalina.out
Caused by: java.io.IOException: Mkdirs failed to create file:/opt/folder/tmp/file.json/_temporary/0/_temporary/attempt_201603072001_0001_m_000000_5
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:438)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:424)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:906)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:799)
at org.apache.hadoop.mapred.TextOutputFormat.getRecordWriter(TextOutputFormat.java:123)
at org.apache.spark.SparkHadoopWriter.open(SparkHadoopWriter.scala:91)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1193)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1185)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
... 1 more
saveAsTextFile is actually processed by the Spark executors. Depending on your Spark setup, the executors may run as a different user than your Spark application driver. I guess the driver prepares the directory for the job fine, but then the executors, running as a different user, have no rights to write in that directory.
Changing to 777 won't help, because permissions are not inherited by child dirs, so you'd get 755 anyway.
Try running your Spark application as the same user that runs your Spark executors.
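A quick way to check which user the executors actually run as (a sketch; the grep pattern may need adjusting to your deployment):
$ ps -eo user,args | grep -i [c]oarsegrainedexecutor
Note that in local mode there are no separate executor processes; everything runs inside the driver JVM, so in this case under Tomcat's user.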
I suggest trying 777 temporarily, to see if it works at that point. There have been bugs/issues with permissions on the local file system. If that still does not work, let us know whether anything changed or you got precisely the same result.
I also had the same problem, and my issue was resolved by using the full HDFS path:
Error
Caused by: java.io.IOException: Mkdirs failed to create file:/QA/Gajendra/SparkAutomation/Source/_temporary/0/_temporary/attempt_20180616221100_0002_m_000000_0 (exists=false, cwd=file:/home/gajendra/LiClipse Workspace/SpakAggAutomation)
Solution
Use the full HDFS path, in the form hdfs://localhost:54310/<filePath>:
hdfs://localhost:54310/QA/Gajendra/SparkAutomation
Could it be SELinux/AppArmor playing a trick on you? Check with ls -Z and the system logs.
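For example, assuming a typical SELinux setup:
$ getenforce                             # Enforcing / Permissive / Disabled
$ ls -Z /opt/folder/tmp                  # security context on the directory
$ grep denied /var/log/audit/audit.log   # recent AVC denials, if any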
I've been experiencing the same issue. In my setup there is no HDFS and Spark is running in standalone mode. I haven't been able to save Spark dataframes to an NFS share using the native Spark methods. The process runs as a local user, and I try to write to the user's home folder. Even when creating a subfolder with 777 permissions, I cannot write to it.
The workaround is to convert the dataframe with toPandas() and then call to_csv(). This magically works.
I have the same issue. I also did not want to write to HDFS but to a local shared directory.
After some research, I found that in my case the reason was that there are several nodes executing the job, but some of them have no access to the directory where the data is to be written.
The solution is to make the directory available to all nodes, and then it works (see the sketch below).
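For example, if the worker nodes are reachable over ssh, a sketch like this creates the directory on every node (the host names are made up):
$ for host in worker1 worker2 worker3; do ssh "$host" 'mkdir -p /opt/folder/tmp && chmod 777 /opt/folder/tmp'; done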
We need to run the application in local mode:
val spark = SparkSession
  .builder()
  .config("spark.master", "local")
  .appName("applicationName")
  .getOrCreate()
Giving the full path works for me.
Example:
file:/Users/yourname/Documents/electric-chargepoint-2017-data
This is a tricky one, but simple to solve: one must configure the job.local.dir variable to point to the working directory. The following code works fine for writing a CSV file:
import time

from pyspark.sql import SparkSession


def xmlConvert(spark):
    etl_time = time.time()
    # Read the XML input, then pivot tag values into one column per tag name
    df = spark.read.format('com.databricks.spark.xml').options(rowTag='HistoricalTextData').load(
        '/home/zangetsu/proj/prometheus-core/demo/demo-1-iot-predictive-maintainance/dataset/train/')
    df = df.withColumn("TimeStamp", df["TimeStamp"].cast("timestamp")).groupBy("TimeStamp").pivot("TagName").sum(
        "TagValue").na.fill(0)
    df.repartition(1).write.csv(
        path="/home/zangetsu/proj/prometheus-core/demo/demo-1-iot-predictive-maintainance/result/",
        mode="overwrite",
        header=True,
        sep=",")
    print("Time taken to do xml transformation: --- %s seconds ---" % (time.time() - etl_time))


if __name__ == '__main__':
    spark = SparkSession \
        .builder \
        .appName('XML ETL') \
        .master("local[*]") \
        .config('job.local.dir', '/home/zangetsu/proj/prometheus-core/demo/demo-1-iot-predictive-maintainance') \
        .config('spark.driver.memory', '64g') \
        .config('spark.debug.maxToStringFields', '200') \
        .config('spark.jars.packages', 'com.databricks:spark-xml_2.11:0.5.0') \
        .getOrCreate()
    print('Session created')
    try:
        xmlConvert(spark)
    finally:
        spark.stop()
When stopping/starting a deployment via the WebLogic Admin console, sometimes we get the following error:
Unable to access application source information in '/opt/product/oracle/local/managedservers/mydomain/servers/serverA/stage/apputil/apputil.war'
for application 'apputil'.
The specific error is: [Deployer:149158] No application files
exist at '/opt/product/oracle/local/managedservers/mydomain/servers/serverA/stage/apputil/apputil.war'
Yet, if I stop and start the managed server, the deployment appears to come back.
My question is, WHY do these war files disappear from the managed server seemingly randomly, while the server is running? This typically happens after we haven't touched a deployment for some time (6 months). Other war files for other deployments are there. It does not affect the running of the app, until we try to stop/start it.
This is what the filesystem looks like before and after.
[oracle@serverA stage]$ pwd;ls -alstr
/opt/product/oracle/local/managedservers/mydomain/servers/serverA/stage
total 20
4 drwxr-x--- 8 oracle dba 4096 Mar 19 2014 ..
4 drwxr----- 3 oracle dba 4096 Mar 19 2014 app-crypto-util
4 drwxr----- 2 oracle dba 4096 Mar 19 2014 appadmin
4 drwxr----- 2 oracle dba 4096 Mar 19 2014 appsm
4 drwxr----- 5 oracle dba 4096 May 1 15:29 .
[oracle@serverA stage]$ ls
appadmin app-crypto-util appsm
Restart managed server here...
[oracle@serverA stage]$ ls
appadmin app-crypto-util appsm apputil
[oracle@serverA stage]$ ls -alstr
total 24
4 drwxr-x--- 8 oracle dba 4096 Mar 19 2014 ..
4 drwxr----- 3 oracle dba 4096 Mar 19 2014 app-crypto-util
4 drwxr----- 2 oracle dba 4096 Mar 19 2014 appadmin
4 drwxr----- 2 oracle dba 4096 Mar 19 2014 appsm
4 drwxr----- 2 oracle dba 4096 Jun 25 14:35 apputil
4 drwxr----- 6 oracle dba 4096 Jun 25 14:35 .
[oracle@serverA stage]$ ls -alstr apputil/apputil.war
28660 -rw-r----- 1 oracle dba 29347298 Jun 25 14:35 apputil/apputil.war
This may happen when the AdminServer and Managed Server are on different machines, or when the war is being sent to the AdminServer from a different machine.
Use the arguments -remote -upload, i.e.:
java weblogic.Deployer -adminurl t3://200.10.10.125:7001 -verbose -username weblogic -password welcome1 -deploy -targets WLCluster -name sample -remote -source sample.war -upload
Judging by the stage directory's modification date, it appears this directory is created/edited when some event occurs, maybe by a script; if that is the case, the problem probably comes from that script when it copies the application wars.
So, in my view, when you stop the deployment, WebLogic may stop the apputil managed server, which deletes the war from the stage directory; and when you then start the deployment again, it does not start the specified managed server first and tries to redeploy all the apps, which causes the exception.
After setting our domain users to support AES encryption for Kerberos tokens (Windows Server 2008R2), on a web-application server side we get the following exception:
GSSException: Failure unspecified at GSS-API level (Mechanism level:
Encryption type AES256CTS mode with HMAC SHA1-96 is not
supported/enabled)
Strangely, we have Java 6 (1.6.0_27), which means that AES should be supported, according to this document: http://docs.oracle.com/javase/6/docs/technotes/guides/security/jgss/jgss-features.html
Any ideas what's missing in our web application, Java, or third parties? We are using the Spring Security Kerberos extension (with minimal code modifications to fit our current Spring 2.x version and additional authentication requirements).
EDIT (2017-05-06): upcoming JDK versions will have this included. Only a config parameter needs to be set, see JDK-8157561.
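On those releases, my understanding of JDK-8157561 is that it boils down to the crypto.policy security property; e.g. in $JAVA_HOME/jre/lib/security/java.security:
crypto.policy=unlimited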
Follow this link - Java SE Downloads - scroll down and download the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files for your specific JDK version, then follow the process in the tutorial section titled 5.4.2. Kerberos and Unlimited Strength Policy.
The basic steps are as follows:
locate your JDK's security directory (showing Unix below):
$ locate 'jre/lib/security' | grep 'lib/security$'
/usr/java/jdk1.7.0_17/jre/lib/security
/usr/lib/jvm/java-1.5.0-gcj-1.5.0.0/jre/lib/security
/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre/lib/security
/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.9.x86_64/jre/lib/security
Noting the above, we need to add the downloaded JCE .jar files to /usr/java/jdk1.7.0_17/jre/lib/security.
The JCE .zip file includes the following (showing JDK 1.7's JCE):
$ ls -l UnlimitedJCEPolicy
total 16
-rw-rw-r-- 1 root root 2500 May 31 2011 local_policy.jar
-rw-r--r-- 1 root root 7289 May 31 2011 README.txt
-rw-rw-r-- 1 root root 2487 May 31 2011 US_export_policy.jar
These are the bundled versions with the JDK (again 1.7):
$ ls -l /usr/java/jdk1.7.0_17/jre/lib/security/*.jar
-rw-r--r--. 1 root root 2865 Mar 1 2013 /usr/java/jdk1.7.0_17/jre/lib/security/local_policy.jar
-rw-r--r--. 1 root root 2397 Mar 1 2013 /usr/java/jdk1.7.0_17/jre/lib/security/US_export_policy.jar
We need to move these out of the way and replace them with the versions included in the JCE .zip file. I typically do the following:
$ pushd /usr/java/jdk1.7.0_17/jre/lib/security/
/usr/java/jdk1.7.0_17/jre/lib/security ~
$ mkdir limited
$ mv *.jar limited/
$ cp ~/UnlimitedJCEPolicy/*.jar .
$ ls -l *.jar
-rw-r--r-- 1 root root 2500 Jun 25 12:50 local_policy.jar
-rw-r--r-- 1 root root 2487 Jun 25 12:50 US_export_policy.jar
Restart anything that's making use of the JDK (Tomcat, etc.).
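To verify that the unlimited policy is active, one quick check is the jrunscript tool that ships with the JDK (a sketch):
$ jrunscript -e 'print(javax.crypto.Cipher.getMaxAllowedKeyLength("AES"))'
With the unlimited policy files in place this should print 2147483647; with the limited policy it prints 128.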