I have a problem with HSQLDB V2.3 on Windows. I can't connect with new databases using the HSQLDB Server.
Is there a log or debug option for the server so I can check the properties loaded and file paths, etc?
Is my properties file OK? I wasn't sure how to formulate file paths for windows.
Can I use quotes on file path names?
Is the connection string I'm using for the tmp db correct?
What's the correct syntax for the --props server argument?
--props path
--props path/filename
I have set up two environment variables (to keep it simple). These variables don't have any effect except to save my typing. Initially I was loading the server from the HSQLDB folder directly.
HSQLDB_HOME ... home folder for the current HSQLDB
HSQLDB_DATA ... folder for data repository
I am following the steps from:
Running and Using HSQLDB
Every time I connect via the server it makes a database called "test" instead of letting me connect to either of the two databases specified in server.properties.
%HSQLDB_DATA%/
test.log
test.properties
test.script
test.tmp/ .......... (folder)
test.lck
I made a 'server.properties' file in:
%HSQLDB_HOME%/lib/
where the HSQLDB JAR file is. I want two databases: tmp and dev:
# -- tmp
server.database.0=file:hsqldb/tmp_db/tmp
server.dbname.0=tmp_db
#
####
#
# -- dev
server.database.1=file:r:/.data/hsqldb/dev_db/dev
server.dbname.1=dev_db
I expected the properties file to be enough to set up two databases. When I run the HSQLDB manager I don't get a connection for tmp, say:
"jdbc:hsqldb:hsql://localhost/tmp"
user: SA, password: ""
I get a pop-up error:
database alias does not exist (Manager)
[Thread[HSQLDB Connection #26827674,5,HSQLDB Connections #372f7a8d]]: database alias=tmp does not exist (Server)
I created these two manually using the cmd-line, e.g. named "tmp":
%JAVA_HOME%\bin\java.exe -classpath %HSQLDB_HOME%\lib\hsqldb.jar org.hsqldb.server.Server --database.1 file:r:/.data/hsqldb/tmp_db --dbname.1 tmp_db
And could connect and create tmp:
%HSQLDB_DATA%/tmp_db/
tmp.log
tmp.properties
tmp.script
tmp.tmp/ .......... (folder)
tmp.lck
as described in the documentation. When I start up the HSQLDB Server with the aforesaid server.properties file, or specifying properties explicitly:
%JAVA_HOME%\bin\java.exe -classpath %HSQLDB_HOME%\lib\hsqldb.jar org.hsqldb.server.Server --trace true --props %HSQLDB_DATA%
The server will only let me connect with a database called "test" as described at the beginning of the question.
Because the properties file looks good and the in-process file connection string works,
jdbc:hsqldb:hsqldb/tmp_db/tmp
I am left thinking that the server.properties file is in the wrong place or not loading for some reason. It would be wonderful if there were a way for the server to dump the properties it loaded at start time :-) Thanks in advance for your suggestions ...
I have found the problem. Firstly, thanks to this tutorial:
HSQLDB Installation
After reviewing this I realised my error.
The server.properties file must be in the current folder when the server script runs. I had read that on the Running and Using HSQLDB manual page but misinterpreted its meaning and I put the properties file in my %HSQLDB_HOME%/lib folder. Oops.
When you look at the BAT script, it actually changes the current folder to the %HSQLDB_HOME%/data folder ...
cd ..\data
So the default location for your server.properties file is %HSQLDB_HOME%/data if you want to work with the default runServer.bat script.
For those wanting to separate data from the server software, I made an improvement to the default script using the two environment variables as follows.
HSQLDB_HOME ... home folder for the current HSQLDB
HSQLDB_DATA ... folder for data repository
runServer.bat:
@rem change to the data repository so server.properties is picked up from there
@cd /d %HSQLDB_DATA%
@cd
@echo.
%JAVA_HOME%\bin\java -classpath %HSQLDB_HOME%\lib\hsqldb.jar org.hsqldb.server.Server %1 %2 %3 %4 %5 %6 %7 %8 %9
@echo.
@pause
This now expects my server.properties file in the %HSQLDB_DATA% folder, and that works. Also, since my server is for development/testing, I'm using the --trace true option. Like a lot of these things, now that I get it, it all makes perfect sense. Hopefully my misunderstanding will assist others who haven't found a simple tutorial before resorting to Stack Overflow.
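As a final check, here is a minimal JDBC smoke test in Java (a sketch, assuming the server is running with the properties above and hsqldb.jar is on the classpath; note that with these properties the alias in the URL is the server.dbname value, tmp_db):

import java.sql.Connection;
import java.sql.DriverManager;

public class HsqldbSmokeTest {
    public static void main(String[] args) throws Exception {
        // the alias "tmp_db" comes from server.dbname.0 in server.properties
        try (Connection c = DriverManager.getConnection(
                "jdbc:hsqldb:hsql://localhost/tmp_db", "SA", "")) {
            System.out.println("connected to " + c.getMetaData().getURL());
        }
    }
}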
Related
Getting the below response while running the jmeter-server batch file:
Could not find ApacheJmeter_core.jar ...
What should I do now? I am using Java version 14.
"Could not find ApacheJmeter_core.jar ..." usually means you haven't started the JMeter master.
But the error above is different: in this case the JMeter slave cannot find the RMI keystore file in its bin folder and therefore throws a file-not-found exception for rmi_keystore.jks.
You have to generate rmi_keystore.jks by running create-rmi-keystore.bat on Windows (on Mac, run create-rmi-keystore.sh); once the .jks file is generated, copy it into the bin folder of each JMeter slave.
For more details, see: https://jmeter.apache.org/usermanual/remote-test.html#setup_ssl
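For reference, create-rmi-keystore.bat is essentially a wrapper around the JDK's keytool; a sketch of the equivalent command as given in the JMeter remote-test manual (the alias must be rmi; the password and seven-day validity shown are the documented defaults, adjust as needed):

keytool -genkey -keyalg RSA -alias rmi -keystore rmi_keystore.jks -storepass changeit -validity 7 -keysize 2048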
I installed an HDP 2.5 Hadoop/Spark cluster using Cloudbreak on Azure.
Everything works except the Spark history server. The log says the default URI for the event log, hdfs:///spark-history, is invalid: the hostname is missing.
So I replaced it with a direct reference to the actual location on the Azure blob storage: wasb://<host>:<port>/spark-history. This URI works when used with hdfs dfs -ls, but the Spark history server still won't start. Now it complains about a class not found: Caused by: java.lang.NoClassDefFoundError: com/microsoft/azure/storage/blob/BlobListingDetails.
So it seems it doesn't load some driver during startup. I did find /usr/hdp/current/hadoop-client/lib/azure-storage-2.2.0.jar, which might be it. But I'm not sure how to make the history server load the jar during startup using the Ambari config editor, or whether this is even the right solution to the original problem.
The strangest thing is that Azure HDInsight uses blob storage, and there the Spark history server simply runs using the default hdfs:///spark-history setting.
Any suggestions on how to load the azure-storage driver, or any other approach to this problem?
Thanks
I'll answer my own question. Someone on the Hortonworks community forum had the answer: the Spark assembly jar contains invalid storage classes. Updating the assembly jar solves the issue:
mkdir -p /tmp/jarupdate && cd /tmp/jarupdate
# locate the azure-storage jar shipped with the HDP stack
find /usr/hdp/ -name "azure-storage*.jar"
cp /usr/hdp/2.5.0.1-210/hadoop/lib/azure-storage-2.2.0.jar .
cp /usr/hdp/current/spark-historyserver/lib/spark-assembly-1.6.3.2.5.0.1-210-hadoop2.7.3.2.5.0.1-210.jar .
# unpack the storage classes and graft them into the assembly jar
unzip azure-storage-2.2.0.jar
jar uf spark-assembly-1.6.3.2.5.0.1-210-hadoop2.7.3.2.5.0.1-210.jar com/
# put the updated assembly back and clean up
mv -f spark-assembly-1.6.3.2.5.0.1-210-hadoop2.7.3.2.5.0.1-210.jar /usr/hdp/current/spark-historyserver/lib/spark-assembly-1.6.3.2.5.0.1-210-hadoop2.7.3.2.5.0.1-210.jar
cd .. && rm -rf /tmp/jarupdate
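Before restarting the history server, a quick sanity check (a sketch) confirms the missing class is now inside the assembly:

jar tf /usr/hdp/current/spark-historyserver/lib/spark-assembly-1.6.3.2.5.0.1-210-hadoop2.7.3.2.5.0.1-210.jar | grep BlobListingDetails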
I have created a database with my own program and it appeared as the file mydatabase.mv.db.
But when I tried to access the same database with DbVisualizer, with apparently the same parameters, it created two files, mydatabase.lock.db and celebrity.h2.db, and didn't see the tables created in the program.
What was the incompatibility?
UPDATE
Both setups are as follows:
In H2 version 1.3.x, the database file <databaseName>.h2.db is the default (the "PageStore" storage engine is used).
In H2 version 1.4.x, the database file <databaseName>.mv.db is the default (the "MVStore" storage engine is used). The MVStore is still beta right now (November 2014), but you can disable the MVStore by appending ;mv_store=false to the database URL.
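For example, to force a 1.4.x driver to use the 1.3-style PageStore format (database name chosen for illustration):

jdbc:h2:~/mydatabase;mv_store=false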
The accepted answer is now several years old and since others may be looking for a more "current" solution...
To get it to work, just update the H2 JDBC driver that DbVisualizer uses. Basically, download the "Platform-Independent Zip" from http://www.h2database.com/html/download.html, copy the h2/bin/h2-X.X.X.jar file to ~/.dbvis/jdbc/, and then restart DbVisualizer so it can pick up the updated driver.
Also, make sure you remove .mv.db from the file name when setting the database file name in DbVisualizer.
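For instance (jar version and paths assumed for illustration):

cp h2/bin/h2-1.4.200.jar ~/.dbvis/jdbc/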
For Windows Users:
An excellent way to read a *.mv.db file is to install the H2 database locally and then run that database locally with the java command.
Pointing it at your file will then show the data from your tables, unless errors occur.
You can download the H2 database from:
http://www.h2database.com/html/download-archive.html
Note: choose the H2 database version that supports your file.
You can install the H2 database by running the downloaded .exe file; it is around 7 MB.
Then, in the bin directory of H2, open a command prompt and run java -jar with the jar name; in my case:
java -jar h2-1.4.200.jar
This will open the H2 database console in the browser.
Provide the database details:
Driver Class: org.h2.Driver
JDBC URL: jdbc:h2:~/h2 (your file path)
User Name: (blank by default)
Password: (blank by default)
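For the file from the question, the JDBC URL points at the file path without the .mv.db suffix, e.g. (path assumed for illustration):

jdbc:h2:C:/data/mydatabase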
I'm trying to run Jetty on CentOS and am having problems: I get unexpected results when I set the full path for JETTY_LOGS. The system takes that path and appends it to the path I specified in the JETTY_HOME variable.
JETTY_HOME=/usr/local/jetty/jetty-9.1.4
JETTY_USER=jetty
JETTY_PORT=8085
JETTY_HOST=0.0.0.0
JETTY_LOGS=/usr/local/jetty/jetty-9.1.4/logs
Any ideas as to what I'm doing wrong?
The error I get is:
Starting Jetty: java.io.IOException: Cannot write start.log to directory
/usr/local/jetty/jetty-9.1.4/usr/local/jetty/jetty-9.1.4/logs [directory doesn't exist or is read-only]
java.io.IOException: Cannot write start.log to directory /usr/local/jetty/jetty-9.1.4/usr/local/jetty/jetty-9.1.4/logs [directory doesn't exist or is read-only]
at org.eclipse.jetty.start.StartLog.initLogFile(StartLog.java:127)
at org.eclipse.jetty.start.StartLog.initialize(StartLog.java:113)
at org.eclipse.jetty.start.Main.processCommandLine(Main.java:520)
at org.eclipse.jetty.start.Main.main(Main.java:102)
It seems that the JETTY_LOGS directory is resolved relative to JETTY_HOME.
Could you try to set:
JETTY_LOGS=/logs
or alternatively
JETTY_HOME=/usr/local/jetty/jetty-9.1.4/
JETTY_LOGS=logs
You're not doing anything wrong; this is new behavior in Jetty 9.1.4:
https://bugs.eclipse.org/bugs/show_bug.cgi?id=432192
The workaround is to set the JETTY_LOGS env var relative to your Jetty base dir (like the logs dir that comes in the standard Jetty tarball):
JETTY_LOGS=logs
Which will resolve to /usr/local/jetty/jetty-9.1.4/logs in your case (the jetty base dir defaults to the jetty home dir).
If you want the logs written somewhere outside of your Jetty base dir, the best way is to use the above JETTY_LOGS=logs env setting and just symlink the dir elsewhere; like this, to create and link to the common /var/log/jetty dir:
# mv /usr/local/jetty/jetty-9.1.4/logs /var/log/jetty
# ln -s /var/log/jetty /usr/local/jetty/jetty-9.1.4/logs
Whatever you do, also make sure that the user you run Jetty as has write access to the logs dir; if you use the jetty user in the jetty group, make it the owner of your logs dir:
# chown -R jetty:jetty /var/log/jetty
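Putting the pieces together, the environment from the question would then look like this (a sketch, assuming the variables live where the stock jetty.sh start script picks them up, e.g. /etc/default/jetty):

JETTY_HOME=/usr/local/jetty/jetty-9.1.4
JETTY_USER=jetty
JETTY_PORT=8085
JETTY_HOST=0.0.0.0
JETTY_LOGS=logs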
I'm using a Debian distribution. I wrote the code on Windows, where I get no errors and the database is created. Although I prepared the libraries on Debian, the database is not created, no data is added, and the Java program reports no error.
My database path:
dbPath=/var/lib/neo4j/data/graph.db
I guess the error has to do with the database properties.
I have two different properties directories, so I don't know where to put these settings:
- /etc/neo4j
- /var/lib/neo4j/conf
You should have the /etc/neo4j/neo4j-server.properties file, which typically begins like:
################################################################
# Neo4j configuration
#
################################################################
#***************************************************************
# Server configuration
#***************************************************************
# location of the database directory
org.neo4j.server.database.location=data/graph.db
...
...
where the database path is relative.
If you want an absolute path, you should have this line instead:
org.neo4j.server.database.location=/var/lib/neo4j/data/graph.db
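Since the question's program creates the database from Java, the same absolute path should be used there as well. A sketch, assuming the Neo4j 2.x embedded API (note that the server and an embedded program cannot open the same store at the same time, as the store directory is locked by a single process):

import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;

public class CreateDb {
    public static void main(String[] args) {
        // point the embedded database at the same store the server uses
        GraphDatabaseService db = new GraphDatabaseFactory()
                .newEmbeddedDatabase("/var/lib/neo4j/data/graph.db");
        // ... create nodes and relationships here ...
        db.shutdown();
    }
}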