When I run the command to start Elasticsearch:
/elasticsearch -f
it gives a bunch of errors like:
ElasticSearchIllegalStateException[Failed to obtain node lock, is the following location writable?: [/home/anish/elasticsearch/data/elasticsearch]]
IOException[failed to obtain lock on /home/anish/elasticsearch/data/elasticsearch/nodes/49]
IOException[Cannot create directory: /home/anish/elasticsearch/data/elasticsearch/nodes/49]
I don't know how to get rid of it. Please help.
Look in /etc/sysconfig/elasticsearch for the values of ES_USER and ES_GROUP -- the default setting for both is elasticsearch. If that user cannot write to your data directory, then you will see this error. Since the data directory appears to be under your home directory, this could be the issue. You can either modify the settings for ES_USER and ES_GROUP or change the permissions for the data directory.
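For example, assuming the data path from the error message and the default elasticsearch service user (adjust both to your setup), something along these lines will show who owns the directory, or hand it over to the service user:
# see who currently owns the data directory from the error message
ls -ld /home/anish/elasticsearch/data/elasticsearch
# give ownership to the user Elasticsearch runs as (default: elasticsearch)
sudo chown -R elasticsearch:elasticsearch /home/anish/elasticsearch/data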
There can be multiple causes of this error. Elasticsearch stores its data on the file system, and that location is specified in elasticsearch.yml. While starting, Elasticsearch checks whether it can write data to that location; if it can, it acquires a lock on the location so that other processes can't write to it, which prevents data loss and corruption.
This also depends on the version of Elasticsearch: in the current major version, 7.x, Elasticsearch changed the way it writes to a shared location when there are multiple installations on the same machine, whereas before that it used node.max_local_storage_nodes to allow multiple installations on the same machine.
Tips to fix the issue
Check whether multiple Elasticsearch processes are running and whether you actually intend that (see the sketch after this list).
Check that Elasticsearch has write access to the data location, as mentioned in the previous answer.
Check your Elasticsearch version and make sure you are using the correct configuration for it.
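A couple of quick checks along those lines (the config and data paths below are just examples; use wherever your elasticsearch.yml and data directory actually live):
# is more than one Elasticsearch process running?
ps aux | grep -i '[e]lasticsearch'
# where is the data path configured, and is node.max_local_storage_nodes set?
grep -E 'path\.data|node\.max_local_storage_nodes' /etc/elasticsearch/elasticsearch.yml
# can the Elasticsearch user write there?
ls -ld /home/anish/elasticsearch/data/elasticsearch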
I have created a Singularity container whose purpose is to run a Java program. Everything seems to work, except that I get the following warning:
(java:54036): dconf-CRITICAL **: 23:37:10.142: unable to create directory '/run/user/175387/dconf': Read-only file system. dconf will not work properly.
From my search, I've learned that dconf is simply a system that stores configuration settings in some binary file. The singularity container is a read-only filesystem, so it is not surprising to me that issues such as this would come up.
When I invoke the singularity container, I can bind directories from the host OS. These bound directories may be writable. Therefore, my best guess is that all I need to do is change the dconf file to be located inside one of these bound, writable directories. So, my question is simply how this can be done.
The top-level dconf help output is:
Commands:
help Show this information
read Read the value of a key
list List the contents of a dir
write Change the value of a key
reset Reset the value of a key or dir
compile Compile a binary database from keyfiles
update Update the system databases
watch Watch a path for changes
dump Dump an entire subpath to stdout
load Populate a subpath from stdin
None of these seem to relate to changing the location of the dconf file.
One workaround might be to build a writable singularity container. But I'd prefer a different solution to keep the singularity container read-only.
The simplest solution, and the one I use for GUI programs run from Singularity containers, is to bind the user's run directory into the container with -B /run/user/$UID.
If you are concerned about undesired dconf settings persisting for the user, I would suggest the --writable-tmpfs flag which creates a tmpfs overlay that can be used to modify the container image until the process completes.
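For example, assuming an image called my-app.sif that runs your Java program (the image and jar names here are just placeholders), the two options look like this:
# bind the user's run directory from the host into the container
singularity exec -B /run/user/$UID my-app.sif java -jar /opt/app/app.jar
# or run with a throwaway tmpfs overlay so nothing persists after the run
singularity exec --writable-tmpfs my-app.sif java -jar /opt/app/app.jar
The first keeps dconf's writes on the host under your normal run directory; the second discards them when the process exits.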
My question:
How to store the current version of my software in a dump file generated by PostgreSQL?
The reason for my question:
I've developed a Java application that uses a PostgreSQL database. The software is installed locally on each user's computer, and the database is also local and individual for each user.
I've created a feature so that users can back up their databases and restore them. For this, my Java code runs pg_dump to generate the backup file and pg_restore to restore it. That is, the backup is nothing more than a dump of the database generated by the command below:
pg_dump.exe -U myuser -h localhost -p 5432 -Fc -f bkpname.bkp mydb
The problem is that I regularly release software updates. New versions of the software are always compatible with dumps from previous versions; however, older versions of the software are not compatible with dumps generated by a newer version.
Sometimes a user attempts to restore a dump generated by a recent version into an old version of the software, which is not compatible.
I would like the dump file to carry the information of which version of the software generated it. That way, I could simply display a message informing the user that they need to download the most current version of the software in order to restore the backup.
I thought of the two approaches below, but I think they are not appropriate:
Save the software version in the dump file name. That would be bad because the user could rename the file.
Concatenate the version inside the dump file content. I'm afraid the dump file might somehow get corrupted in the process of inserting text into it or removing text from it (before restoring the dump).
Is there a better way to add this information to the dump file?
One idea would be to store the information in a special table inside the database.
The table is not used normally, and you write the correct version into it right before you perform a dump.
Before you restore the whole dump, you first restore only that table:
pg_restore --table dump_version -d mydatabase dumpfile.dmp
Then you check what is in the table and proceed accordingly.
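A rough sketch of that flow, with illustrative names (the dump_version table, the version string and the file names are assumptions; adapt them to your schema and to the pg_dump command you already use):
# right before taking the backup: record the software version in the helper table
psql -U myuser -h localhost -p 5432 -d mydb -c "CREATE TABLE IF NOT EXISTS dump_version (version text); TRUNCATE dump_version; INSERT INTO dump_version VALUES ('2.5.0');"
pg_dump -U myuser -h localhost -p 5432 -Fc -f bkpname.bkp mydb
# before a full restore: pull out only that table and inspect it
pg_restore --table dump_version -d mydatabase bkpname.bkp
psql -U myuser -d mydatabase -t -c "SELECT version FROM dump_version;"
The first two commands run right before the backup; the last two run before a full restore, so your Java code can compare the stored version with the installed software version and refuse the restore if the dump is too new.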
Does anyone know a good way to load a set of files locally into the Java dev_appserver's emulated Cloud Storage?
This didn't work:
$ gsutil rsync gs://mybucket http://localhost:8888/mybucket
InvalidUrlError: Unrecognized scheme "http".
I'm open to suggestions on either:
How to load a bunch of files locally (preferably through gsutil)
How to point my local dev_appserver to a non-emulated bucket at Google
It is painful to test things out locally without proper data. I'm trying to write some transformations to load data into BigQuery (from Datastore backups), and it won't be possible without some real data.
"How to point my local dev_appserver to a non-emulated bucket at Google": it's not documented all that clearly, but it is implemented in the dev_appserver and cloudstorage.
To verify what I'm saying, first svn checkout http://appengine-gcs-client.googlecode.com/svn/trunk/python gcs-client to get cloudstorage's source code onto your machine (you'll need to install subversion if you don't have it already, but, that's free too:-).
Then, cd gcs-client/src/cloudstorage/ and look at storage_api.py. In the very first function _get_storage_api, the docstring says:
On dev appserver, this instance by default will talk to a local stub
unless common.ACCESS_TOKEN is set. That token will be used to talk
to the real GCS.
So, look at common.py, and again in the first function, set_access_token, you'll see:
Args:
access_token: you can get one by run 'gsutil -d ls' and copy the
str after 'Bearer'.
So there you are -- in every entry to your app (best in appengine_config.py in your root directory), import cloudstorage's common module, then, if and only if you're on dev_appserver[*], call
common.set_access_token('whatever_the_token')
using as the argument string the one you get by running 'gsutil -d ls', right after Bearer, i.e. among much else you'll spot something like (faking and way shortening the actual value...:-):
Bearer xy15.WKXJQEzXPQQy2dt7qK9\r\n
in which case you'd be calling
common.set_access_token('xy15.WKXJQEzXPQQy2dt7qK9')
[*] there are many ways to find out if you're on dev_appserver, e.g. see GAE: python code to check if i'm on dev_appserver or deployed to appspot.
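If it helps, a quick way to fish that token out of the debug output is something like the line below (the exact formatting of gsutil's debug output varies between versions, so treat this as a sketch):
gsutil -d ls 2>&1 | grep Bearer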
I have to create a jar with a Java application that fulfills the following requirements:
There is XML data packed in the jar which is read the first time the application is started. With every subsequent start of the application, the data is loaded from a dynamically created binary file.
A customer should not be able to reset the application to its initial state (e.g. if the binary file gets deleted for some reason, the application should fail to run again and give an error message).
All this should not depend on the OS it is running on (which means, e.g., setting a registry entry on Windows won't do the job).
Summarizing: I want to prevent a once-started application from being reset, in order to limit illegitimate reuse of the application.
Now to my ideas on how to accomplish that:
Delete the XML from the jar at first run (so far I have come to the understanding that it is not possible to let an application edit its own jar; is that true?)
Set a variable/property/setting/whatever in the jar permanently at the first run (is that possible?)
Any suggestions/ideas on how to accomplish that?
update:
I did not find a solution for this exact problem, but I found a simple workaround: along with my software I ship a certain file which gets changed after the program is started the first time. Of course, if someone keeps a copy of the original file, they can always replace it and start over.
Any user able to delete the binary file will, with enough time, also be able to revert any changes made in the jar. When the only existing part of the application is in the hands of the user, you won't be able to prevent changes to it.
You can easily just store a backup of the original jar, make a copy, use that for one run, delete it, copy the original jar back, and so on. You would need some sort of mechanism outside the user's machine, like an activation server: the user gets one code to activate an account and can't use that code again.
I am trying to read the WAL files of PostgreSQL. Can anybody tell me how to do that, and what type of binary encoding is used in the WAL binary files?
Use pg_xlogdump to read WAL files (this contrib program was added in PG 9.3; see the PG 9.3 release docs).
This utility can only be run by the user who installed the server,
because it requires read-only access to the data directory.
pg_xlogdump --help
pg_xlogdump decodes and displays PostgreSQL transaction logs for debugging.
Usage:
pg_xlogdump [OPTION]... [STARTSEG [ENDSEG]]
Options:
-b, --bkp-details output detailed information about backup blocks
-e, --end=RECPTR stop reading at log position RECPTR
-f, --follow keep retrying after reaching end of WAL
-n, --limit=N number of records to display
-p, --path=PATH directory in which to find log segment files
(default: ./pg_xlog)
-r, --rmgr=RMGR only show records generated by resource manager RMGR
use --rmgr=list to list valid resource manager names
-s, --start=RECPTR start reading at log position RECPTR
-t, --timeline=TLI timeline from which to read log records
(default: 1 or the value used in STARTSEG)
-V, --version output version information, then exit
-x, --xid=XID only show records with TransactionId XID
-z, --stats[=record] show statistics instead of records
(optionally, show per-record statistics)
-?, --help show this help, then exit
For example: pg_xlogdump 000000010000005A00000096
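You can also narrow the output down; for instance, something along these lines shows only the first 20 records from the Heap resource manager (the data-directory path is just an example):
pg_xlogdump -n 20 -r Heap /var/lib/postgresql/9.3/main/pg_xlog/000000010000005A00000096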
See the PostgreSQL documentation or this blog post for more details.
You can't really do that. It's easy enough to read the bytes from a WAL archive, but it sounds like you want to make sense of them. You will struggle with that.
WAL archives are a binary log showing what blocks changed in the database. They aren't SQL-level or row-level change logs, so you cannot just examine them to get a list of changed rows.
You probably want to investigate trigger-based replication or audit triggers instead.
The format is complicated and low-level as other answers imply.
However, if you have time to learn and understand the data that is stored, and know how to build the binary from source, there is a published reader for versions 8.3 to 9.2: xlogdump
The usual way to build it is as a contrib (Postgres add-on):
First get the source for the version of Postgres that you wish to view WAL data for.
Run ./configure and make on it, but there is no need to install.
Then copy the xlogdump folder to the contrib folder (a git clone in that folder works fine)
Run make for xlogdump - it should find the parent postgres structure and build the binary
You can copy the binary to your path, or use it in situ. Be warned, there is still a lot of internal knowledge of Postgres required before you will understand what you are looking at. If you have the database available, it is possible to attempt to reverse out SQL statements from the log.
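As a rough sketch of the build-and-run steps above (the Postgres version, the clone URL and the WAL path are illustrative; use the source tree that matches your server):
# get matching Postgres source, configure and build it (no install needed)
cd postgresql-9.2.4
./configure
make
# drop xlogdump into contrib and build it against that tree
cd contrib
git clone https://github.com/snaga/xlogdump
cd xlogdump
make
# run it against a WAL segment
./xlogdump /path/to/pg_xlog/000000010000005A00000096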
To perform this in Java, you could either wrap the executable, link the C library as a hybrid, or figure out how to do the parsing you need from the source. Any of those options is likely to involve a lot of detailed work.
The WAL files are in the same format as the actual database files themselves, which depends on the exact version of PostgreSQL that you are using. You will probably need to examine the source code for your particular version to determine the exact format.