ElasticSearch single node cluster runs out of memory - java

I have a single node ElasticSearch cluster that has one index (I know, my bad) where I inserted 2B documents.
I did not know it was a best practice to split data across multiple indices, and mine grew to 400GB before it crashed.
I tried splitting my index with the split index API (https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-split-index.html), but I keep getting java.lang.OutOfMemoryError no matter what I do. I have maxed out the physical memory, and the threads just get stuck in the _split.
The source files were deleted by Logstash once they were successfully indexed, so reinserting the data is not an option.
Any suggestions?

Add swap space or increase the RAM of that server.
I'm still confused as to where you got 2 Billion documents :/

Never use swap on ES machines.
Use https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-recovery.html to check the status of the split.
Also, have you changed the max heap option in the JVM config for ES? https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html
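A minimal sketch of both checks (the cat recovery endpoint and the heap flags are the documented ones from the links above; 8g is only an illustrative placeholder, commonly sized to no more than about half the machine's RAM):

    # watch the progress of the split's shard recovery
    curl "localhost:9200/_cat/recovery?v&active_only=true"

    # jvm.options: give min and max heap the same value
    -Xms8g
    -Xmx8g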

Related

Elasticsearch 5 stuck reading from disk

I have a cluster of 6 nodes running ES 5.4 with 4B small documents indexed so far.
The documents are organized in ~9K indices, for a total of 2TB. Index sizes vary from a few KB to hundreds of GB, and they are sharded in order to keep each shard under 20GB.
Cluster health query responds with:
{
  "cluster_name": "##########",
  "status": "green",
  "timed_out": false,
  "number_of_nodes": 6,
  "number_of_data_nodes": 6,
  "active_primary_shards": 9014,
  "active_shards": 9034,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 0,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks": 0,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 0,
  "active_shards_percent_as_number": 100
}
Before any search traffic hits the cluster, it is stable and handles a bulk index request every second containing anywhere from ten to a few thousand documents, with no problems.
Everything is fine until I redirect some traffic to this cluster.
As soon as it starts responding to queries, the majority of the servers start reading from disk at 250 MB/s, making the cluster unresponsive.
What is strange is that I cloned this ES configuration on AWS (same hardware, same Linux kernel, but a different Linux version) and there I have no problem.
NB: 40MB/s of disk read is what I have always seen on servers that are serving traffic.
The relevant Elasticsearch 5 configuration is:
-Xms12g -Xmx12g in jvm.options
I also tested it with the following settings, but without success:
bootstrap.memory_lock: true
MAX_OPEN_FILES=1000000
Each server has 16 CPUs and 32GB of RAM; some run Debian Jessie 8.7, others 8.6; all have kernel 3.16.0-4-amd64.
I checked the query cache on each node with localhost:9200/_nodes/stats/indices/query_cache?pretty&human, and all the servers have similar statistics: cache size, hits, misses and evictions.
It doesn't seem to be a warm-up issue, since I never see this behavior on the cloned AWS cluster, and also because it never ends.
I can't find useful information under /var/log/elasticsearch/*.
Am I doing anything wrong?
What should I change in order to solve this problem?
Thanks!
You probably need to reduce the number of threads for searching.
Try going with 2x the number of processors. In elasticsearch.yml (example below):
threadpool.search.size: <size>
Also, that sounds like too many shards for a 6-node cluster. If possible, I would try reducing that.
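For the 16-CPU machines in the question, a hedged sketch of that setting (note that in 5.x the setting family was renamed from threadpool.* to thread_pool.*, so check the spelling against your version's docs; 32 is simply 2x the 16 CPUs):

    # elasticsearch.yml
    thread_pool.search.size: 32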
"servers start reading from disk at 250 MB/s making the cluster unresponsive": this can also be related to the maximum content of an HTTP request, which defaults to 100mb (if set to greater than Integer.MAX_VALUE, it is reset to 100mb).
Requests that hit that limit can make the node unresponsive, and you might see log entries related to this. Check the maximum size of the requests you send against the indices.
Review the Elasticsearch HTTP settings.
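If request size is the suspect, the setting being quoted is http.max_content_length in elasticsearch.yml (the 200mb value is only an illustrative placeholder):

    # elasticsearch.yml
    http.max_content_length: 200mb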
A few things:
5.x has been EOL for years now; please upgrade as a matter of urgency.
You are heavily oversharded.
For the second point, you need to either:
upgrade, since the memory management in 7.x is far superior and copes with that many shards much better;
reduce your shard count by reindexing (see the sketch below); or
add more nodes to deal with the load.
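A hedged sketch of the reindex route, using a hypothetical oversharded source index logs-old and a consolidated target logs-new (the names, shard counts and replica count are placeholders, not recommendations):

    # create the target with far fewer primary shards
    curl -XPUT "localhost:9200/logs-new" -H 'Content-Type: application/json' -d '
    { "settings": { "number_of_shards": 1, "number_of_replicas": 1 } }'

    # copy the documents across
    curl -XPOST "localhost:9200/_reindex" -H 'Content-Type: application/json' -d '
    { "source": { "index": "logs-old" }, "dest": { "index": "logs-new" } }'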

Apache NiFi - OutOfMemory Error: GC overhead limit exceeded on SplitText processor

I am trying to use NiFi to process large CSV files (potentially billions of records each) using HDF 1.2. I've implemented my flow, and everything is working fine for small files.
The problem is that if I try to push the file size to 100MB (1M records), I get a java.lang.OutOfMemoryError: GC overhead limit exceeded from the SplitText processor responsible for splitting the file into single records. I've searched for that error, and it basically means that the garbage collector runs for too long without reclaiming much heap space. I expect this means that too many flow files are being generated too fast.
How can I solve this? I've tried changing NiFi's configuration for the max heap space and other memory-related properties, but nothing seems to work.
Right now I added an intermediate SplitText with a line count of 1K, and that lets me avoid the error, but I don't see this as a solid solution for when the incoming files become much bigger than that; I'm afraid I will get the same behavior from the processor.
Any suggestion is welcome! Thank you
The reason for the error is that when splitting 1M records with a line count of 1, you are creating 1M flow files, which equates to 1M Java objects. Overall, the approach of using two SplitText processors is common and avoids creating all of the objects at the same time. You could probably use an even larger split size on the first split, maybe 10k. For a billion records I am wondering if a third level would make sense: split from 1B to maybe 10M, then 10M to 10K, then 10K to 1, but I would have to play with it.
Some additional things to consider are increasing the default heap size from 512MB (see the bootstrap.conf sketch below), which you may have already done, and also figuring out whether you really need to split down to 1 line. It is hard to say without knowing anything else about the flow, but in a lot of cases, if you want to deliver each line somewhere, you could potentially have a processor that reads in a large delimited file and streams each line to the destination. For example, this is how PutKafka and PutSplunk work: they can take a file with 1M lines and stream each line to the destination.
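For the heap-size part, a minimal sketch of the relevant lines in NiFi's conf/bootstrap.conf (the java.arg numbering follows the stock file; 4g is only an illustrative placeholder, size it to your host):

    # conf/bootstrap.conf
    java.arg.2=-Xms4g
    java.arg.3=-Xmx4g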
I had a similar error while using the GetMongo processor in Apache NiFi.
I changed my configurations to:
Limit: 100
Batch Size: 10
Then the error disappeared.

Elasticsearch improve query performance

I'm trying to improve query performance. It takes an average of about 3 seconds for simple queries which don't even touch a nested document, and it's sometimes longer.
curl "http://searchbox:9200/global/user/_search?n=0&sort=influence:asc&q=user.name:Bill%20Smith"
Even without the sort it takes seconds. Here are the details of the cluster:
1.4TB index size.
210m documents that aren't nested (about 10KB each)
500m documents in total. (nested documents are small: 2-5 fields).
About 128 segments per node.
3 nodes, m2.4xlarge (-Xmx set to 40g, machine memory is 60g)
3 shards.
Index is on Amazon EBS volumes.
Replication 0 (have tried replication 2 with only little improvement)
I don't see any noticeable spikes in CPU/memory etc. Any ideas how this could be improved?
Garry's points about heap space are true, but it's probably not heap space that's the issue here.
With your current configuration, you'll have less than 60GB of page cache available, for a 1.5 TB index. With less than 4.2% of your index in page cache, there's a high probability you'll be needing to hit disk for most of your searches.
You probably want to add more memory to your cluster, and you'll want to think carefully about the number of shards as well. Just sticking to the default can cause skewed distribution. If you had five shards in this case, you'd have two machines with 40% of the data each, and a third with just 20%. In either case, you'll always be waiting for the slowest machine or disk when doing distributed searches. This article on Elasticsearch in Production goes a bit more in depth on determining the right amount of memory.
For this exact search example, you can probably use filters, though. You're sorting, thus ignoring the score calculated by the query. With a filter, it'll be cached after the first run, and subsequent searches will be quick.
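A hedged sketch of that idea using the current bool/filter syntax (on the older 0.90/1.x releases this cluster likely runs, the equivalent is a filtered query, so treat the exact DSL as an assumption):

    curl -XGET "http://searchbox:9200/global/user/_search" -d '
    {
      "query": {
        "bool": {
          "filter": { "match": { "user.name": "Bill Smith" } }
        }
      },
      "sort": [ { "influence": "asc" } ]
    }'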
Ok, a few things here:
Decrease your heap size. You have over 32GB of heap dedicated to each Elasticsearch instance, and Java can't use compressed pointers above roughly 32GB. Drop your nodes to 32GB of heap or less and, if you need to, spin up another instance.
If spinning up another instance isn't an option and 32GB per node on 3 nodes isn't enough to run ES, then you'll have to bump your heap memory to somewhere over 48GB!
I would probably stick with the default settings for shards and replicas: 5 shards, 1 replica. However, you can tweak the shard settings to suit. What I would do is reindex the data into several indices with different settings: the first index with only 1 shard, the second with 2 shards, and so on up to 10 shards (see the sketch below). Query each index and see which performs best. If the 10-shard index is the best performing one, keep increasing the shard count until you get worse performance; then you've hit your shard limit.
One thing to think about, though: sharding might increase search performance, but it also has a massive effect on indexing time. The more shards, the longer it takes to index a document...
You also have quite a bit of data stored, so maybe you should look at Custom Routing too.
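A hedged sketch of the shard-count experiment and of index-time routing (the index names, shard counts and routing key are hypothetical placeholders):

    # test indices with different primary shard counts
    curl -XPUT "http://searchbox:9200/global_test_1" -d '{ "settings": { "number_of_shards": 1 } }'
    curl -XPUT "http://searchbox:9200/global_test_2" -d '{ "settings": { "number_of_shards": 2 } }'

    # index a document with a custom routing value so related docs land on the same shard
    curl -XPUT "http://searchbox:9200/global_test_2/user/1?routing=bill.smith" -d '{ "name": "Bill Smith" }'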

Performance tuning: detecting and interpreting page faults

I am trying to tune one of my Java applications.
I am using a Java profiler and got some reports from it.
I saw that the number of page faults for the application ranges from 30,000 to 35,000.
How can I decide whether this number is too high or normal?
I get the same numbers during the initial minute and after half an hour as well.
My RAM is 2 GB and the application runs a single thread.
The thread only tries to read messages from a queue every 3 seconds, and the queue is empty.
Since no processing is being done, I think that page faults should not occur at all.
Please guide me here.
When you start your JVM, it reserves the maximum heap size as a contiguous block. However, this virtual memory is only turned into main memory as you access those pages, i.e. every time your heap grows by 4 KB, you get one page fault. You will also get page faults from thread stacks in the same manner.
Your 35K page faults suggest you are using about 140 MB of heap (35,000 faults x 4 KB per page is roughly 140 MB).
BTW You can buy 8 GB for £25. You might consider an upgrade.
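If you want to watch those fault counters directly rather than through the profiler, the standard Linux ps fields can confirm what the JVM is doing (replace <pid> with your process id):

    # minor (min_flt) and major (maj_flt) page faults plus resident size
    ps -o pid,min_flt,maj_flt,rss -p <pid>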
What's your JVM? If it's HotSpot, you can use JVM options like -XX:LargePageSizeInBytes or -XX:+UseMPSS to force the desired page size and reduce the number of page faults. I think there should be similar options for other JVMs too.
Take a look at this:
http://www.oracle.com/technetwork/java/javase/tech/vmoptions-jsp-140102.html
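A hedged example of how such flags are passed on the HotSpot command line (UseMPSS and LargePageSizeInBytes are documented for Solaris; on Linux, -XX:+UseLargePages is the usual switch and large pages must also be enabled in the OS; myapp.jar is just a placeholder):

    # Linux
    java -XX:+UseLargePages -jar myapp.jar

    # Solaris, with an explicit preferred page size
    java -XX:+UseMPSS -XX:LargePageSizeInBytes=4m -jar myapp.jar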

How can I avoid OutOfMemoryErrors when updating documents in a Lucene index?

I am trying to refresh a Lucene index incrementally, that is, updating documents that have changed and keeping the other, unchanged documents as they are.
For updating changed documents, I am deleting those documents using IndexWriter.deleteDocuments(Query) and then adding updated documents using IndexWriter.addDocument().
The Query object used in the IndexWriter.deleteDocuments contains approx 12-15 terms. In the process of refreshing the index I also sometimes need to do a FULL refresh by deleting all the documents using IndexWriter.deleteDocuments and then adding the new documents.
The problem is that when I call IndexWriter.flush() after, say, approx 100,000 document deletions, it takes a long time to execute and throws an OutOfMemoryError. If I disable flushing, the indexing goes fast up to approx 2,000,000 document deletions and then throws an OutOfMemoryError. I have tried setting IndexWriter.setRAMBufferSizeMB to 500 to avoid the out-of-memory error, but with no luck. The index size is 1.8 GB.
First: increasing the RAM buffer is not your solution. As far as I understand it, it is a cache, and I would rather argue that it is increasing your problem. An OutOfMemoryError is a JVM problem, not a problem of Lucene. You can set the RAM buffer to 1TB, but if your VM does not have enough memory, you have a problem anyway. So you can do two things: increase JVM memory or decrease consumption.
Second: have you already considered increasing the heap memory settings? The reason why flushing takes forever is that the system is doing a lot of garbage collection shortly before it runs out of memory; this is a typical symptom. You can check that using a tool like jvisualvm. You need to install the GC details plugin first, but then you can select and monitor your OutOfMemory app. Once you have learned about your memory issue, you can increase the maximum heap space like this:
java -Xmx512M MyLuceneApp (or however you start your Lucene application)
But, again, I would use tools to check your memory consumption profile and garbage collection behavior first. Your goal should be to avoid running low on memory, because that causes garbage collection to slow your application down to a crawl.
Third: if you do increase your heap, you have to be sure that you have enough native memory as well, because if you do not (check with tools like top on Linux), your system will start swapping to disk, and that hits Lucene performance hard too. Lucene is optimized for sequential disk reads, and if your system starts to swap, your hard disk will do a lot of seeking, which is two orders of magnitude slower than sequential reading. So it will be even worse.
Fourth: if you do not have enough memory, consider deleting in batches. After 1,000 or 10,000 documents, do a flush, then repeat. The reason for this OutOfMemoryError is that Lucene has to keep everything in memory until you flush, so it is a good idea anyway not to let unflushed batches grow too big, to avoid problems in the future.
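A minimal sketch of that batching idea in Java (Lucene 3.x-style API; the class name, deleteQueries and the batch size are hypothetical placeholders for your own delete criteria):

    import java.io.IOException;
    import java.util.List;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.search.Query;

    // Sketch: delete in batches and commit periodically so pending deletes
    // are flushed to disk instead of piling up on the heap.
    class BatchedDeleter {
        private static final int BATCH_SIZE = 10000; // hypothetical; tune to your heap

        static void deleteInBatches(IndexWriter writer, List<Query> deleteQueries) throws IOException {
            int pending = 0;
            for (Query q : deleteQueries) {
                writer.deleteDocuments(q);   // marks matching docs as deleted
                if (++pending >= BATCH_SIZE) {
                    writer.commit();         // flush the pending deletes
                    pending = 0;
                }
            }
            writer.commit();                 // flush the final partial batch
        }
    }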
On the (rare) occasion that I want to wipe all docs from my Lucene index, I find it much more efficient to close the IndexWriter, delete the index files directly and then basically start a fresh index. The operation takes very little time and is guaranteed to leave your index in a pristine (if somewhat empty) state.
Try using a smaller RAMBufferSizeMB for your IndexWriter.
IndexWriter flushes when the buffer is full (or when the number of documents reaches a certain level). By setting the buffer size to a large number, you are implicitly postponing the flush, which can result in too many documents being held in memory.
