How to configure Zabbix to receive data? - java

I have created a host named dev002-All-Series, added a trapper item to it with the key test.ping.count, and added the host and IP address to the allowed hosts. Then I try to send data with the metrics-zabbix library, with code like this:
import java.util.concurrent.TimeUnit;
import com.codahale.metrics.JvmAttributeGaugeSet;
import com.codahale.metrics.Meter;
import com.codahale.metrics.MetricRegistry;
import io.github.hengyunabc.metrics.ZabbixReporter;
import io.github.hengyunabc.zabbix.sender.ZabbixSender;

private MetricRegistry metricRegistry;
private Meter pingMeter;

private void init() {
    metricRegistry = new MetricRegistry();
    metricRegistry.register("jvm.attribute.guage.set", new JvmAttributeGaugeSet());
    ZabbixSender zabbixSender = new ZabbixSender("zabbixHost", 10051); // 10051 = default trapper port
    ZabbixReporter zabbixReporter = ZabbixReporter.forRegistry(metricRegistry)
            .hostName(HostUtil.getHostName()) // must match the "Host name" field in Zabbix
            .prefix("test.")
            .build(zabbixSender);
    // FIXME: use the right time unit and amount
    zabbixReporter.start(10, TimeUnit.SECONDS);
    pingMeter = metricRegistry.meter("ping");
}
Note that the metrics-zabbix library surrounds the ping meter with the test. prefix and a .count postfix, so the resulting item key is test.ping.count.
So why does the response say that sending my data failed? The response is:
{"response":"success","info":"processed: 0; failed: 8; total: 8; seconds spent: 0.000013"}
What else needs to be configured in Zabbix to receive the data? Also, is there a way to see the reason why Zabbix does not accept the data - does it log such requests?

Possible popular reasons:
incorrect host name; make sure to match the "Host name" field (not "Visible name", not IP, not DNS...); note that it is case sensitive
incorrect item key; make sure it matches the one in the item key properties exactly - also case sensitive
incorrect allowed hosts field contents, or data coming from a different host than expected - check that field for syntax errors, remember that in older Zabbix versions spaces are not supported in that field, and tcpdump your incoming connection - does it arrive from the host you expected?
host/item not in the configuration cache - if you just added or changed host/item, it might not be in the config cache yet. The config cache is updated every 60 seconds by default
if the host is monitored by a Zabbix proxy, you must send data to that proxy
In general, forget your application for a moment and test with zabbix_sender. If that works, check what your application is doing differently. If that fails, check all the items above.
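For example, a test send with the values from the question (run it from an allowed host; -vv prints the server's response for each value):
zabbix_sender -z zabbixHost -p 10051 -s "dev002-All-Series" -k test.ping.count -o 1 -vv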
As for logging, currently Zabbix does not log failures or their reasons.

I have found the problem. It turns out the metrics-zabbix library (version 0.0.1) does not convert the data correctly: it sends the clock value as a long in milliseconds, while Zabbix needs to receive it in seconds. After converting it manually I got:
{"response":"success","info":"processed: 2; failed: 0; total: 2; seconds spent: 0.000016"}
It is quite funny that even though 2 elements were processed successfully, Zabbix still did not show any values on the graph.
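For reference, a minimal sketch of the manual conversion, assuming you build the data objects yourself through the zabbix-sender API (the builder methods follow that library's README; host and key are the ones from the question):

import io.github.hengyunabc.zabbix.sender.DataObject;

// Zabbix expects the clock as Unix time in seconds, not milliseconds.
long clockSeconds = System.currentTimeMillis() / 1000L;

DataObject dataObject = DataObject.builder()
        .host("dev002-All-Series") // "Host name" field of the Zabbix host
        .key("test.ping.count")    // trapper item key
        .value("1")
        .clock(clockSeconds)
        .build();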
UPDATED
To get everything working you should check not only the clock in the data objects, but the clock in the request too. By default metrics-zabbix uses zabbix-sender version 0.0.1, which sends clocks in milliseconds. To make metrics-zabbix work with Zabbix 3.0, which expects the clock in seconds, you should bump the zabbix-sender version to 0.0.3. Here is a Maven sample:
<dependency>
    <groupId>io.github.hengyunabc</groupId>
    <artifactId>metrics-zabbix</artifactId>
    <version>0.0.1</version>
</dependency>
<dependency>
    <groupId>io.github.hengyunabc</groupId>
    <artifactId>zabbix-sender</artifactId>
    <version>0.0.3</version>
</dependency>
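Declaring zabbix-sender directly works because of Maven's "nearest wins" dependency mediation: the direct 0.0.3 declaration overrides the 0.0.1 version pulled in transitively by metrics-zabbix.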

PDF File Transfer from server to client: "java.io.IOException: An existing connection was forcibly closed by the remote host" [duplicate]

I am working with a commercial application which is throwing a SocketException with the message,
An existing connection was forcibly closed by the remote host
This happens with a socket connection between client and server. The connection is alive and well, and heaps of data is being transferred, but it then becomes disconnected out of nowhere.
Has anybody seen this before? What could the causes be? I can kind of guess a few causes, but is there any way to add more to this code to work out what the cause could be?
Any comments / ideas are welcome.
... The latest ...
I have some logging from some .NET tracing,
System.Net.Sockets Verbose: 0 : [8188] Socket#30180123::Send() DateTime=2010-04-07T20:49:48.6317500Z
System.Net.Sockets Error: 0 : [8188] Exception in the Socket#30180123::Send - An existing connection was forcibly closed by the remote host DateTime=2010-04-07T20:49:48.6317500Z
System.Net.Sockets Verbose: 0 : [8188] Exiting Socket#30180123::Send() -> 0#0
Based on other parts of the logging, I have seen that 0#0 means a packet of 0 bytes in length is being sent. But what does that really mean?
One of two possibilities is occurring, and I am not sure which:
The connection is being closed, but data is then being written to the socket, thus creating the exception above. The 0#0 simply means that nothing was sent because the socket was already closed.
The connection is still open, and a packet of zero bytes is being sent (i.e. the code has a bug) and the 0#0 means that a packet of zero bytes is trying to be sent.
What do you reckon? It might be inconclusive I guess, but perhaps someone else has seen this kind of thing?
This generally means that the remote side closed the connection (usually by sending a TCP/IP RST packet). If you're working with a third-party application, the likely causes are:
You are sending malformed data to the application (which could include sending an HTTPS request to an HTTP server)
The network link between the client and server is going down for some reason
You have triggered a bug in the third-party application that caused it to crash
The third-party application has exhausted system resources
It's likely that the first case is what's happening.
You can fire up Wireshark to see exactly what is happening on the wire to narrow down the problem.
Without more specific information, it's unlikely that anyone here can really help you much.
Using TLS 1.2 solved this error.
You can force your application to use TLS 1.2 with this (make sure to execute it before calling your service):
ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12
Another solution:
Enable strong cryptography on your local machine or server in order to use TLS 1.2, because by default it is disabled, so only TLS 1.0 is used.
To enable strong cryptography, execute these commands in PowerShell with admin privileges:
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Wow6432Node\Microsoft\.NetFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value '1' -Type DWord
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\.NetFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value '1' -Type DWord
You need to reboot your computer for these changes to take effect.
This is not a bug in your code. It is coming from .Net's Socket implementation. If you use the overloaded implementation of EndReceive as below you will not get this exception.
SocketError errorCode;
// this overload reports the error through the out parameter instead of throwing
int nBytesRec = socket.EndReceive(ar, out errorCode);
if (errorCode != SocketError.Success)
{
    nBytesRec = 0;
}
Had the same bug. It actually worked when the traffic was sent through a proxy (Fiddler in my case). Updating the .NET Framework from 4.5.2 to >= 4.6 made everything work fine. The actual request was:
new WebClient().DownloadData("URL");
The exception was:
SocketException: An existing connection was forcibly closed by the
remote host
Simple solution for this common annoying issue:
Just go to your ".context.cs" file (located under ".context.tt", which is located under your "*.edmx" file).
Then, add this line to your constructor:
public DBEntities()
    : base("name=DBEntities")
{
    this.Configuration.ProxyCreationEnabled = false; // ADD THIS LINE!
}
I got this exception because of a circular reference in an entity. The entity looks like this:
public class Catalog
{
    public int Id { get; set; }
    public int ParentId { get; set; }
    public Catalog Parent { get; set; }
    public ICollection<Catalog> ChildCatalogs { get; set; }
}
I added [IgnoreDataMemberAttribute] to the Parent property, and that solved the problem.
If Running In A .Net 4.5.2 Service
For me the issue was compounded because the call was running in a .NET 4.5.2 service. I followed #willmaz's suggestion but got a new error.
Running the service with logging turned on, I saw that the handshake with the target site would initiate OK (and send the bearer token), but on the following step to process the POST call, it would seem to drop the auth token and the site would reply with Unauthorized.
Solution
It turned out that the service pool credentials did not have rights to change TLS (?), and when I put my local admin account into the pool, it all worked.
I had the same issue and managed to resolve it eventually. In my case, the port that the client sends the request to did not have an SSL cert bound to it. So I fixed the issue by binding an SSL cert to the port on the server side. Once that was done, the exception went away.
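For reference, one way to add such a binding on Windows is netsh (a sketch; the port, certificate thumbprint, and appid below are placeholders):
netsh http add sslcert ipport=0.0.0.0:8443 certhash=<certificate-thumbprint> appid={00112233-4455-6677-8899-AABBCCDDEEFF}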
For anyone getting this exception while reading data from the stream, this may help. I was getting this exception when reading the HttpResponseMessage in a loop like this:
using (var remoteStream = await response.Content.ReadAsStreamAsync())
using (var content = File.Create(DownloadPath))
{
    var buffer = new byte[1024];
    int read;
    while ((read = await remoteStream.ReadAsync(buffer, 0, buffer.Length)) != 0)
    {
        await content.WriteAsync(buffer, 0, read);
        await content.FlushAsync();
    }
}
After some time I found out the culprit was the buffer size, which was too small and didn't play well with my weak Azure instance. What helped was to change the code to:
using (Stream remoteStream = await response.Content.ReadAsStreamAsync())
using (FileStream content = File.Create(DownloadPath))
{
    await remoteStream.CopyToAsync(content);
}
The CopyToAsync() method has a default buffer size of 81920 bytes. The bigger buffer sped up the process and the errors stopped immediately, most likely because the overall download speed increased. But why would download speed matter in preventing this error?
It is possible that you get disconnected from the server because the download speed drops below the minimum threshold the server is configured to allow. For example, if the application you are downloading the file from is hosted on IIS, it can be a problem with the http.sys configuration:
"Http.sys is the http protocol stack that IIS uses to perform http communication with clients. It has a timer called MinBytesPerSecond that is responsible for killing a connection if its transfer rate drops below some kb/sec threshold. By default, that threshold is set to 240 kb/sec."
The issue is described in this old blog post from the TFS development team and concerns IIS specifically, but it may point you in the right direction. It also mentions an old bug related to this http.sys attribute: link
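If http.sys turns out to be the culprit, the threshold can be tuned in IIS's applicationHost.config (a sketch; setting it to 0 disables the minimum-transfer-rate check entirely):
<system.applicationHost>
    <webLimits minBytesPerSecond="0" />
</system.applicationHost>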
In case you are using Azure app services and increasing the buffer size does not eliminate the problem, try to scale up your machine as well. You will be allocated more resources including connection bandwidth.
I got the same issue while using .NET Framework 4.5. However, when I updated the .NET version to 4.7.2, the connection issue was resolved. Maybe this is due to a SecurityProtocol support issue.
For me, it was because the app server I was trying to send email from was not added to our company's SMTP server's allowed list.
I just had to put in SMTP access request for that app server.
This is how it was added by the infrastructure team (I don't know how to do these steps myself but this is what they said they did):
1. Log into active L.B.
2. Select: Local Traffic > iRules > Data Group List
3. Select the appropriate Data Group
4. Enter the app server's IP address
5. Select: Add
6. Select: Update
7. Sync config changes
Yet another possibility for this error to occur is if you tried to connect to a third-party server with invalid credentials too many times and a system like Fail2ban is blocking your IP address.
I was trying to connect to the MQTT broker using the Go client;
the broker address was given as address + port, or tcp://address:port.
Example: ❌
mqtt://test.mosquitto.org
which indicates that you wish to establish an unencrypted connection.
To request MQTT over TLS use one of ssl, tls, mqtts, mqtt+ssl or tcps.
Example: ✅
mqtts://test.mosquitto.org
In my case, enabling the IIS server and then restarting it fixed the issue; check again after the restart.
We are using a Spring Boot service. Our RestTemplate code looks like the following:
@Bean
public RestTemplate restTemplate(final RestTemplateBuilder builder) {
    return builder.requestFactory(() -> {
        // pool of up to 50 idle connections, kept alive for 30 seconds
        final ConnectionPool okHttpConnectionPool =
                new ConnectionPool(50, 30, TimeUnit.SECONDS);
        final OkHttpClient okHttpClient =
                new OkHttpClient.Builder().connectionPool(okHttpConnectionPool)
                        // .connectTimeout(30, TimeUnit.SECONDS)
                        .retryOnConnectionFailure(false).build();
        return new OkHttp3ClientHttpRequestFactory(okHttpClient);
    }).build();
}
All our calls were failing after the ReadTimeout set for the RestTemplate. We increased the timeout, and our issue was resolved.
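For reference, a sketch of where that read timeout lives on the OkHttp builder (the 60-second value is illustrative, not the value we settled on):
final OkHttpClient okHttpClient =
        new OkHttpClient.Builder()
                .connectionPool(okHttpConnectionPool)
                .readTimeout(60, TimeUnit.SECONDS) // raise this if calls fail at the read timeout
                .retryOnConnectionFailure(false)
                .build();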
This error occurred in my application with the CIP protocol whenever I didn't send or receive data within 10 seconds.
It was caused by the use of the forward-open method. You can avoid it by working with another method, or by using an update rate of less than 10 seconds to keep your forward-open connection alive.

Java WebSocket message limit

I'm trying to create communication between a simple Java app (using the java.net.http.WebSocket class) and a remote Chrome instance started with google-chrome --remote-debugging-port=9222 --user-data-dir=.
Sending and receiving small messages works as expected, but there is an issue with bigger messages, over 16 kB.
Here is part of java source:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.WebSocket;
import java.time.Duration;

var uri = new URI("ws://127.0.0.1:9222/devtools/page/C0D7B4DBC53FB39F7A4BE51DA79E96BB");
// create the websocket client (simpleListener is a plain WebSocket.Listener, omitted here)
WebSocket ws = HttpClient
        .newHttpClient()
        .newWebSocketBuilder()
        .connectTimeout(Duration.ofSeconds(30))
        .buildAsync(uri, simpleListener)
        .join();
// session id attached to the Chrome tab
String sessionId = "...";
// send message
String message = "{\"id\":1,\"method\":\"Runtime.evaluate\",\"params\":{\"expression\":\"document.body.style.backgroundColor = 'blue';\",\"returnByValue\":true,\"awaitPromise\":true,\"userGesture\":true},\"sessionId\":\"" + sessionId + "\"}";
// this works
ws.sendText(message, true);
// generate a big string containing over 18k chars for testing purposes
String bigMessage = "{\"id\":2,\"method\":\"Runtime.evaluate\",\"params\":{\"expression\":\"[" + ("1,".repeat(9000)) + "1]\",\"returnByValue\":true,\"awaitPromise\":true,\"userGesture\":true},\"sessionId\":\"" + sessionId + "\"}";
// this doesn't work
ws.sendText(bigMessage, true);
Here is stack:
java.net.SocketException: Connection reset
at java.base/sun.nio.ch.SocketChannelImpl.throwConnectionReset(SocketChannelImpl.java:345)
at java.base/sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:376)
at java.net.http/jdk.internal.net.http.SocketTube.readAvailable(SocketTube.java:1153)
at java.net.http/jdk.internal.net.http.SocketTube$InternalReadPublisher$InternalReadSubscription.read(SocketTube.java:821)
at java.net.http/jdk.internal.net.http.SocketTube$SocketFlowTask.run(SocketTube.java:175)
at java.net.http/jdk.internal.net.http.common.SequentialScheduler$SchedulableTask.run(SequentialScheduler.java:198)
...
I've tried basically the same thing using puppeteer (a Node.js library) and it works as expected.
I can't find any resources online about this issue.
Is there anything I'm missing in my example?
Here is url to simple example:
https://github.com/zeljic/websocket-devtools-protocol
Based on what I've seen so far, my best guess would be that Chrome DevTools does not process fragmented Text messages on that exposed webSocketDebuggerUrl endpoint. Whether Chrome DevTools can be configured to do so is another question. I must note, however, that RFC 6455 (The WebSocket Protocol) mandates it:
Clients and servers MUST support receiving both fragmented and unfragmented messages.
There's one workaround I can see here. Keep in mind that this is unsupported and may change unexpectedly in the future. When running your client, specify the following system property on the command line: -Djdk.httpclient.websocket.intermediateBufferSize=1048576 (or pick any other suitable size). As long as you keep sending your messages with true passed as the boolean last argument to the send* methods, java.net.http.WebSocket will send messages unfragmented, in a single WebSocket frame.
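For example (the jar name is a placeholder):
java -Djdk.httpclient.websocket.intermediateBufferSize=1048576 -jar your-client.jar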
Well, I had a similar issue when sending a big string over WebSockets in Java with a Tomcat server.
There can be a payload limit for sending or receiving in the WebSocket server.
Check out org.apache.tomcat.websocket.textBufferSize in Tomcat's documentation. By default it is 8192 bytes; try increasing the size.
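Alternatively, the standard javax.websocket API lets you raise the incoming buffer per session; a minimal sketch (the endpoint path and buffer size here are made up):

import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/big-messages")
public class BigMessageEndpoint {

    @OnOpen
    public void onOpen(Session session) {
        // Raise the incoming text buffer above Tomcat's 8192-byte default.
        session.setMaxTextMessageBufferSize(64 * 1024);
    }
}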

Entity & Entity Properties. Database design for effective searching

For the last two days I've been searching for a suitable solution to the problem described below.
In my standalone notification-service module I have an abstract Message entity. Message has 'to', 'from', 'sentAt', 'receivedAt' and other attributes. The responsibility of the notification-service is to:
send new messages using different registered message providers (SMS, email, Skype, etc.)
receive new messages from registered message providers
update the status of already sent messages.
The notification-service module is developed as a standalone module that is accessible over SOAP. Many clients can use this module to send messages or search through already received ones.
Clients want to attach some properties (something like tags) while sending messages, for later searching of messages by these properties. These properties make sense only in the client's environment.
For example, Client A might want to send a message and save the following custom properties:
1. Internal system id of the user to whom the system sends the message
2. Distinguishing flag (whether the id relates to users/admins or clients)
3. Notification flag (notification/alert/...)
Client B might want to send a message and save another set of custom properties:
1. Internal system operator id (who sends the SMS)
2. Template id that was used to send the message
Custom properties can be used by the clients to search already sent messages.
For example:
Client A could find SMS messages sent to administrator users in the period between [Date 1; Date 2] that have the 'alert' status.
Client B could find all notifications sent with a specified template.
Of course, the data should be fetched page by page.
At first I created the following database model:
[image: database schema]
To find all messages with the specified properties I tried this query:
SELECT * FROM (SELECT message_id FROM custom_message_properties
               WHERE CONCAT(CONCAT(key, ':'), value) IN ('property1:value1', 'property2:value2')
               GROUP BY message_id HAVING count(*) = 2)
AS cmp JOIN message m ON cmp.message_id = m.id ORDER BY id LIMIT 100 OFFSET 0
The query worked fine (although it doesn't seem very good to me) on a database with a small amount of data. I decided to check the results against realistic expected data volumes.
So I generated 10 000 000 messages with 40 000 000 custom properties and checked the result. Execution time was ~2 minutes. The most time-consuming operation was the following sub-select:
SELECT message_id FROM custom_message_properties
WHERE CONCAT(CONCAT(key, ':'), value) IN ('property1:value1', 'property2:value2')
I understand that the string comparison is very slow because no database index is used. I decided to change the database structure to merge the 'key' and 'value' columns into a single one. So I updated my database schema:
[image: updated database schema]
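Note that the merged column can only use an index if one exists; a sketch, assuming the merged column is named property:
CREATE INDEX ix_cmp_property ON custom_message_properties (property);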
I checked the result again. Now the execution time was ~20 seconds. That's much better, but still not suitable for production use.
So now I have no idea how to improve performance without significant changes to the application architecture.
The only thought I have is to create a separate table for each client with the required client properties:
client(i)_custom_properties {
    mid bigint,  // foreign key references message(id)
    p1  type1,
    p2  type2,
    ...
    pn  type(n)
}
I have spent a lot of time trying to find any useful information. I have also analyzed the 'stackoverflow' database, because it seemed to me that it should be quite similar. But in 'stackoverflow' there are ~50 000 different tags - not as many as my database could have.
Any help is appreciated. Thanks in advance!
The project environment I use:
Postgres database (9.6)
Java 1.8
Spring modules (spring-boot, spring-data-jpa + hibernate, spring-ws, etc).
I have not found any suitable solution except creating an additional table with the client's properties for each client.
I know that solution is not very flexible,
but now the search query time is less than 1 second.
In the future, I will try to solve the same problem using NoSQL data storage.

couchdb gen_server call timeout during purge

I'm running an analysis of the time it takes to run a CouchDB purge using a Java program. The CouchDB connections and calls are handled using Ektorp. For a small number of documents, purging takes place and I receive a success response.
But when I purge ~10 000 documents or more, I get the following error:
org.ektorp.DbAccessException: 500:Internal Server Error
URI: /dbname/_purge
Response Body:
{
"error" : "timeout",
"reason" : "{gen_server,call,
....
Checking the db status with a curl command shows that the actual purging has taken place. But this timeout does not allow me to measure the actual duration of the purge in my Java program, since it throws an exception.
From some research, I believe this is due to the default timeout value of an Erlang gen_server process. Is there any way for me to fix this?
I have tried changing the timeout values of the StdHttpClient, to no avail.
HttpClient authenticatedHttpClient = new StdHttpClient.Builder()
        .url(url)
        .username(Conf.COUCH_USERNAME)
        .password(Conf.COUCH_PASSWORD)
        .connectionTimeout(600 * 1000) // milliseconds
        .socketTimeout(600 * 1000)     // milliseconds
        .build();
CouchDB dev here. You are not supposed to use purge with large numbers of documents. It is meant to remove accidentally added data from the DB, like credit card or social security numbers. It isn't meant for general operations.
Consequently, you can’t raise that gen_server timeout :)

Increase heartbeat value in spring rabbit

I'm facing some problems with my setup and I'm trying to increase the heartbeat interval in order to test a possible fix.
I'm using
Spring boot 1.3.2.RELEASE
Spring rabbit 1.5.3.RELEASE
And the code instantiating the connection factory is below:
RabbitConnectionFactoryBean connectionFactoryBean = new RabbitConnectionFactoryBean();
connectionFactoryBean.setUseSSL(useSsl);
connectionFactoryBean.setHost(rabbitHostname);
connectionFactoryBean.setVirtualHost(rabbitVhost);
connectionFactoryBean.setUsername(rabbitUsername);
connectionFactoryBean.setPassword(rabbitPassword);
connectionFactoryBean.setConnectionTimeout(900000); // milliseconds
connectionFactoryBean.setRequestedHeartbeat(900);   // seconds
connectionFactoryBean.afterPropertiesSet();
CachingConnectionFactory cf = new CachingConnectionFactory(connectionFactoryBean.getObject());
cf.setChannelCacheSize(40);
return cf;
The problem is that the heartbeat interval is not changing. A quick look in AMQConnection reveals the following:
int heartbeat = negotiatedMaxValue(this.requestedHeartbeat,
        connTune.getHeartbeat());

private static int negotiatedMaxValue(int clientValue, int serverValue) {
    return (clientValue == 0 || serverValue == 0) ?
        Math.max(clientValue, serverValue) :
        Math.min(clientValue, serverValue);
}
The value coming from the server is 60. The negotiatedMaxValue method will not respect the client's preference (the heartbeat can neither be disabled nor increased). Am I missing something?
You are correct. The AMQConnection determines the heartbeat value with that method and then sends it to the server in the TuneOk method (https://www.rabbitmq.com/amqp-0-9-1-reference.html#connection.tune-ok). You can see it transmit the result of negotiatedMaxValue() a few lines down from the call to the method:
_channel0.transmit(new AMQP.Connection.TuneOk.Builder()
        .channelMax(channelMax)
        .frameMax(frameMax)
        .heartbeat(heartbeat)
        .build());
Based on the logic of the code, it seems you can only reduce the heartbeat: the maximum is whatever the server sends, and it can't be increased beyond that. The RabbitMQ documentation is a little vague on the specifics of increasing the heartbeat beyond what the server initially sends, but it does say the value can be overridden: https://www.rabbitmq.com/heartbeats.html
I checked the latest version of Spring Rabbit and it still has the same configuration, so it doesn't look like this is changing anytime soon.
Checking the RabbitMQ GitHub doesn't show any existing issues around setting the heartbeat value greater than the server's sent value. Maybe submit an issue there and see what the developers say? https://github.com/rabbitmq/rabbitmq-java-client/issues?utf8=%E2%9C%93&q=heartbeat
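As a practical workaround, since negotiation can only pull the client's value down toward the server's, raising the server-side default raises the ceiling. A sketch of the server config, assuming you control the broker (900 is an example value; RabbitMQ 3.7+ uses rabbitmq.conf, older releases the classic rabbitmq.config syntax):
# rabbitmq.conf (3.7+)
heartbeat = 900
%% rabbitmq.config (older releases)
[{rabbit, [{heartbeat, 900}]}].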
