When creating a channel like this
ManagedChannelBuilder.forAddress("mybackend", 6565)
and mybackend is a DNS name with multiple A records, does gRPC round-robin between those addresses, or does it just stick to one for the lifetime of the channel?
If not, would it work if I did this instead?
ManagedChannelBuilder.forTarget("dns:///mybackend:6565")
Or is this capability just not available?
NettyChannelBuilder.forAddress(SocketAddress) is the only API limited to a single IP, because InetSocketAddress resolves eagerly.
forAddress(String, int) was retrofitted to use forTarget(String) internally. So it is a convenience that converts to something similar to forTarget(host + ":" + port), but with some extra logic to handle IPv6 literals. The "dns:///" prefix is added to target strings when they fail to parse. See the forTarget() docs for both of those details. So it is essentially equivalent to using forTarget("dns:///mybackend:6565").
gRPC, by default, doesn't round-robin over multiple addresses. By default it uses "pick-first", which stops at the first working address (potentially choosing a different address when reconnecting). You can change that via a service config or defaultLoadBalancingPolicy("round_robin").
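For example, a minimal sketch (usePlaintext() here is just an assumption for illustration; keep whatever transport security you already use):
ManagedChannel channel =
    ManagedChannelBuilder.forTarget("dns:///mybackend:6565")
        .defaultLoadBalancingPolicy("round_robin") // spread RPCs across all resolved addresses
        .usePlaintext()
        .build();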
I am working with a commercial application which is throwing a SocketException with the message,
An existing connection was forcibly closed by the remote host
This happens with a socket connection between client and server. The connection is alive and well, and heaps of data is being transferred, but it then becomes disconnected out of nowhere.
Has anybody seen this before? What could the causes be? I can guess at a few causes, but is there any way to add something to this code to work out what the cause could be?
Any comments / ideas are welcome.
... The latest ...
I have some logging from some .NET tracing,
System.Net.Sockets Verbose: 0 : [8188] Socket#30180123::Send() DateTime=2010-04-07T20:49:48.6317500Z
System.Net.Sockets Error: 0 : [8188] Exception in the Socket#30180123::Send - An existing connection was forcibly closed by the remote host DateTime=2010-04-07T20:49:48.6317500Z
System.Net.Sockets Verbose: 0 : [8188] Exiting Socket#30180123::Send() -> 0#0
Based on other parts of the logging, I have seen that 0#0 means a packet of zero bytes length is being sent. But what does that really mean?
One of two possibilities is occurring, and I am not sure which:
The connection is being closed, but data is then being written to the socket, thus creating the exception above. The 0#0 simply means that nothing was sent because the socket was already closed.
The connection is still open, and a packet of zero bytes is being sent (i.e. the code has a bug) and the 0#0 means that a packet of zero bytes is trying to be sent.
What do you reckon? It might be inconclusive I guess, but perhaps someone else has seen this kind of thing?
This generally means that the remote side closed the connection (usually by sending a TCP/IP RST packet). If you're working with a third-party application, the likely causes are:
You are sending malformed data to the application (which could include sending an HTTPS request to an HTTP server)
The network link between the client and server is going down for some reason
You have triggered a bug in the third-party application that caused it to crash
The third-party application has exhausted system resources
It's likely that the first case is what's happening.
You can fire up Wireshark to see exactly what is happening on the wire to narrow down the problem.
Without more specific information, it's unlikely that anyone here can really help you much.
Using TLS 1.2 solved this error.
You can force your application to use TLS 1.2 with this (make sure to execute it before calling your service):
ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12
Another solution:
Enable strong cryptography on your local machine or server in order to use TLS 1.2, because by default it is disabled, so only TLS 1.0 is used.
To enable strong cryptography, execute these commands in PowerShell with admin privileges:
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Wow6432Node\Microsoft\.NetFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value '1' -Type DWord
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\.NetFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value '1' -Type DWord
You need to reboot your computer for these changes to take effect.
This is not a bug in your code. It is coming from .NET's Socket implementation. If you use the overload of EndReceive shown below, you will not get this exception.
SocketError errorCode;
// This overload reports socket errors through errorCode instead of throwing.
int nBytesRec = socket.EndReceive(ar, out errorCode);
if (errorCode != SocketError.Success)
{
nBytesRec = 0;
}
Had the same bug. It actually only worked when the traffic was sent through a proxy (Fiddler in my case). I updated the .NET Framework from 4.5.2 to >= 4.6 and now everything works fine. The actual request was:
new WebClient().DownloadData("URL");
The exception was:
SocketException: An existing connection was forcibly closed by the
remote host
Simple solution for this common annoying issue:
Just go to your ".context.cs" file (located under ".context.tt", which is located under your "*.edmx" file).
Then, add this line to your constructor:
public DBEntities()
: base("name=DBEntities")
{
this.Configuration.ProxyCreationEnabled = false; // ADD THIS LINE!
}
I got this exception because of a circular reference in an entity. The entity looks like this:
public class Catalog
{
public int Id { get; set; }
public int ParentId { get; set; }
public Catalog Parent { get; set; }
public ICollection<Catalog> ChildCatalogs { get; set; }
}
I added [IgnoreDataMemberAttribute] to the Parent property. And that solved the problem.
If Running In A .Net 4.5.2 Service
For me the issue was compounded because the call was running in a .NET 4.5.2 service. I followed willmaz's suggestion but got a new error.
Running the service with logging turned on, I saw that the handshake with the target site would initiate OK (and send the bearer token), but on the following step to process the POST call, it seemed to drop the auth token and the site replied with Unauthorized.
Solution
It turned out that the service pool credentials did not have rights to change TLS (?), and when I put my local admin account into the pool, it all worked.
I had the same issue and managed to resolve it eventually. In my case, the port that the client sends the request to did not have an SSL cert bound to it. So I fixed the issue by binding an SSL cert to the port on the server side. Once that was done, the exception went away.
For anyone getting this exception while reading data from the stream, this may help. I was getting this exception when reading the HttpResponseMessage in a loop like this:
using (var remoteStream = await response.Content.ReadAsStreamAsync())
using (var content = File.Create(DownloadPath))
{
var buffer = new byte[1024];
int read;
while ((read = await remoteStream.ReadAsync(buffer, 0, buffer.Length)) != 0)
{
await content.WriteAsync(buffer, 0, read);
await content.FlushAsync();
}
}
After some time I found out the culprit was the buffer size, which was too small and didn't play well with my weak Azure instance. What helped was to change the code to:
using (Stream remoteStream = await response.Content.ReadAsStreamAsync())
using (FileStream content = File.Create(DownloadPath))
{
await remoteStream.CopyToAsync(content);
}
The CopyToAsync() method has a default buffer size of 81920 bytes. The bigger buffer sped up the process and the errors stopped immediately, most likely because the overall download speed increased. But why would download speed matter in preventing this error?
It is possible that you get disconnected from the server because the download speed drops below the minimum threshold the server is configured to allow. For example, if the application you are downloading the file from is hosted on IIS, it can be a problem with the http.sys configuration:
"Http.sys is the http protocol stack that IIS uses to perform http communication with clients. It has a timer called MinBytesPerSecond that is responsible for killing a connection if its transfer rate drops below some kb/sec threshold. By default, that threshold is set to 240 kb/sec."
The issue is described in this old blog post from the TFS development team and concerns IIS specifically, but it may point you in the right direction. It also mentions an old bug related to this http.sys attribute: link
In case you are using Azure app services and increasing the buffer size does not eliminate the problem, try to scale up your machine as well. You will be allocated more resources including connection bandwidth.
I got the same issue while using .NET Framework 4.5. However, when I updated the .NET version to 4.7.2, the connection issue was resolved. Maybe this is due to a SecurityProtocol support issue.
For me, it was because the app server I was trying to send email from was not added to our company's SMTP server's allowed list.
I just had to put in SMTP access request for that app server.
This is how it was added by the infrastructure team (I don't know how to do these steps myself but this is what they said they did):
1. Log into active L.B.
2. Select: Local Traffic > iRules > Data Group List
3. Select the appropriate Data Group
4. Enter the app server's IP address
5. Select: Add
6. Select: Update
7. Sync config changes
Yet another possibility for this error to occur is if you tried to connect to a third-party server with invalid credentials too many times and a system like Fail2ban is blocking your IP address.
I was trying to connect to the MQTT broker using the Go client.
The broker address was given as address + port, or tcp://address:port.
Example: ❌
mqtt://test.mosquitto.org
which indicates that you wish to establish an unencrypted connection.
To request MQTT over TLS use one of ssl, tls, mqtts, mqtt+ssl or tcps.
Example: ✅
mqtts://test.mosquitto.org
In my case: enable the IIS server, then restart and check again.
We are using a SpringBoot service. Our restTemplate code looks like below:
@Bean
public RestTemplate restTemplate(final RestTemplateBuilder builder) {
    return builder.requestFactory(() -> {
        final ConnectionPool okHttpConnectionPool =
            new ConnectionPool(50, 30, TimeUnit.SECONDS);
        final OkHttpClient okHttpClient =
            new OkHttpClient.Builder()
                .connectionPool(okHttpConnectionPool)
                // .connectTimeout(30, TimeUnit.SECONDS)
                .retryOnConnectionFailure(false)
                .build();
        return new OkHttp3ClientHttpRequestFactory(okHttpClient);
    }).build();
}
All our calls were failing after the ReadTimeout set for the restTemplate. We increased the timeout, and our issue was resolved.
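For illustration, a sketch of where such a read timeout can be raised on the OkHttp client that backs the request factory (the 60-second value is only an example):
final OkHttpClient okHttpClient =
    new OkHttpClient.Builder()
        .connectionPool(okHttpConnectionPool)
        .readTimeout(60, TimeUnit.SECONDS) // example value; size it to your slowest expected call
        .retryOnConnectionFailure(false)
        .build();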
This error occurred in my application with the CIP protocol whenever I didn't send or receive data within 10 seconds.
This was caused by the use of the forward-open method. You can avoid it by working with another method, or by using an update rate of less than 10 seconds to keep your forward-open connection alive.
Our application has a server side and a client side. The client supports both offline and online work modes.
So I need to test the client when the server goes down and when connectivity is regained.
Here is the question: how do I simulate the server going down? Ideally, I would use code to switch it from down to ready, or from ready to down.
Thanks in advance.
Joseph
update:
Actually, I could not extend the server interface to respond with an incorrect status. In my test scenario, the server is transparent, so an incorrect URL + port is one way to do this.
But I cannot modify the URL while the session is valid. Another method is to modify the hosts file, but then I have to deal with the privilege issue on Windows.
Depends on what you mean by "server down". Possible options are:
Write a fake/dummy server that can return error messages corresponding to being down for test purposes.
Change the IP address of the server that your client looks for to a non-existing one so that it will think that the server is entirely down.
The basic idea is to mock the behavior of your server somehow. You could use mocking frameworks to do so.
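For instance, a quick sketch with Mockito, assuming the client talks to the server through the IServer interface shown below; the mock throws to simulate the server being down:
IServer server = org.mockito.Mockito.mock(IServer.class);
org.mockito.Mockito.when(server.foo())
    .thenThrow(new RuntimeException("Simulated server outage"));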
You could also create manual mocks for testing purposes. Let the "proxy" of the server on the client implement this interface:
public interface IServer
{
    boolean foo();
}
You could create a "fake" implementation of that server and return whatever you'd like
public class FakeOfflineServer implements IServer
{
    public boolean foo()
    {
        // Throw some exception here to simulate the server being down.
        throw new RuntimeException("Server is down");
    }
}
This approach allows you to fake different scenarios (no network connectivity, invalid credentials, etc.)
You could also use composition to switch from up to down in your tests:
public class FakeServer implements IServer
{
    private IServer offline = new FakeOfflineServer();
    private IServer online = new Server();

    public boolean isUp = false;

    private IServer getServer()
    {
        return isUp ? online : offline;
    }

    public boolean foo()
    {
        return getServer().foo();
    }
}
While testing the server being down, give any incorrect URL or port (port is preferred). For recovery, give the correct URL/port.
This depends where you are testing. If you're unit testing, the best option is the mocking suggested by Bryan Menard.
If you're testing in an integration or production environment, you can actually cut the connection between you and the server.
Depending upon your operating system, you can do this in a number of ways.
For Windows-based systems, Fiddler is fantastic. You can simulate almost anything, including delays on requests and simply throwing requests away. It does not require admin access on Windows.
For Linux-based systems, one technique I've used in the past is to use a proxy server, or to cut the port at the operating-system level. You can do this using iptables, for instance.
To deny access to a particular port (25 in this case):
/sbin/iptables -I OUTPUT -p tcp --dest 127.0.0.1 --dport 25 -j DROP
and to allow it again:
/sbin/iptables --delete OUTPUT 1
You'll need root access for this to work, but it has the advantage that you don't need to touch your server or client configuration.
To emulate the server down case, you could write a ServerAlwaysDown class extending your actual server, but throwing ServerException (HTTP 500) for every connection.
If you want to be thorough, always use the closest thing you have to a production environment for the tests: put the client and servers on different machines and cut the connection, then restore it.
String[] orbargs= {};
org.omg.CORBA.ORB orb = org.omg.CORBA.ORB.init(orbargs, null);
org.omg.CORBA.Object cobj = orb.string_to_object("corbaloc:iiop:10.1.1.200:6969/OurServiceHelper");
_OurServiceHelper cpsh = _OurServiceHelperHelper.narrow(cobj); // Gets stuck
cpsh.ourMethod();
That narrow just hangs.
My service is set up to run on a static port. And we know it works, since we usually look it up through the NamingService.
What am I doing wrong?
If you're using the NamingService, you should actually be using a corbaname url instead of a corbaloc url. The below will work if your naming service is on port 6969. If "OurServiceHelper" is on 6969 but the NamingService is on a different port, you need to specify the port of the naming service in the url below instead of 6969. The port of the server object is embedded in the ior returned by the NamingService so that's why it doesn't need to be specified.
"corbaname:10.1.1.200:6969#OurServiceHelper"
Re: Comment:
First, a note about IORs and serving up objects. If you want your served objects to be persistent across process restarts, you have to set the PERSISTENT lifetime policy on the POA that contains the objects. Also, the IOR embeds the IP and port of the server, so if you want to generate IORs that remain consistent across restarts, you have to use a static IP and port number as well as the persistent lifetime policy.
The name service makes things easier by allowing you not to have to worry about a lot of this stuff. As long as the name service is reachable at a known location, all of your server objects can just register themselves with the name service when they are instantiated and clients can then access them without having to know where they're located.
If you're determined not to use the name service, your code will have to change somewhat. If you use the corbaloc URL then you are using the Interoperable Naming Service (INS). See: http://java.sun.com/j2se/1.4.2/docs/guide/idl/INStutorial.html. Using the INS, you need the functionality of the NamingContextExt object. Specifically, to resolve the name you should call the NamingContextExt::resolve_str function and pass in the stringified name.
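For illustration, a rough sketch of the name-service route in Java, assuming the ORB's initial "NameService" reference is configured (for example via -ORBInitialHost/-ORBInitialPort) and the server registered itself under the name "OurServiceHelper":
org.omg.CORBA.ORB orb = org.omg.CORBA.ORB.init(new String[] {}, null);
org.omg.CosNaming.NamingContextExt nc =
    org.omg.CosNaming.NamingContextExtHelper.narrow(
        orb.resolve_initial_references("NameService"));
org.omg.CORBA.Object cobj = nc.resolve_str("OurServiceHelper");
_OurServiceHelper cpsh = _OurServiceHelperHelper.narrow(cobj);
cpsh.ourMethod();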
The key part of the corbaloc URL (the string after the slash) could possibly be incorrect or not registered correctly, so the server-side ORB isn't able to map the key to the object reference.
How are you running the server?
This should work:
<server> -ORBInitRef OurServiceHelper="file://server.ior"
So when the corbaloc request comes in, the ORB should be able to match the key to the IOR and return you the IOR. Different ORBs have different ways of registering an initial reference; TAO has a proprietary interface called IORTable, for example.
The corbaloc has no type info in it, so the ORB is checking the type you're narrowing to by making a remote call (_is_a). Try using an unchecked narrow, which won't call _is_a:
_OurServiceHelper cpsh = _OurServiceHelperHelper.unchecked_narrow(cobj);
It's odd that the _is_a call doesn't return for you. My guess is that the unchecked_narrow will work (you'll get a non-null result), but the object reference won't actually work.
As of now, I'm using the code below to get the DNS name of a given IP address. Instead of fetching it for each IP address in the network, I want to fetch all the DNS entries (the IP address to host name mapping) from the DNS server in one go. Is it possible? If so, how do I do it?
InetAddress addr = InetAddress.getByName(address);
dnsname = addr.getCanonicalHostName().trim();
From a public DNS server, there is no way to pull out all the data it holds. Enumerating all the IP addresses one by one is the only solution.
If you have a special relationship with the DNS server (for instance, it is managed by your employer), you may request from the DNS administrator the right to transfer the whole zone (the DNS request known as AXFR). They may authorize your IP address or give you a TSIG key to authenticate yourself.
Then, you will have to find a way to do a zone transfer (possibly with TSIG authentication) in Java. Using these keywords, I found some code and documentation. Use a code search engine like Google Code Search or Krugle to find examples of use.
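For illustration, a minimal sketch of an AXFR zone transfer using the dnsjava library (the zone name and server address are placeholders; pass a TSIG object instead of null if the administrator gave you a key):
import java.util.List;
import org.xbill.DNS.Name;
import org.xbill.DNS.ZoneTransferIn;

public class DumpZone {
    public static void main(String[] args) throws Exception {
        // Placeholders: your zone and the DNS server that authorized you for AXFR.
        ZoneTransferIn xfr = ZoneTransferIn.newAXFR(
                new Name("example.com."), "192.0.2.53", null /* TSIG key */);
        List<?> records = xfr.run(); // every record in the zone
        for (Object record : records) {
            System.out.println(record); // prints name, TTL, type, and data
        }
    }
}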
[DNS experts will probably scream "Use zone walking on NSEC" but most DNS zones are not signed with NSEC.]