I am reaching out to see if anyone can help with my issue.
Let me explain the context:
We have a backend written in Kotlin/Vert.x and a frontend in Angular 8.
We upload files to and download files from Azure Data Lake Storage in the Azure cloud.
How do we do that? An explorer is implemented on the frontend, and when we want to download a file, a request is sent to the backend; once it finishes, the backend returns a success/error response to the frontend.
Everything works well in the local environment, for both upload and download.
One particularity: on the download side we can download ONE or SEVERAL files, and when there are 2 or more files we compress them on the backend before returning the response to the frontend, which then downloads the 7-Zip archive.
And as I said above, everything works well in our local environment.
But I work at a big company, and there are security measures in place, such as a 4-minute session timeout.
When we download a single file with a big size in the company environment, it works even if it takes more than 4 minutes, because something keeps happening during the download (packets keep arriving).
But when we have to compress big files, the backend spends more than 4 minutes zipping them, and during that silence the session gets cut (by some firewall or similar).
I tried adding a keep-alive on the frontend and on the backend; it didn't work.
I also tried every HTTP trick I found on the web, but nothing changed.
Do you know how I can keep this session open with some feature of Vert.x?
Everything I have tried from the Vert.x documentation or Stack Overflow has failed.
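
Since the session survives as long as packets keep flowing, one direction worth sketching is to stream the archive while it is being compressed, instead of building the whole file first and only then responding. Below is a rough Vert.x sketch of that idea in Java (illustrative only, not the actual code from the question; backpressure handling via writeQueueFull/drainHandler is omitted for brevity):

import io.vertx.core.Vertx;
import io.vertx.core.buffer.Buffer;
import io.vertx.core.http.HttpServerResponse;

import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class StreamingZipDownload {

  // Streams a zip of the given files while it is being built, so bytes keep
  // flowing to the client and no idle timeout is hit during compression.
  static void sendZip(Vertx vertx, HttpServerResponse response, List<Path> files) {
    response.setChunked(true); // total size is unknown up front
    response.putHeader("Content-Type", "application/zip");
    response.putHeader("Content-Disposition", "attachment; filename=\"files.zip\"");

    // Compression is blocking work, so keep it off the event loop.
    vertx.executeBlocking(promise -> {
      try (ZipOutputStream zip = new ZipOutputStream(new OutputStream() {
        @Override public void write(int b) { write(new byte[]{(byte) b}, 0, 1); }
        @Override public void write(byte[] b, int off, int len) {
          // Each compressed chunk goes out immediately as an HTTP chunk.
          response.write(Buffer.buffer().appendBytes(b, off, len));
        }
      })) {
        for (Path file : files) {
          zip.putNextEntry(new ZipEntry(file.getFileName().toString()));
          Files.copy(file, zip);
          zip.closeEntry();
        }
      } catch (IOException e) {
        promise.fail(e);
        return;
      }
      promise.complete(); // zip is fully written and closed at this point
    }, ar -> response.end());
  }
}

With chunked transfer the total size is unknown, so the browser cannot show a download percentage, but compressed bytes reach the client continuously and there is no multi-minute silence for a firewall to kill.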
Thanks a lot for reading,
Regards
Related
There is an Angular frontend application; the backend is a Java/Spring application.
Both run on a server on a shared network: a Windows OS machine hosting a Linux guest in VirtualBox.
The essence of the problem is that when you try to open the web application in a browser, it works completely on only one computer out of the five running Windows OS, and on the single one running Linux OS.
The browser is Chrome everywhere; only on Linux is it Mozilla Firefox.
The application itself launches, but it does not receive data from the backend at startup.
At the same time I get this error:
Failed to load resource:
http://10.151.78.6:5003/es-serv/api/v1/get-data/sh1 net::ERR_CONNECTION_RESET
Here is the controller method that should receive these requests; there is no record of it being called in the logs:
@GetMapping("/get-data/" + RestApiConstants.VARIABLE_NAME)
public ResponseEntity<ListResponse<DataDto>> getData(
        @PathVariable(RestApiConstants.PARAM_NAME_WORD) String name) {
    log.info("getData -> start");
    return converterDtoService.converterDataDto(name);
}
Moreover, if I just open the link in the browser's address bar,
http://10.151.78.6:5003/es-serv/api/v1/get-data/sh1
then I get the data every time; I have never noticed a single failure.
It looks very similar to a CORS problem, but then the browser would give a specific CORS error on every request. Besides, CORS is disabled in the Java application, and even then it is unclear why it would still work in some browsers.
It doesn't look like a timeout problem either, because I get the error instantly; when a server fails to respond, a noticeable amount of time passes first.
One more detail: I added a button to the application that forces a data reload, and after 20-30 read attempts the response may eventually arrive on the computers that did not receive it at startup.
If it were a network problem, it would be unclear why, on the same computer and in the same browser, the GET request typed into the address bar gets a response every time, without a single failure.
Can you tell me where to look to find the cause?
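
One way to narrow something like this down is to take the browser out of the picture and replay the same GET from a standalone client on one of the affected machines: if the reset still happens there, the problem is in the network path; if it never happens, it is browser-side. A small Java sketch using the JDK's built-in HTTP client (Java 11+), with the URL taken from the question:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ReplayGet {
  public static void main(String[] args) throws Exception {
    HttpClient client = HttpClient.newHttpClient();
    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("http://10.151.78.6:5003/es-serv/api/v1/get-data/sh1"))
        .GET()
        .build();
    // Repeat the request to see whether connection-reset-style failures
    // (IOExceptions here) occur outside the browser as well.
    for (int i = 0; i < 30; i++) {
      try {
        HttpResponse<String> response =
            client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(i + ": HTTP " + response.statusCode());
      } catch (Exception e) {
        System.out.println(i + ": failed - " + e);
      }
    }
  }
}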
The problem was solved by updating the browsers to the latest version, even though the previous browser versions were not very old. I don't understand how this could affect the transmission of a GET request over the network.
When I post a form with an image taken from my phone, I receive the error "413 Request Entity Too Large". I realize that an image taken by the phone camera and included in the form is too large, and the server rejects the request. But how can I fix this issue? I'm using the Java Spring framework and a MySQL database, all of it hosted on Amazon AWS services.
You have to modify your .platform directory as shown in the docs.
For example, you could have the file .platform/nginx/conf.d/myconfig.conf with the content:
client_max_body_size 20M;
I struggled with this for so long until I came across this post:
https://medium.com/@robin.srimal/how-to-fix-a-413-request-entity-too-large-error-on-aws-elastic-beanstalk-ac2bb15f244d
A couple of things to watch out for here: if your server is restarted or you deploy a new version, your nginx configuration will also be reset and you will need to apply these changes again. AWS also seems to reset your EC2 instance address periodically, not sure why, and you need to reapply the changes afterwards. There must be a way of making these changes permanent, but I haven't figured that part out yet.
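
One more layer worth checking in a setup like this (not covered in the post above): Spring has its own multipart size limits, and a request that clears nginx can still be rejected by the application. A sketch of raising those limits in Java config, assuming Spring Boot (the class name is illustrative; 20M simply mirrors the nginx setting above):

import javax.servlet.MultipartConfigElement;

import org.springframework.boot.web.servlet.MultipartConfigFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.util.unit.DataSize;

@Configuration
public class UploadLimitConfig {

  // Raises Spring's multipart limits to match the nginx client_max_body_size,
  // so the 413 is not simply reproduced one layer further in.
  @Bean
  public MultipartConfigElement multipartConfigElement() {
    MultipartConfigFactory factory = new MultipartConfigFactory();
    factory.setMaxFileSize(DataSize.ofMegabytes(20));
    factory.setMaxRequestSize(DataSize.ofMegabytes(20));
    return factory.createMultipartConfig();
  }
}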
I found the solution, thanks for helping.
1. I connected to my EC2 instance through the "Connect" button, and a terminal appeared.
2. I edited this file: /etc/nginx/nginx.conf
3. I added this line: client_max_body_size 10M;
And it works fine.
Thank you all!
The web application is developed with Java/JSP. One thing I want to do is download a directory from FTP to a user's client PC. There are at least 700 MB of files to be transferred in the directory, and the FTP download speed is sometimes very slow. When I was using an ActiveX control to do the downloading, the web page just got stuck for a very long time, and when the download was done, the session had timed out. How can I solve the download and the timeout problems?
You can take a reference from here:
http://www.codejava.net/java-se/networking/ftp/how-to-download-a-complete-folder-from-a-ftp-server
It worked for me.
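
For reference, the approach in that article boils down to walking the remote directory tree with Apache commons-net and retrieving each file in turn; a condensed sketch (names are illustrative, and the client is assumed to be already connected and logged in):

import org.apache.commons.net.ftp.FTPClient;
import org.apache.commons.net.ftp.FTPFile;

import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class FtpFolderDownload {

  // Recursively mirrors remoteDir into localDir using an already
  // connected and logged-in FTPClient (binary mode recommended).
  static void downloadDirectory(FTPClient ftp, String remoteDir, Path localDir)
      throws IOException {
    Files.createDirectories(localDir);
    for (FTPFile entry : ftp.listFiles(remoteDir)) {
      String remotePath = remoteDir + "/" + entry.getName();
      Path localPath = localDir.resolve(entry.getName());
      if (entry.isDirectory()) {
        downloadDirectory(ftp, remotePath, localPath);
      } else {
        try (OutputStream out = new FileOutputStream(localPath.toFile())) {
          ftp.retrieveFile(remotePath, out);
        }
      }
    }
  }
}

Running a transfer of this size inside the request thread is what causes the session timeout, so it is usually better to kick the download off in the background and let the page poll for progress rather than block until it finishes.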
A really strange issue has been occurring lately with two legacy Struts applications running on separate RedHat 5/Tomcat 6 servers. Some brief details:
App 1 is the front-facing application
App 2 is an ancillary application which serves as a file repository system
App 1 has an upload form which forwards to App 2
App 2 expects multipart/form-data to be part of the Content-Type when an upload occurs
Uploading will work fine for a while, but will all of a sudden fail. When I look in the logs, App 2 reports that the Content-Type is missing and, as such, it cannot process the upload request. Furthermore, once the header goes missing, it doesn't reappear; all attempts to upload fail from that point forward. What's even more odd is that the only way to remedy the issue is to restart the Tomcat instance hosting App 1, not App 2.
Other Oddities
The code that implements the upload feature has not changed in over a year
Using Wireshark (tshark) to sniff TCP packets:
The Content-Type is properly populated on the HTTP request being sent from App 1
Although Wireshark reports a malformed packet, the Content-Type is present on the HTTP request received by App 2
Any ideas why this could be happening?
I would suspect there is some sort of state change on App 1 which is causing it to no longer send the Content-Type header in requests to App 2. Without seeing the code, there is little more that anyone can tell you.
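
If sharing the code is not an option, one way to catch the state change in the act is to log the header on every request App 2 receives and correlate the first failure with what App 1 was doing at that moment (redeploy, connection pool recycling, and so on). A generic servlet filter sketch for the Tomcat 6 / javax.servlet era:

import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

// Logs the Content-Type of every incoming request so the moment it
// "goes missing" can be pinned down in App 2's logs.
public class ContentTypeLoggingFilter implements Filter {

  public void init(FilterConfig config) {
  }

  public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
      throws IOException, ServletException {
    HttpServletRequest http = (HttpServletRequest) req;
    System.out.println(http.getRequestURI()
        + " Content-Type=" + http.getHeader("Content-Type"));
    chain.doFilter(req, res);
  }

  public void destroy() {
  }
}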
All,
I have an issue with a remote FTP server that has kept me busy for three days now, and I am going nuts over it. :(
A while ago, I wrote a simple FTP retriever class that uses Apache commons-net 2.0. The class works fine against 5 different FTP servers; I can retrieve data as I want.
Now I have come across a server that I need to connect to that just won't let me list directories or retrieve data.
This is the sequence of commands sent and replies received by my class:
220 (vsFTPd 2.0.1)
USER XXXXXXX
331 Please specify the password.
PASS XXXXXXX
230 Login successful
TYPE I
200 Switching to Binary mode.
PASV
227 Entering Passive Mode (XXX,XXX,XXX,XXX,XXX,XXX)
NLST
150 Here comes the directory listing.
226 Directory send OK.
SYST
215 UNIX Type: L8
PASV
227 Entering Passive Mode (XXX,XXX,XXX,XXX,XXX,XXX)
LIST
150 Here comes the directory listing.
At the last line, my code hangs indefinitely (well, I killed it after 2 hours of waiting to see how long it would block). I have tried everything, from using an active connection to setting the ASCII transfer type to using different FTP libraries, always with the same result.
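
For context, a retriever that produces exactly that command trace boils down to the standard commons-net calls below (a reconstruction from the trace, not the original class):

import java.io.IOException;

import org.apache.commons.net.ftp.FTP;
import org.apache.commons.net.ftp.FTPClient;

public class FtpRetriever {

  // Mirrors the logged session: connect, log in, binary, passive, then list.
  static String[] listRemote(String host, String user, String pass) throws IOException {
    FTPClient ftp = new FTPClient();
    ftp.connect(host);                      // 220 (vsFTPd 2.0.1)
    ftp.login(user, pass);                  // USER / PASS -> 230
    ftp.setFileType(FTP.BINARY_FILE_TYPE);  // TYPE I -> 200
    ftp.enterLocalPassiveMode();            // PASV before each data transfer
    String[] names = ftp.listNames();       // NLST -> 150 / 226
    ftp.listFiles();                        // SYST, then PASV + LIST: hangs here
    ftp.logout();
    ftp.disconnect();
    return names;
  }
}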
Normally, I would just call the guys and tell them that their server is configured incorrectly. However, connecting via FileZilla not only works but is lightning fast and never causes any problems. Connecting via the command line on Linux also works like a charm.
I am totally lost here. Does anybody have any ideas why I have this problem?
Cheers
I cannot believe that I spent almost five days on this. After long sessions of rolling back changes, committing intermediate versions, debugging, and about 15923 cups of coffee, I finally found the reason for all this mess.
It turns out that, for whatever reason, as soon as you package the xpp3 drivers (as used by XStream) in your EAR and deploy it on JBoss 5.1, every connection made through any FTP library gets messed up.
I have no idea whether this is caused by other libraries interfering with xpp3 or by xpp3 itself. Frankly, at the moment I could not care less. All I know is that as soon as I removed that dependency from my project, everything worked like a charm.
Damn you, xpp3 - I will sue you for the ten years of my life you cost me! :)
Thanks all for your help, I am going home now...
A suggestion: install Wireshark on the client machine and capture network traces under both working (FileZilla) and non-working conditions to see what's different. If you're on Linux, use the tcpdump command to capture the packets and then use Wireshark to examine them.