Catch the download link automatically like IDM? - java

Here is my code for downloading a file. I am passing a URL, but I want to make my download manager catch the link automatically:
private void button1_Click_1(object sender, EventArgs e)
{
    WebClient client = new WebClient();
    client.DownloadProgressChanged += new DownloadProgressChangedEventHandler(ProgressChanged);
    client.DownloadFileCompleted += new AsyncCompletedEventHandler(client_DownloadFileCompleted);

    // Take the URL from the text box and use its last segment as the output file name.
    string fn = textBox1.Text;
    string[] filename = fn.Split('/');
    string oname = filename[filename.Length - 1];

    //////// Starts the download (the URL is hard-coded here; new Uri(fn) would use the text box value instead).
    client.DownloadFileAsync(new Uri("http://dinirah.wen.ru/Internet-k-zariye-musalmano-gumrah-kia-ja-raha.mp3"), textBox2.Text + "\\" + oname);

    label4.Text = oname; // show the file name that will be saved
    button1.Text = "Download In Process";
    button1.Enabled = false;
}
Now I want my downloader to automatically get the download URL and store it in a string, which it can then pass as the URL later.

You would need to develop browser extension(s); then you would be able to intercept the file download process.
The other way is to monitor your computer's network connections and intercept the request when a desired file type is being requested. I have found that IDM intercepts my downloads even when I am downloading with my own programs (if intercepting any download is enabled in IDM). Be careful with Windows Update files, diagnostic files and other unknown files.

Related

Rsync with AWS lambda(java) between Bucket and Remote Server

I want to perform a one-way rsync between an AWS S3 bucket and a remote FTP server (accepts FTPS) with a Java Lambda function. So if a file in the bucket is deleted, the Lambda cron should remove it from the remote FTP server.
I read that the AWS CLI offers the command s3 sync. Could this be an option?
best regards
Jannik
This would be pretty straightforward. The Lambda would be set up to be triggered on an S3 delete event. The basic code (untested) would be something like:
import java.io.IOException;

import org.apache.commons.net.ftp.FTPClient;
import org.apache.commons.net.ftp.FTPReply;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import com.amazonaws.services.s3.event.S3EventNotification.S3EventNotificationRecord;

public class Handler implements RequestHandler<S3Event, String> {

    // FTP connection details; fill these in for your environment.
    private static final String server = "ftp.example.com";
    private static final int port = 21;
    private static final String user = "user";
    private static final String pass = "password";

    public String handleRequest(S3Event s3event, Context context) {
        try {
            S3EventNotificationRecord record = s3event.getRecords().get(0);
            // Object key may have spaces or unicode non-ASCII characters.
            String srcKey = record.getS3().getObject().getUrlDecodedKey();

            // Now use Apache Commons Net (https://commons.apache.org/proper/commons-net/)
            // to delete the file on the FTP server.
            FTPClient ftpClient = new FTPClient();
            ftpClient.connect(server, port);
            int replyCode = ftpClient.getReplyCode();
            if (!FTPReply.isPositiveCompletion(replyCode)) {
                context.getLogger().log("FTP connect failed");
                return "FTP connect failed";
            }

            boolean success = ftpClient.login(user, pass);
            if (!success) {
                context.getLogger().log("Could not login to the FTP server");
                return "Could not login to the FTP server";
            }

            String fileToDelete = "/some/ftp/directory/" + srcKey;
            boolean deleted = ftpClient.deleteFile(fileToDelete);
            if (deleted) {
                context.getLogger().log("The file was deleted successfully.");
            } else {
                context.getLogger().log("Could not delete the file, it may not exist.");
            }

            ftpClient.logout();
            ftpClient.disconnect();
            return deleted ? "deleted" : "not deleted";
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
On the S3 side, you will need to configure your bucket to send delete events to your Lambda. This can be done in the AWS console by selecting the bucket, opening the Events section, adding a notification, selecting "Permanently deleted" (or "All object delete events") and choosing your Lambda.
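If you prefer to set this up from code rather than the console, a rough sketch using the AWS SDK for Java v1 might look like the following. The bucket name, configuration name and Lambda ARN are placeholders, and the sketch assumes the Lambda already has a resource-based policy that allows S3 to invoke it:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.BucketNotificationConfiguration;
import com.amazonaws.services.s3.model.LambdaConfiguration;

public class ConfigureDeleteNotification {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // Placeholder values; replace with your bucket and the ARN of the Lambda above.
        String bucket = "my-bucket";
        String lambdaArn = "arn:aws:lambda:eu-west-1:123456789012:function:s3-delete-to-ftp";

        // Fire the Lambda for every object-removed event (equivalent to
        // "All object delete events" in the console).
        BucketNotificationConfiguration notification = new BucketNotificationConfiguration();
        notification.addConfiguration("delete-to-ftp",
                new LambdaConfiguration(lambdaArn, "s3:ObjectRemoved:*"));
        s3.setBucketNotificationConfiguration(bucket, notification);
    }
}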

Selenium: download file in Internet Explorer to specified folder without direct link, without Windows Forms, without AutoIt or Robot

I've often faced the issue of how to download files in IE.
In contrast to Chrome or Firefox, you cannot just specify the required folder and have all files downloaded to it; you also need to interact with native Windows forms and so on.
There are multiple options, like using AutoIt, keyboard commands, Robot, etc. But none of these options are stable: they require explicit waits and extra libraries, and they are not suitable when tests run in parallel. The other problem is what to do when the file isn't downloaded via a direct link, but the link is generated by a JavaScript command or received from the server, and cannot be extracted from the HTML.
All these problems can be solved; in this answer I'll show how.
The solution is written in C#; I believe the same can be implemented in Java (a rough Java sketch follows at the end of this answer).
The method DownloadFileIexplore will download the file to the specified filePath (folder + file name), e.g. DownloadFileIexplore(@"C:\my_folder\report.xlsx")
public void DownloadFileIexplore(string filePath)
{
    // Click the link.
    // A simple click could be used instead, but in my case it didn't work for all the links,
    // so I've replaced it with a click via an action:
    new Actions(Browser.Driver).MoveToElement(Element).Click(Element).Perform();

    // Different types of element can be used to download a file.
    // If the element contains a direct link, it can be extracted from the 'href' attribute.
    // If the element doesn't contain an href, or it is just a javascript command,
    // the link will be extracted from the browser's http requests.
    string downloadUrlOrLink = Element.GetAttribute("href") != null && !Element.GetAttribute("href").Contains("javascript")
        ? Element.GetAttribute("href")
        : GetRequestUrls().Last() ?? GetLinkInNewWindowIexplore();

    if (downloadUrlOrLink == null)
    {
        throw Log.Exception("Download url cannot be read from the element href attribute or from the recent http requests.");
    }

    // The last step is to download the file using CookieWebClient.
    // The method DownloadFile is available in System.Net.WebClient,
    // but we have to create a new class CookieWebClient, inherited from System.Net.WebClient,
    // with one overridden method.
    new CookieWebClient(GetCookies()).DownloadFile(downloadUrlOrLink, filePath);
}
/// <summary>
/// This method returns all the http requests sent from the browser.
/// The latest request was sent when the link (or button) was clicked to download the file,
/// so we just need to get the last element from the list: GetRequestUrls().Last().
/// Or, if the request for the file download isn't the last, find the required request by part of its url;
/// in my case it was 'common/handler', e.g.: GetRequestUrls().LastOrDefault(x => x.Contains("common/handler"))
/// </summary>
public List<string> GetRequestUrls()
{
    ReadOnlyCollection<object> requestsUrls = (ReadOnlyCollection<object>)
        Driver.ExecuteScript("return window.performance.getEntries().map(function(x) { return x.name });");
    return requestsUrls.Select(x => (string) x).ToList();
}
/// <summary>
/// In some cases, after clicking the Download button, a new window is opened in IE.
/// Driver.WindowHandles can return only one window instead of two.
/// To solve this problem, reset the IE security settings and set Enable Protected Mode for each zone.
/// </summary>
private string GetLinkInNewWindowIexplore()
{
    // Here it would be better to wait until the new window is opened.
    // In that case we have to count the windows before the click, pass that number as an argument
    // to GetLinkInNewWindowIexplore(), and wait until the count has increased by 1.
    var availableWindows = Driver.WindowHandles;
    if (availableWindows.Count > 1)
    {
        Driver.SwitchTo().Window(availableWindows[availableWindows.Count - 1]);
    }
    string url;
    try
    {
        url = Driver.Url;
    }
    catch (Exception)
    {
        url = Driver.ExecuteScript("return document.URL;").ToString();
    }
    Driver.SwitchTo().Window(Driver.WindowHandles[0]);
    return url;
}
public System.Net.CookieContainer GetCookies()
{
    CookieContainer cookieContainer = new CookieContainer();
    foreach (OpenQA.Selenium.Cookie cookie in Driver.Manage().Cookies.AllCookies)
    {
        cookieContainer.Add(new System.Net.Cookie
        {
            Name = cookie.Name,
            Value = cookie.Value,
            // The domain of your site; you can find it by tracking the http requests
            // sent from your site in the browser dev tools, Network tab.
            Domain = "your-site-domain"
        });
    }
    return cookieContainer;
}
public class CookieWebClient : WebClient
{
    private readonly CookieContainer _cookieContainer;

    public CookieWebClient(CookieContainer cookieContainer)
    {
        _cookieContainer = cookieContainer;
    }

    // It's necessary to override this method to add the cookies, because the file
    // cannot be downloaded by a non-authorized user.
    // ServerCertificateValidationCallback is set to true to avoid some possible certificate errors.
    protected override WebRequest GetWebRequest(Uri address)
    {
        ServicePointManager.ServerCertificateValidationCallback = delegate { return true; };
        WebRequest request = base.GetWebRequest(address);
        HttpWebRequest webRequest = request as HttpWebRequest;
        if (webRequest != null)
        {
            webRequest.CookieContainer = _cookieContainer;
        }
        return request;
    }
}
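As mentioned at the start of this answer, the same approach can be sketched in Java. The following is only a rough, untested sketch using the Selenium Java bindings and plain HttpURLConnection; the class and method names (IeDownloadHelper, getRequestUrls, downloadWithBrowserCookies) are made up for illustration and are not part of the original answer:

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.util.List;
import java.util.stream.Collectors;

import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;

public class IeDownloadHelper {

    // Returns the URLs of all requests the page has issued, using the same
    // window.performance.getEntries() trick as the C# version above.
    @SuppressWarnings("unchecked")
    public static List<String> getRequestUrls(WebDriver driver) {
        Object result = ((JavascriptExecutor) driver).executeScript(
                "return window.performance.getEntries().map(function(e) { return e.name; });");
        return ((List<Object>) result).stream()
                .map(Object::toString)
                .collect(Collectors.toList());
    }

    // Downloads downloadUrl to filePath, re-using the browser's cookies so the
    // request is made as the logged-in user (simplified: no HttpOnly cookies,
    // no certificate handling).
    public static void downloadWithBrowserCookies(WebDriver driver, String downloadUrl, String filePath)
            throws Exception {
        String cookieHeader = driver.manage().getCookies().stream()
                .map(c -> c.getName() + "=" + c.getValue())
                .collect(Collectors.joining("; "));

        HttpURLConnection connection = (HttpURLConnection) new URL(downloadUrl).openConnection();
        connection.setRequestProperty("Cookie", cookieHeader);
        try (InputStream in = connection.getInputStream()) {
            Files.copy(in, Paths.get(filePath), StandardCopyOption.REPLACE_EXISTING);
        } finally {
            connection.disconnect();
        }
    }
}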

CSV file truncated after download

The legacy J2EE application details:
JSP + Servlets (2.4)
WebSphere Application Server 7.0
The view uses IE frames, core JavaScript and so on
The user's actions:
The user's search returns > 900 rows and takes some time to display (no pagination)
The user then clicks the 'Download' button, which triggers another form submit.
Following is the code snippet that is executed in the action servlet:
import javax.servlet.ServletOutputStream;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// ActionGeneric, getFilename and getOutflow are application-specific classes/methods.
public class DownloadFileEvent extends ActionGeneric {

    java.text.SimpleDateFormat df_file = new java.text.SimpleDateFormat("yyyyMMdd_HHmmss");

    public void run(HttpServletRequest request, HttpServletResponse response) throws Exception {
        String errormsg = null;
        StringBuffer lineBuffer = null;
        // Read parameters.
        String _v = request.getParameter("view");
        // Start processing.
        try {
            ServletOutputStream out = response.getOutputStream();
            // Set the CSV content type and file name.
            response.setContentType("application/csv");
            response.setHeader("Content-Disposition", "inline; filename=" + getFilename(_v));
            lineBuffer = new StringBuffer();
            // Get the string response from some business method.
            String v_wrk = getOutflow(request, _v).toString();
            lineBuffer.append(v_wrk);
            lineBuffer.append("\r\n");
            out.print(lineBuffer.toString());
            out.flush();
            // End.
        } // end try
        catch (Exception e) {
            errormsg = e.getMessage(); // logged in the real code; no exception is ever seen
        } finally {
            // to do.
        }
    } // end run.
} // end class
The issue:
Since the download takes some time, the user moves to another screen.
When he comes back, the 'Open/Save/Save As' prompt has already been there for some time. When the user now saves or opens the file, instead of 900 rows there are fewer than 100 rows.
Surprisingly, if the open/save is done immediately, all the rows are downloaded.
I had put a log statement in the catch block, but no exception is logged anywhere.
The issue cannot be reproduced on my local machine (Windows, WAS 7) or in the SYSTEM test environment (Linux, WAS 8.5), but it surfaces on ACCEPTANCE (WAS 7, Linux) and PRODUCTION (WAS 7, Linux). ACCEPTANCE and PRODUCTION have load balancers and a web server set up, but SYSTEM test and local do not.
How shall I proceed?
Try a higher SendBufferSize in your web servers. If the client does not read while the dialog is up, this will prevent the web server from seeing writes eventually block and then time out.

How to set HTTP header in Apache JClouds?

I'm using Apache jclouds to connect to my OpenStack Swift installation. I managed to upload and download objects from Swift. However, I failed to see how to upload a dynamic large object to Swift.
To upload a dynamic large object, I need to upload all segments first, which I can do as usual. Then I need to upload a manifest object to combine them logically. The problem is that to tell Swift this is a manifest object, I need to set a special header, which I don't know how to do using the jclouds API.
Here's a dynamic large object example from openstack official website.
The code I'm using:
public static void main(String[] args) throws IOException {
    BlobStore blobStore = ContextBuilder.newBuilder("swift").endpoint("http://localhost:8080/auth/v1.0")
            .credentials("test:test", "test").buildView(BlobStoreContext.class).getBlobStore();
    blobStore.createContainerInLocation(null, "container");

    ByteSource segment1 = ByteSource.wrap("foo".getBytes(Charsets.UTF_8));
    Blob seg1Blob = blobStore.blobBuilder("/foo/bar/1").payload(segment1).contentLength(segment1.size()).build();
    System.out.println(blobStore.putBlob("container", seg1Blob));

    ByteSource segment2 = ByteSource.wrap("bar".getBytes(Charsets.UTF_8));
    Blob seg2Blob = blobStore.blobBuilder("/foo/bar/2").payload(segment2).contentLength(segment2.size()).build();
    System.out.println(blobStore.putBlob("container", seg2Blob));

    ByteSource manifest = ByteSource.wrap("".getBytes(Charsets.UTF_8));
    // TODO: set manifest header here
    Blob manifestBlob = blobStore.blobBuilder("/foo/bar").payload(manifest).contentLength(manifest.size()).build();
    System.out.println(blobStore.putBlob("container", manifestBlob));

    Blob dloBlob = blobStore.getBlob("container", "/foo/bar");
    InputStream input = dloBlob.getPayload().openStream();
    while (true) {
        int i = input.read();
        if (i < 0) {
            break;
        }
        System.out.print((char) i); // should print "foobar"
    }
}
The "TODO" part is my problem.
Edited:
It has been pointed out to me that jclouds handles large file uploads automatically, which is not so useful in our case. In fact, we do not know how large the file will be, or when the next segment will arrive, at the time we start to upload the first segment. Our API is designed so that clients can upload their files in chunks of their own chosen size, at their own chosen time, and, when done, call a 'commit' to turn these chunks into a file. So we want to upload the manifest ourselves here.
Following Everett Toews's answer, I've got my code running correctly:
public static void main(String[] args) throws IOException {
    CommonSwiftClient swift = ContextBuilder.newBuilder("swift").endpoint("http://localhost:8080/auth/v1.0")
            .credentials("test:test", "test").buildApi(CommonSwiftClient.class);

    SwiftObject segment1 = swift.newSwiftObject();
    segment1.getInfo().setName("foo/bar/1");
    segment1.setPayload("foo");
    swift.putObject("container", segment1);

    SwiftObject segment2 = swift.newSwiftObject();
    segment2.getInfo().setName("foo/bar/2");
    segment2.setPayload("bar");
    swift.putObject("container", segment2);

    swift.putObjectManifest("container", "foo/bar2");

    SwiftObject dlo = swift.getObject("container", "foo/bar", GetOptions.NONE);
    InputStream input = dlo.getPayload().openStream();
    while (true) {
        int i = input.read();
        if (i < 0) {
            break;
        }
        System.out.print((char) i);
    }
}
jclouds handles writing the manifest for you. Here are a couple of examples that might help you, UploadLargeObject and largeblob.MainApp.
Try using
Map<String, String> manifestMetadata = ImmutableMap.of(
"X-Object-Manifest", "<container>/<prefix>");
BlobBuilder.userMetadata(manifestMetadata)
If that doesn't work you might have to use the CommonSwiftClient like in CrossOriginResourceSharingContainer.java.
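Putting that fragment together with the question's BlobStore code, a minimal sketch might look like this (the container name and segment prefix are placeholders; if jclouds stores the value as user metadata with an X-Object-Meta- prefix rather than as the raw X-Object-Manifest header, fall back to CommonSwiftClient as noted above):

import java.util.Map;

import org.jclouds.blobstore.BlobStore;
import org.jclouds.blobstore.domain.Blob;

import com.google.common.collect.ImmutableMap;
import com.google.common.io.ByteSource;

public class ManifestSketch {
    static void putManifest(BlobStore blobStore) {
        // The manifest itself is an empty object whose metadata points at the
        // segment prefix ("<container>/<prefix>" from the answer above).
        Map<String, String> manifestMetadata =
                ImmutableMap.of("X-Object-Manifest", "container/foo/bar/");

        Blob manifestBlob = blobStore.blobBuilder("/foo/bar")
                .payload(ByteSource.empty())
                .contentLength(0)
                .userMetadata(manifestMetadata)
                .build();

        blobStore.putBlob("container", manifestBlob);
    }
}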

Connecting to PC to view shared folders from Android device

I am working on a Samba client for Android. Given an IP address, it should connect to it and browse the shared folders.
For this I use JCIFS. I dropped the jar into my Android project and added the following code to connect to the PC and get the list of files:
private void connectToPC() throws IOException {
    String ip = "x.x.x.x";
    String user = Constants.username + ":" + Constants.password;
    String url = "smb://" + ip;
    NtlmPasswordAuthentication auth = new NtlmPasswordAuthentication(user);
    SmbFile root = new SmbFile(url, auth);
    String[] files = root.list();
    for (String fileName : files) {
        Log.d("GREC", "File: " + fileName);
    }
}
And I get in return: jcifs.smb.SmbAuthException: Logon failure: unknown user name or bad password.
But the credentials are correct. I also tried another Samba client from the Android market that uses JCIFS, and it successfully connected to that IP, so obviously I am doing something wrong here, but I don't know what exactly.
Any help is highly appreciated.
In the end I managed to connect to the PC successfully. The issue turned out to be in the NtlmPasswordAuthentication constructor.
So, instead of this:
String user = Constants.username + ":" + Constants.password;
NtlmPasswordAuthentication auth = new NtlmPasswordAuthentication(user);
I changed to this:
NtlmPasswordAuthentication auth = new NtlmPasswordAuthentication("",
Constants.username, Constants.password);
I don't know why; perhaps it's because of the ':' special character, perhaps because of Android, but passing an empty domain name, the user name and the password separately to the constructor solved the issue.
Since some people will get to this topic if they have a similar problem with Android and JCIFS, these are other common problems when trying to make it work:
*Put the .jar specifically in the /libs folder of your Android project (not just via "build path")
*Be sure that your project has the Internet permission: What permission do I need to access Internet from an android application?
*Also be sure that your JCIFS code runs in a separate thread from the UI (in other words, use the AsyncTask class): how to use method in AsyncTask in android?
*Code:
protected String doInBackground(String... params) {
    SmbFile[] domains;
    String username = USERNAME;
    String password = PASSWORD;
    NtlmPasswordAuthentication auth = new NtlmPasswordAuthentication("", username, password);
    try {
        SmbFile sm = new SmbFile(SMB_URL, auth);
        domains = sm.listFiles();
        for (int i = 0; i < domains.length; i++) {
            SmbFile[] servers = domains[i].listFiles();
            for (int j = 0; j < servers.length; j++) {
                Log.w(" Files ", "\t" + servers[j]);
            }
        }
    } catch (SmbException e) {
        e.printStackTrace();
    } catch (MalformedURLException e) {
        e.printStackTrace();
    }
    return "";
}
These were the problems I encountered while trying to make JCIFS work on Android; I hope this helps someone, regards.
Maybe I can help other people too.
I had the problem that I used thread.run() instead of thread.start() to execute the SMB code in a Runnable. I searched a long time for an answer, but nothing fixed my problem.
Then a friend explained to me the difference between thread.run() and thread.start():
run(): executes the method (for example the run() method of a Runnable) like a normal method call (synchronous, on the current thread)
start(): starts the thread, which runs the Runnable in its own thread (asynchronous)
And for SMB you need an asynchronous thread, which is why you need to call thread.start()!
Maybe someone made the same mistake as I did.
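To make the difference concrete, here is a tiny stand-alone sketch in plain Java (no Android classes) showing that run() executes on the calling thread while start() spawns a new one:

public class RunVsStart {
    public static void main(String[] args) {
        Runnable smbWork =
                () -> System.out.println("SMB code running on: " + Thread.currentThread().getName());

        // run() just invokes the Runnable on the current thread (synchronous) -
        // on Android that would be the UI thread, which is exactly what fails for network code.
        new Thread(smbWork).run();    // prints "... running on: main"

        // start() schedules the Runnable on its own thread (asynchronous),
        // which is what JCIFS/SMB calls need.
        new Thread(smbWork).start();  // prints "... running on: Thread-0" (or similar)
    }
}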
