As I mentioned before, I'm using jQuery to refresh/update a webcam image.
This works just fine if you want to update the image every 5 or 10 seconds.
But when you try to do a stream at 10-15 fps, it seems to run into problems with most browsers.
The problem seems to be that it sends a new request before the first one is done.
Is there a way to wait for the first request to finish before sending a new update request for the webcam image? To me it looks like requests stack up if there is a little delay on the server with the image.
Sorry if I explained it a little badly, but... I'm Norwegian and blond. Not the best combination. :)
The webcam image is a single URL,
e.g. http://www.ohoynothere.com/image.jpg
Old code I use:
$(document).ready(function() {
    setInterval(updateCamera, 3000);
});

function updateCamera() {
    $('.online2').each(function() {
        var url = $(this).attr('src').split('&')[0];
        $(this).attr('src', url + '&rand=' + new Date().getTime());
    });
}
Definitely!
It sounds like your best bet would be to use the jQuery.ajax() method (http://api.jquery.com/jQuery.ajax/) or the .get() method to chain your requests. Basically, you want a JavaScript function that requests the image with an .ajax() or .get() call; in the response handler, simply call the function again:
function getMyImage() {
    // Cache-bust the URL so the browser actually fetches a new frame.
    var url = image_url + '?rand=' + new Date().getTime();
    jQuery.get(url, function() {
        // The previous request has finished, so swap in the new frame
        // and only then ask for the next one.
        jQuery('#img-name').attr('src', url);
        getMyImage();
    });
}
Whenever getMyImage successfully retrieves a new frame from the webcam, it immediately goes out and requests the next one, but never before the previous request has completed.
If I haven't understood what you're trying to do, please let me know. It would be helpful to know more about how the webcam image is retrieved (i.e. is it the same image src returned every time, etc.).
I am using the Wicket framework.
I have a requirement to send several individual files to the client browser (zipping them up is not an option).
I have added to my page an AJAXDownload class that extends AbstractAjaxBehavior - a solution for sending files to the client - like this:
download = new AJAXDownload() {
    @Override
    protected IResourceStream getResourceStream() {
        return new FileResourceStream(file) {
            @Override
            public void close() throws IOException {
                super.close();
                file.delete();
            }
        };
    }
};
add(download);
At another point in my code I try to initiate the download of several files to the client in an Ajax request, looping through an ArrayList of files and triggering the AJAXDownload each time:
ArrayList<File> labelList = printLabels();
for (int i = 0; i < labelList.size(); i++) {
    file = labelList.get(i);
    // initiate the download
    download.initiate(target);
}
However, only one of these files is being sent to the client. I have checked, and the files are definitely being created on the server side, but only one of them reaches the client.
Can anyone give me an idea what I am doing wrong?
Thanks
You are doing everything correctly!
I don't know how to solve your problem, but I'll try to explain what happens so someone else can help:
The Ajax response has several entries like:
<evaluate>document.location=/some/path/to/a/file</evaluate>
wicket-ajax.js simply loops over the evaluations and executes them. If there is one entry, everything is OK - the file gets downloaded. But if there are more, the browser receives several requests to change its location within a very short time, and apparently it drops all but one of them.
An obvious solution would be to use callbacks/promises - when a download finishes, trigger the next one. The problem is that there is no way to receive a notification from the browser that such a download has finished. Or at least I don't know of one.
One could roll a solution based on timeouts (i.e. setTimeout), but it would be error prone.
I hope this information is sufficient for someone else to give you the solution!
I have a portlet. When the portlet loads, before the first view is rendered, in some cases there is a need to call a repository which changes data in the database. I won't go into more detail about why this is necessary, and answers about this being a design flaw are not helpful. I am aware that it is a design flaw, but I would still like to find an alternative solution to the following problem:
The problem with this set-up is that browsers send preloading requests. For example, say the URL of the page where the portlet resides is /test-portlet. When you type it into your address bar and it is in your browser history, the browser already sends a GET request to the page while suggesting it to you. If you press Enter before that first GET request has resolved, the browser sends a new GET request. This means the portlet receives two separate requests which it starts to process in parallel. The first database procedure might work correctly, but given the nature of the procedure, the second call usually throws an exception.
What would be a nice clean way to deal with the aforementioned problem from the Java application?
Sidenote: I am using Spring MVC.
A simple example of a possible controller:
@RequestMapping
public String index(Model model, RenderRequest request) {
    String username = dummyRepository.changeSomeData(request.getAttribute("userId"));
    model.addAttribute("userName", username);
    return "view";
}
I would be interested in a solution that blocks the first execution altogether, for example some kind of redirect to a POST from the controller which the browser would not trigger while preloading. Not sure if that is achievable, though.
Using locks, I think you could solve it by making the second request wait for the first one to finish before processing it. I don't have experience with locks in Java, but I found another Stack Exchange post about file locks in Java:
How can I lock a file using java (if possible)
Please refer to this answer; it might help you detect and ignore some preloading requests. However, you should also make sure the 'worst case' works, perhaps using the locking suggested by @jpeg, but it could be as easy as using a synchronized block somewhere.
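To make the 'worst case' concrete, here is a minimal sketch of the synchronized-block idea applied to the controller from the question. The lock object, the @Autowired field and its DummyRepository type are assumptions for illustration; only dummyRepository and the handler itself come from the original snippet:
import javax.portlet.RenderRequest;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestMapping;

@Controller
public class TestPortletController {

    // Hypothetical lock object guarding the fragile repository call.
    private final Object repositoryLock = new Object();

    @Autowired
    private DummyRepository dummyRepository; // hypothetical type, mirroring the question's field

    @RequestMapping
    public String index(Model model, RenderRequest request) {
        String username;
        synchronized (repositoryLock) {
            // Only one request at a time reaches the data-changing call,
            // so the preloading request and the "real" request no longer overlap here.
            username = dummyRepository.changeSomeData(request.getAttribute("userId"));
        }
        model.addAttribute("userName", username);
        return "view";
    }
}
Note that this does not stop the preloading request from running the procedure; it only guarantees the two requests cannot run it at the same time, which may or may not be enough depending on why the second call fails.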
Since I don't see that Chrome adds any specific header (or otherwise notifies the server about the prerendering state), it is probably not possible to detect it on the server side... at least not directly. You can, however, detect it on the client side and later combine that with a server call.
Notice that you can detect prerendering on the client side:
if (document.webkitVisibilityState == 'prerender' || document.visibilityState == 'prerender') {
    // prerendering takes place
}
Now, you can break preloading on the client side by showing an alert box when the browser is in the prerendering state (you could probably achieve the same with some JavaScript error instead of alert()):
if (document.webkitVisibilityState == 'prerender' || document.visibilityState == 'prerender') {
    alert('this is an alert during prerendering..');
}
Now when Chrome prerenders the page, the prerender will fail, because the JavaScript alert prevents the browser from continuing to execute JavaScript.
If you open chrome://net-internals/#prerender in Chrome, you can track when and for which pages Chrome executes prerendering. In the case of the example above (with the alert box during prerendering) you will see something like:
Link Rel Prerender (cross domain)   http://some.url.which.is.preloaded   Javascript Alert   2015-06-07 19:26:18.758
The final state - Javascript Alert - proves that Chrome failed to preload the page (I have tested this).
Now how can this solve your issue? Well, you can combine this with an asynchronous call (AJAX) and load the actual content (from another URL) depending on whether the page is being prerendered or not.
Consider the following code (which might be rendered by your portlet under the URL /test-portlet):
<html>
<body>
    <div id="content"></div>
    <script>
        if (document.webkitVisibilityState == 'prerender' || document.visibilityState == 'prerender') {
            // when Chrome uses prerendering we block the request with an alert
            alert('this is an alert during prerendering..');
        } else {
            // when no prerendering takes place we load the actual content asynchronously
            var xhr = new XMLHttpRequest();
            xhr.onreadystatechange = function() {
                if (xhr.readyState == 4) {
                    // when the content is loaded we place the html inside the "content" div
                    document.getElementById('content').innerHTML = xhr.responseText;
                }
            };
            xhr.open('GET', '/hidden-portlet', true); // we call the actual portlet
            xhr.send(null);
        }
    </script>
</body>
</html>
As you can see, /hidden-portlet is only loaded when the browser loads the page normally (without preloading). The server-side handler behind /hidden-portlet (which can be another portlet/servlet) contains the actual code which should not be executed during prerendering. So it is /hidden-portlet which executes
dummyRepository.changeSomeData(request.getAttribute("userId"));
This portlet can also return a normal view (rendered HTML), which is then asynchronously placed on the /test-portlet page thanks to the document.getElementById('content').innerHTML = xhr.responseText; trick shown above.
So to summarize: the portlet under the address /test-portlet only returns HTML with JavaScript code which triggers the actual portlet.
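For completeness, here is a rough sketch of what the handler behind /hidden-portlet could look like, written here as a plain Spring MVC controller rather than a real portlet; the mapping, the view name, the HttpServletRequest attribute access and the DummyRepository type are assumptions, and only the changeSomeData call comes from the question:
import javax.servlet.http.HttpServletRequest;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestMapping;

@Controller
public class HiddenPortletController {

    @Autowired
    private DummyRepository dummyRepository; // hypothetical type, as in the question

    // Only the real page load reaches this handler, because the prerendered
    // page never executes the XHR that calls it.
    @RequestMapping("/hidden-portlet")
    public String hidden(Model model, HttpServletRequest request) {
        String username = dummyRepository.changeSomeData(request.getAttribute("userId"));
        model.addAttribute("userName", username);
        // The rendered HTML of this view is what ends up inside the "content" div.
        return "hiddenView";
    }
}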
If you have many fragile portlets, you can take this even further and parametrize /test-portlet with a request parameter like /test-portlet?actualUrl=hidden-portlet, so that the address of the actual portlet is taken from the URL (it can be read as a request parameter on the server side). The server will in this case dynamically render the URL which should be loaded:
So instead of hardcoded:
xhr.open('GET', '/hidden-portlet', true);
you will have
xhr.open('GET', '/THIS_IS_DYNAMICALLY_REPLACED_EITHER_ON_SERVER_OR_CLIENT_SIDE_WITH_THE_ADDRES_FROM_URL', true);
I was trying to call the following web service from my Android app; it hung, then completed without returning the result:
Web service: http://androidexample.com/media/webservice/JsonReturn.php
However, when I clicked on the link, it worked fine - the JSON file was displayed. Yet it would not work in my app.
But now it works fine in my Android app, so my guess is that it was temporarily down. How can I know whether a web service is up and running for an Android app to consume?
Typically, web services are designed to have a status page that can return status text or an HTTP return code to indicate service status.
If the service doesn't have one, you can write a function that periodically makes a very basic request with a known result to determine its state. This is much better than doing a simple ping.
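As a rough illustration, a basic availability check could look something like the sketch below; the class name is made up, the URL in the usage comment is the one from the question, and on Android this has to run off the main thread (for example in an AsyncTask or a background thread):
import java.net.HttpURLConnection;
import java.net.URL;

public final class ServiceStatus {

    // Returns true if the service answered with a 2xx status code.
    public static boolean isUp(String serviceUrl) {
        HttpURLConnection connection = null;
        try {
            connection = (HttpURLConnection) new URL(serviceUrl).openConnection();
            connection.setConnectTimeout(5000); // fail fast if the host is unreachable
            connection.setReadTimeout(5000);
            int code = connection.getResponseCode();
            return code >= 200 && code < 300;
        } catch (Exception e) {
            // Unknown host, timeout, refused connection, etc. all count as "down".
            return false;
        } finally {
            if (connection != null) {
                connection.disconnect();
            }
        }
    }
}

// Usage, e.g. from a background thread:
// boolean up = ServiceStatus.isUp("http://androidexample.com/media/webservice/JsonReturn.php");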
If it was down, it would most likely return an HTML error page, which your app would try to parse, and that would cause an error.
I had a similar issue, because I needed to know whether the server was returning HTML or the correct JSON. To handle this, I created the ArrayList I was about to use outside of the try/catch of the parsing code. You should do the same if you are using a string.
What I mean is, use:
ArrayList<Something> arrayList = new ArrayList<Something>();
String testString = ""; instead of String testString = null;
At one point I was using only ArrayList<Something> arrayList;, which is incorrect. If the server then returns HTML, you won't get an error; you will simply end up with an empty ArrayList or an empty string.
You can then plan for that and show some sort of error message. This way you only need one network request, but you can still handle both getting the data back and the server being down.
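Here is a minimal sketch of that pattern, assuming the service returns a JSON array; the class name, the helper method and the "name" field are made up purely for illustration:
import java.util.ArrayList;

import org.json.JSONArray;
import org.json.JSONObject;

public class ResponseParser {

    public static ArrayList<String> parseNames(String response) {
        // Initialized up front, so a failed parse still returns an empty list
        // instead of leaving the variable null.
        ArrayList<String> names = new ArrayList<String>();
        try {
            JSONArray items = new JSONArray(response);
            for (int i = 0; i < items.length(); i++) {
                JSONObject item = items.getJSONObject(i);
                names.add(item.getString("name")); // "name" is a hypothetical field
            }
        } catch (Exception e) {
            // The server returned HTML (e.g. an error page) or malformed JSON;
            // fall through and return the empty list so the caller can show an error message.
        }
        return names;
    }
}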
I need to upload multiple files from a JSP. I am using ajaxFileUpload.js to send the files to the server side, and I am validating the file size on the server side for each file. I need to show a message based on that validation, and this is where I have a problem: I am not able to show that message. Could someone help me with this, please?
I have not used the plugin, but what I have done previously in similar situations is send different markers back to the client side. For example, for an upload that exceeds the file size limit, you can start the response with something like 'ERROR:' and then look for this marker in the function receiving the response, branching to different logic. You obviously have to parse the response and look for the marker.
Looking quickly at the plugin on GitHub, it looks like the usage is:
$('input[type="file"]').ajaxfileupload({
    'action': '/upload.php',
    'params': {
        'extra': 'info'
    },
    'onComplete': function(response) {
        console.log('custom handler for file:');
        alert(JSON.stringify(response));
    },
    'onStart': function() {
        if (weWantedTo) return false; // cancels upload
    },
    'onCancel': function() {
        console.log('no file selected');
    }
});
So what I think you can do in the onComplete function is something like:
if (response.search("ERROR:") != -1) {
    // error condition
    // add your message for the front end here
} else {
    // non-error condition, continue with your regular flow
}
Does this make sense and relate to what you are trying to do?
I'm making a web application for BlackBerry and I really need the current URL.
In the description of documentUrl, it says
This method will return the URL of the currently loaded page of this BrowserField Instance
My code is:
_bf2.requestContent("google.com");
add(_bf2);
Global.c = _bf2.getDocumentUrl();
Global.be = new BasicEditField("URL: " + Global.c, Global.c);
add(Global.be);
And the weird thing is that www.google.com gets loaded in the BrowserField, but getDocumentUrl() returns null.
This is my current code:
BrowserField _bf2 = new BrowserField();
MYBrowserFieldListener _listener = new MYBrowserFieldListener();
_bf2.requestContent("google.com");
_bf2.addListener(_listener);
String url = _bf2.getDocumentUrl();
Global.be = new BasicEditField("URL: " + url, url);
add(Global.be);
add(_bf2);
I changed it to
final BrowserField _bf2 = new BrowserField();
_bf2.requestContent("google.com");
//_bf2.addListener(listener);
Global.be = new BasicEditField("URL: " + Global.c, Global.c);
add(Global.be);
add(_bf2);
_bf2.addListener(new BrowserFieldListener() {
    public void documentLoaded(BrowserField _bf2, Document document) throws Exception {
        Global.c = _bf2.getDocumentUrl();
    }
});
But it still returns null. Can someone please tell me how to fix this? Thanks in advance!
I would say that Arhimed has answered your question. An HTTP request is a very time-consuming operation (compared to CPU work) and will block until the server responds. I suspect the RIM programmers coded the requestContent() method as per their own recommendations and fetch the web content on a separate thread. So requestContent() returns immediately, and when you call getDocumentUrl() it is still null, because the fetch thread has probably not even connected to the server at that point.
You will need to implement a BrowserFieldListener and listen for documentLoaded().
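As a rough sketch of that approach, building on the code from the question (Global.c and Global.be are the question's own fields, everything else is the standard BrowserField/UiApplication API), the listener could update the field only once the document has actually loaded:
_bf2.addListener(new BrowserFieldListener() {
    public void documentLoaded(BrowserField browserField, Document document) throws Exception {
        // getDocumentUrl() is only meaningful once the page has loaded.
        final String url = browserField.getDocumentUrl();
        // UI updates must happen on the event thread.
        UiApplication.getUiApplication().invokeLater(new Runnable() {
            public void run() {
                Global.c = url;
                Global.be.setText("URL: " + url);
            }
        });
    }
});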