I was searching for a way to communicate between multiple tabs or windows in a browser (on the same domain, not cross-origin) without leaving traces. There were several solutions:
using the window object
postMessage
cookies
localStorage
The first is probably the worst solution - you need to open a window from your current window, and then you can communicate only as long as you keep the windows open. If you reload the page in any of the windows, you most likely lose the communication.
The second approach, using postMessage, even enables cross-origin communication, but it suffers from the same problem as the first approach: you need to maintain a window object.
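A minimal sketch of that window-object pattern (child.html is a hypothetical page on the same site):

// in the opener - you must keep the reference to the child window alive
var child = window.open('/child.html');
window.addEventListener('message', function (ev) {
    if (ev.origin !== window.location.origin) return; // accept only our own origin
    console.log(ev.data);
});

// in child.html, once it has loaded
window.opener.postMessage('hello from the child', window.location.origin);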
The third way, using cookies, stores some data in the browser, which can effectively look like sending a message to all windows on the same domain. The problem is that you can never know whether all tabs have already read the "message" before cleaning it up, so you have to implement some sort of timeout and read the cookie periodically. Furthermore, you are limited by the maximum cookie length, which is 4 KB.
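A rough sketch of that cookie polling (the key name and interval are illustrative; note the cleanup race it suffers from):

// write a "message" into a cookie
function cookie_broadcast(message) {
    document.cookie = 'message=' + encodeURIComponent(JSON.stringify(message)) + '; path=/';
}

// every tab polls for it and cleans it up - other tabs may miss it
setInterval(function () {
    var m = document.cookie.match(/(?:^|;\s*)message=([^;]*)/);
    if (!m || !m[1]) return;
    var message = JSON.parse(decodeURIComponent(m[1]));
    document.cookie = 'message=; path=/; expires=Thu, 01 Jan 1970 00:00:00 GMT'; // clean up
    console.log(message);
}, 500);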
The fourth solution, using localStorage, seemed to overcome the limitations of cookies, and it can even be listened to using events. How to use it is described below.
You may be better off using BroadcastChannel for this purpose; see the other answers below. Yet if you still prefer to use localStorage for communication between tabs, do it this way:
In order to get notified when a tab sends a message to other tabs, you simply need to bind to the 'storage' event. In all tabs, do this:
$(window).on('storage', message_receive);
The function message_receive will be called every time you set any localStorage value in any other tab. The event also carries the newly set localStorage value, so you don't even need to parse the localStorage object itself. This is very handy, because you can reset the value right after it has been set, to effectively clean up any traces. Here are the functions for messaging:
// Use localStorage for messaging: set a message in localStorage and clear it right away.
// This is a safe way to communicate with other tabs while not leaving any traces.
//
function message_broadcast(message)
{
    localStorage.setItem('message', JSON.stringify(message));
    localStorage.removeItem('message');
}

// Receive a message (ev.originalEvent, because the handler was bound via jQuery).
//
function message_receive(ev)
{
    if (ev.originalEvent.key != 'message') return; // ignore other keys
    var message = JSON.parse(ev.originalEvent.newValue);
    if (!message) return; // ignore empty msg or msg reset
    // Here you act on messages.
    // You can send objects like { 'command': 'doit', 'data': 'abcd' }
    if (message.command == 'doit') alert(message.data);
    // etc.
}
So now, once your tabs are bound to the storage event and you have these two functions implemented, you can simply broadcast a message to the other tabs by calling, for example:
message_broadcast({'command':'reset'})
Remember that the exact same message sent twice will be propagated only once, so if you need to repeat messages, add some unique identifier to them, like
message_broadcast({'command':'reset', 'uid': (new Date).getTime()+Math.random()})
Also remember that the current tab which broadcasts the message doesn't actually receive it; only the other tabs or windows on the same domain do.
You may ask what happens if the user loads a different webpage or closes the tab just after the setItem() call, before the removeItem(). Well, from my own testing, the browser puts unloading on hold until the entire message_broadcast() function has finished. I put a very long for() loop in there to test it, and it still waited for the loop to finish before closing. If the user kills the tab right in between, the browser won't have enough time to save the message to disk, so this approach seems to me like a safe way to send messages without any traces.
There is a modern API dedicated to this purpose - Broadcast Channel
It is as easy as:
var bc = new BroadcastChannel('test_channel');
bc.postMessage('This is a test message.'); /* send */
bc.onmessage = function (ev) { console.log(ev); } /* receive */
There is no need for the message to be just a DOMString. Any kind of object can be sent.
Apart from the cleaner API, that is probably the main benefit of this API - no object stringification.
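For instance, building on the bc channel above (a small sketch):

bc.postMessage({ command: 'reset', data: [1, 2, 3] }); // structured clone - no JSON.stringify needed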
It is currently supported only in Chrome and Firefox, but you can find a polyfill that uses localStorage.
For those searching for a solution not based on jQuery, this is a plain JavaScript version of the solution provided by Thomas M:
window.addEventListener("storage", message_receive);
function message_broadcast(message) {
localStorage.setItem('message',JSON.stringify(message));
}
function message_receive(ev) {
if (ev.key == 'message') {
var message=JSON.parse(ev.newValue);
}
}
Check out AcrossTabs - easy communication between cross-origin browser tabs. It uses a combination of the postMessage and sessionStorage APIs to make communication much easier and more reliable.
There are different approaches and each one has its own advantages and disadvantages. Let’s discuss each:
LocalStorage
Pros:
Web storage can be viewed simplistically as an improvement on cookies, providing much greater storage capacity. If you look at the Mozilla source code, you can see that 5120 KB (5 MB, which equals about 2.5 million characters on Chrome, since strings are stored in UTF-16) is the default storage size for an entire domain. This gives you considerably more space to work with than a typical 4 KB cookie.
The data is not sent back to the server for every HTTP request (HTML, images, JavaScript, CSS, etc.) - reducing the amount of traffic between client and server.
The data stored in localStorage persists until explicitly deleted. Changes made are saved and available for all current and future visits to the site.
Cons:
It works on the same-origin policy. So, stored data will only be available on the same origin.
Cookies
Pros:
Compared to the others, there's no real advantage, AFAIK.
Cons:
The 4 KB limit is for the entire cookie, including name, value, expiry date, etc. To support most browsers, keep the name under 4000 bytes, and the overall cookie size under 4093 bytes.
The data is sent back to the server for every HTTP request (HTML, images, JavaScript, CSS, etc.) - increasing the amount of traffic between client and server.
Typically, the following are allowed:
300 cookies in total
4096 bytes per cookie
20 cookies per domain
81920 bytes per domain (20 cookies of the maximum size 4096 bytes = 81920 bytes)
sessionStorage
Pros:
It is similar to localStorage.
Changes are only available per window (or tab, in browsers like Chrome and Firefox). Changes made are saved and available for the current page, as well as future visits to the site in the same window. Once the window is closed, the storage is deleted.
Cons:
The data is available only inside the window/tab in which it was set.
The data is not persistent, i.e., it will be lost once the window/tab is closed.
Like localStorage, it works on the same-origin policy. So, stored data will only be available on the same origin.
PostMessage
Pros:
Safely enables cross-origin communication.
As a data point, the WebKit implementation (used by Safari and Chrome) doesn't currently enforce any limits (other than those imposed by running out of memory).
Cons:
You need to open a window from the current window, and then you can communicate only as long as you keep the windows open.
Security concerns - if you listen for string messages via postMessage, you will also pick up postMessage events published by other JavaScript plugins, so be sure to set a targetOrigin when sending and to sanity-check the origin and the data in the message listener, as sketched below.
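A minimal sketch of both precautions (https://myapp.example is a hypothetical origin, and otherWindow is a reference obtained from window.open() or window.opener):

// sender: restrict delivery to the intended origin
otherWindow.postMessage({ command: 'doit', data: 'abcd' }, 'https://myapp.example');

// receiver: verify the sender's origin before trusting the data
window.addEventListener('message', function (ev) {
    if (ev.origin !== 'https://myapp.example') return; // drop messages from unknown origins
    if (!ev.data || typeof ev.data.command !== 'string') return; // sanity-check the payload
    console.log(ev.data.command);
});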
A combination of PostMessage + SessionStorage
Use postMessage to communicate between multiple tabs, and at the same time use sessionStorage in all the newly opened tabs/windows to persist the data being passed. Data will be persisted as long as the tabs/windows remain open. So, even if the opener tab/window gets closed, the opened tabs/windows will still have the entire data, even after being refreshed.
I have written a JavaScript library for this, named AcrossTabs, which uses the postMessage API to communicate between cross-origin tabs/windows and sessionStorage to persist the opened tabs'/windows' identity as long as they live.
I've created a library sysend.js for sending messages between browser tabs and windows. The library doesn't have any external dependencies.
You can use it for communication between tabs/windows in the same browser and domain. The library uses BroadcastChannel, if supported, or the storage event from localStorage.
The API is very simple:
sysend.on('foo', function(data) {
    console.log(data);
});
sysend.broadcast('foo', {message: 'Hello'});
sysend.broadcast('foo', "hello");
sysend.broadcast('foo', ["hello", "world"]);
sysend.broadcast('foo'); // empty notification
When your browser supports BroadcastChannel, it sends a literal object (which is in fact auto-serialized by the browser); if not, the object is serialized to JSON first and deserialized on the other end.
The recent version also supports cross-domain communication. It requires including a special proxy.html file on the target domain and calling the proxy function from the source domain:

sysend.proxy('https://target.com');

(proxy.html is a very simple HTML file that only has one script tag with the library.)

If you want two-way communication, you need to do the same on the other domain.

Here is a demo.
NOTE: If you implement the same functionality using localStorage yourself, beware of an issue in Internet Explorer: there, the storage event is also sent to the same window that triggered the change, while in other browsers it's only invoked for the other tabs/windows. One workaround is sketched below.
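A hedged sketch of that workaround: tag each message with a per-tab ID and ignore your own messages (TAB_ID and the key name are illustrative):

var TAB_ID = Math.random().toString(36).slice(2); // unique per tab

function broadcast(data) {
    localStorage.setItem('message', JSON.stringify({ sender: TAB_ID, data: data }));
    localStorage.removeItem('message');
}

window.addEventListener('storage', function (ev) {
    if (ev.key !== 'message' || !ev.newValue) return;
    var msg = JSON.parse(ev.newValue);
    if (msg.sender === TAB_ID) return; // IE fires in the sending tab too - skip our own
    console.log(msg.data);
});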
Another method that people should consider using is shared workers. I know it's a cutting-edge concept, but you can create a relay on a shared worker that is much faster than localStorage and doesn't require any relationship between the parent/child windows, as long as you're on the same origin; a sketch follows below.
See my answer here for some further discussion about this.
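A minimal sketch of such a relay, assuming a worker script served from your origin as relay-worker.js (the filename and message format are illustrative):

// relay-worker.js: forward every message to all other connected tabs
var ports = [];
onconnect = function (e) {
    var port = e.ports[0];
    ports.push(port); // note: a real relay would also prune ports of closed tabs
    port.onmessage = function (msg) {
        for (var i = 0; i < ports.length; i++) {
            if (ports[i] !== port) ports[i].postMessage(msg.data);
        }
    };
};

// in each tab:
var worker = new SharedWorker('relay-worker.js');
worker.port.onmessage = function (ev) { console.log('from another tab:', ev.data); };
worker.port.postMessage('hello');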
There's a tiny open-source component to synchronise and communicate between tabs/windows of the same origin (disclaimer - I'm one of the contributors!), based around localStorage:
TabUtils.BroadcastMessageToAllTabs("eventName", eventDataString);

TabUtils.OnBroadcastMessage("eventName", function (eventDataString) {
    DoSomething();
});

TabUtils.CallOnce("lockname", function () {
    alert("I run only once across multiple tabs");
});
P.S.: I took the liberty of recommending it here, since most of the "lock/mutex/sync" components fail on websocket connections when events happen almost simultaneously.
I wrote an article on this on my blog: Sharing sessionStorage data across browser tabs.
Using a library I created, storageManager, you can achieve this as follows:
storageManager.savePermanentData('data', 'key'); //saves permanent data
storageManager.saveSyncedSessionData('data', 'key'); //saves session data to all opened tabs
storageManager.saveSessionData('data', 'key'); //saves session data to current tab only
storageManager.getData('key'); //retrieves data
There are other convenient methods to handle other scenarios as well.
This is a development of the storage part of Thomas M's answer, for Chrome. We must add a listener:
window.addEventListener("storage", (e)=> { console.log(e) } );
Loading/saving an item in storage will not fire this event in the tab that makes the change - the browser fires it only in the other tabs. To have the current tab react as well, trigger it manually (note that a manually dispatched Event('storage') carries no key/newValue data):

window.dispatchEvent( new Event('storage') ); // THIS IS IMPORTANT ON CHROME

And now all open tabs, including the current one, will receive the event.
Related
There is a RequestMethod named PATCH.
To use this method, we can define @PatchMapping for a REST endpoint.
As per my understanding, it is meant for partially updating an object in the database.
Generally, we use POST or PUT calls to perform a save or update. So, it is still not clear what the exact use cases of @PatchMapping are, and why I can't just use PUT instead of PATCH.
still not clear what are exact use cases of PatchMapping and why can't I just use PUT instead of PATCH?
PUT (defined by RFC 7231) and PATCH (defined by RFC 5789) are two different methods used for a similar purpose: to request that the server make its representation of a resource match the representation on the client.
Imagine, if you would, trying to update a web page provided by a server. The client first obtains a recent copy of the server's representation:
GET /foo
and then, using the client's favorite local HTML editor, makes changes to this private copy. When the client has finished making the changes, we want to send those changes back to the server to be used.
The straightforward way to do this in HTTP is to simply send the entire updated representation back to the server:
PUT /foo
<html>....</html>
When the representation is very large (compared with the HTTP headers), and the edits are very small (compared to the document), then PUT becomes a somewhat "expensive" way to achieve what ought to be a small thing.
To that end, we might also support PATCH, so that instead of sending the entire document, we just send a representation of the changes we made: a patch document.
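For example, using the JSON Patch format (RFC 6902, media type application/json-patch+json), such a request might look like:

PATCH /foo
Content-Type: application/json-patch+json

[
  { "op": "replace", "path": "/title", "value": "Hello World" }
]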
When the server receives our patch, it loads its own copy of the document, applies the changes described by the patch document, and saves the result.
Thus: the overall use case is the same: remote authoring. You load a representation of a resource into your HTTP aware document editor, make a few changes, and hit "save", and your editor knows what to do to communicate your edits back to the server.
There's a REST endpoint which serves large (tens of gigabytes) chunks of data to my application.
The application processes the data at its own pace, and as incoming data volumes grow, I'm starting to hit the REST endpoint timeout.
Meaning, processing speed is less than network throughput.
Unfortunately, there's no way to raise processing speed enough, as there's no "enough" - incoming data volumes may grow indefinitely.
I'm thinking of a way to store incoming data locally before processing, in order to release REST endpoint connection before timeout occurs.
What I've come up with so far is downloading the incoming data to a temporary file and reading (processing) said file simultaneously, using an OutputStream/InputStream pair.
Sort of buffering, using a file.
This brings its own problems:
what if processing speed becomes faster than downloading speed for some time and I get EOF?
the file parser operates with ObjectInputStream, and it behaves weirdly in cases of an empty file/EOF
and so on
Are there conventional ways to do such a thing?
Are there alternative solutions?
Please provide some guidance.
Upd:
I'd like to point out: the HTTP server is out of my control.
Consider it a vendor data provider. They have many consumers and refuse to alter anything for just one.
Looks like we're the only ones to use all of their data, as our client app's processing speed is far greater than their sample client's performance metrics. Still, we cannot match our app's performance with the network throughput.
The server does not support HTTP range requests or pagination.
There's no way to divide the data into chunks to load, as there's no filtering attribute to guarantee that every chunk will be small enough.
In short: we can download all the data within the given time before the timeout occurs, but we cannot process it.
Having an adapter between InputStream and OutputStream, to perform as a blocking queue, would help a ton.
You're using something like new ObjectInputStream(new FileInputStream(...)), and the solution for EOF could be wrapping the FileInputStream first in a WriterAwareStream, which would block when hitting EOF as long as the writer is still writing.
Anyway, in case latency doesn't matter much, I would not bother starting to process before the download has finished. Oftentimes, there isn't much you can do with an incomplete list of objects.
Maybe some memory-mapped-file-based queue like Chronicle-Queue may help you. It's faster than dealing with files directly and may be even simpler to use.
You could also implement a HugeBufferingInputStream that internally uses a queue, reads from its input stream and, in case it has a lot of data, spills it out to disk. This may be a nice abstraction, completely hiding the buffering.
There's also FileBackedOutputStream in Guava, automatically switching from using memory to using a file when getting big, but I'm afraid it's optimized for small sizes (with tens of gigabytes expected, there's no point in trying to use memory).
Are there alternative solutions?
If your consumer (the http client) is having trouble keeping up with the stream of data, you might want to look at a design where the client manages its own work in progress, pulling data from the server on demand.
RFC 7233 describes Range Requests:
devices with limited local storage might benefit from being able to request only a subset of a larger representation, such as a single page of a very large document, or the dimensions of an embedded image
HTTP Range requests on the MDN Web Docs site might be a more approachable introduction.
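As a sketch, the exchange might look like this (the path and sizes are illustrative):

GET /data HTTP/1.1
Range: bytes=0-1048575

HTTP/1.1 206 Partial Content
Content-Range: bytes 0-1048575/21474836480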
This is the sort of thing that queueing servers are made for. RabbitMQ, Kafka, Kinesis, any of those. Perhaps KStream would work. With everything you get from the HTTP server (given your constraint that it cannot be broken up into units of work), you could partition it into chunks of bytes of some reasonable size, maybe 1024kB. Your application would push/publish those records/messages to the topic/queue. They would all share some common series ID so you know which chunks match up, and each would need to carry an ordinal so they can be put back together in the right order; with a single Kafka partition you could probably rely upon offsets. You might publish a final record for that series with a "done" flag that would act as an EOF for whatever is consuming it. Of course, you'd send an HTTP response as soon as all the data is queued, though it may not necessarily be processed yet.
Not sure if this would help in your case, because you haven't mentioned what structure and format the data comes to you in; however, I'll assume a beautifully normalised, deeply nested hierarchical XML (i.e., pretty much the worst case for streaming, right?).
I propose a partial solution that could allow you to sidestep the limitation of not being able to control how your client interacts with the HTTP data server:
deploy your own webserver, in whatever contemporary tech you please (which you do control) - your local server will sit in front of your locally cached copy of the data
periodically download the output of the webservice using a built-in HTTP querying library, a command-line util such as aria2c, curl, wget, et al., an ETL (or whatever you please), directly into a local device-backed .xml file - this happens as often as it needs to
point your REST client to your own-hosted 127.0.0.1/modern_gigabyte_large/get... 'smart' server, instead of the old api.vendor.com/last_tested_on_megabytes/get... server
Some thoughts:
you might need to refactor your data model to indicate that the XML webservice data that you and your clients are consuming was dated at the last successful run^ (i.e., update this date when the next ingest process completes)
it would be theoretically possible for you to transform the underlying XML on the way through to better yield records in a streaming fashion to your webservice client (if you're not already doing this), but this would take effort - I could discuss this more if a sample of the data structure was provided
all of this work can run in parallel to your existing application, which continues on your last version of the successfully processed 'old data' until the next version of 'new data' is available
^
In trade, you will now need to manage a 'sliding window' of data files, where each 'result' is a specific instance of your app downloading the webservice data and storing it on disc, then successfully ingesting it into your model:
the last (two?) good result(s), compressed (in my experience, gigabytes of XML pack down a helluva lot)
the next pending/provisional result while you're streaming to disc/doing an integrity check/ingesting data (this becomes the current 'good' result, and the last 'good' result becomes the 'previous good' result)
if we assume that you're ingesting into a relational db: the current (and maybe previous) tables with the webservice data loaded into your app, and the next pending table
switching these around becomes a metadata operation, but now your database must store the webservice data at least x2 (or x3 - whatever fits within your limitations)
... yes you don't need to do this, but you'll wish you did after something goes wrong :)
Looks like we're the only ones to use all of their data
This implies that there is some way for you to partition or limit the webservice feed - how are the other clients discriminating so as not to receive the full monty?
You can use in-memory caching techniques OR you can use Java 8 streams. Please see the following link for more info:
https://www.conductor.com/nightlight/using-java-8-streams-to-process-large-amounts-of-data/
Camel could maybe help you regulate the network load between the REST producer and the consumer?
You might, for instance, introduce a Camel endpoint acting as a proxy in front of the real REST endpoint, and apply some throttling policy before forwarding to the real endpoint:
from("http://localhost:8081/mywebserviceproxy")
.throttle(...)
.to("http://myserver.com:8080/myrealwebservice);
http://camel.apache.org/throttler.html
http://camel.apache.org/route-throttling-example.html
My 2 cents,
Bernard.
If you have enough memory, maybe you can use an in-memory data store like Redis.
When you get data from your REST endpoint, you can save it into a Redis list (or any other data structure which is appropriate for you).
Your consumer will consume data from the list.
I have written a Java program which reads numbers from different files. The numbers are added up while being read from the files, and the sum is displayed in a browser. The browser keeps on displaying the new sum created at every step.
I know how to display static values in a browser; I can use JavaScript. But I don't know what mechanism to use to continuously display a changing value.
Any help is appreciated!
You'll have to request the data to display from the server. You can use a data-binding library like Knockout to automatically update the page as the underlying model changes, or you can just use a library like jQuery to modify the DOM on your own.
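A hedged sketch of the polling approach with jQuery ('/sum' is a hypothetical endpoint your Java program would expose, and #sum an element on the page):

// ask the server for the latest sum once per second and update the page
setInterval(function () {
    $.get('/sum', function (sum) {
        $('#sum').text(sum);
    });
}, 1000);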
Alternatively, you could keep a pipe open to the server using the Comet model: http://en.wikipedia.org/wiki/Comet_%28programming%29. However, it can be expensive to eat up a thread for long periods of time on your web server.
Good luck.
Check out Knockout.js (http://www.knockoutjs.com/); it is a framework for updating the UI automatically when data changes.
If the user refreshes the page continuously using the F5 key, the page loads very slowly and a blank page can be seen for a long time.
How do I solve this problem?
I tried using a cache on the server side, but I don't think I am using it in the proper way.
Can somebody help me with an example?
I think you need to use the browser cache, which can be controlled by HTTP headers or meta tags.
http://www.mnot.net/cache_docs/
You need to set the page cache to around 5 seconds or some similar value, so that no new request will be sent to the server within that time interval.
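For example, a response header like the following (5 seconds being the illustrative value from above) lets the browser reuse the page without contacting the server:

Cache-Control: max-age=5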
A few things:
You could try to minimize processing time within your application, maybe by minimizing wasteful operations. Sounds like your application spends a lot of time recreating the output.
You could try to add some sort of caching on the server side and send the user the same page (i.e., no "new" processing) for some time. Depending on the mechanism, this may not be feasible though (HTTPS, security?). At least, AFAIK.
Of course, you could change the way the site works. You could use Ajax to push information to the page the user is on, and so take away the urge to refresh.
And maybe your server just does not have enough power to serve a lot of users at the same time?
It is very difficult to stop the user from pressing F5.
Try making your code more optimized.
Use cache-related meta tags like:
cache-control
EXPIRES
PRAGMA NO-CACHE
Also check this for JSP caching.
response.setIntHeader("Refresh", 5);

Just use this JSP method for auto-refreshing of your webpage:
http://www.tutorialspoint.com/jsp/jsp_auto_refresh.htm
There is this site which in the address bar only shows something like "http://example.com/examplepage.aspx".
Normally, if it had parameters behind it, you could probably just copy that.
But since it doesn't, how do I bookmark this page?
It doesn't necessarily have to be a bookmark, but at least an easy way to access the page.
(FYI, I know basic HTML and Java; maybe it's only possible programmatically.)
Thanks
Generally, dynamic pages (taken in the context of the question) are not bookmark-friendly.
You could probably sniff the incoming request and create a fake form which you can then submit later.
However, there may be situations where there are parameters, such as a session ID, which are valid only for small periods of time.
You should read up on sessions. In really simple terms, a session is assigned to a user accessing a website. Sessions have an expiry period. If you stay idle beyond a set time (determined by the developer), you will not be able to get back in. And every time you log back in, you may be assigned a new session.
You will have noticed that some websites automatically log you in; this is mostly done with the help of cookies. Cookies work in tandem with sessions; they store very basic information, so the next time you come back to a website, it will be able to identify you as a returning user and provide you with access.
Then again, some pages don't use sessions, they might have their own custom way of identifying users.
Bookmarks can be used with dynamic pages if the code allows you to send GET requests and they don't have any other extra parameters which will block you.
To summarize:
Dynamic pages are not very bookmark-friendly.
There may be parameters used to access a webpage which change constantly, and which you cannot really save.
You may be able to get into dynamic pages using bookmarks if they don't use any of the dynamically changing parameters.
Since you know Java, you should probably read up on JSPs/servlets to get an understanding of what happens behind the scenes in dynamic pages.
Hope this answers your questions.