Flex front end using AMF to a Java back end.
I'm trying to read, in real time, a file that is being written to, for example a log.
I'm using Java's RandomAccessFile class to read the "new" lines of the file and send them back to the UI as a byte array along with the byte offset to start reading from next time.
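That tail-read can be sketched as follows (class and method names are hypothetical; the AMF plumbing that sends the result back to Flex is omitted):

```java
import java.io.IOException;
import java.io.RandomAccessFile;

public class LogTailer {
    /**
     * Reads everything appended since the last known offset and returns the new
     * bytes; the caller stores result.nextOffset and passes it in on the next poll.
     */
    public static TailResult readNew(String path, long lastOffset) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(path, "r")) {
            long length = raf.length();
            if (length <= lastOffset) {  // nothing new (or the file was truncated)
                return new TailResult(new byte[0], Math.min(lastOffset, length));
            }
            raf.seek(lastOffset);
            byte[] chunk = new byte[(int) (length - lastOffset)];
            raf.readFully(chunk);
            return new TailResult(chunk, length);
        }
    }

    public static final class TailResult {
        public final byte[] bytes;    // the newly appended data
        public final long nextOffset; // where to start reading next time
        TailResult(byte[] bytes, long nextOffset) {
            this.bytes = bytes;
            this.nextOffset = nextOffset;
        }
    }
}
```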
Using an mx:List to display all lines of the text file.
The problem I'm running into is that Flex, or Flash Player, runs out of memory on even mildly large files, >25 MB.
Is there any preferred method of displaying large amounts of text data in Flex that I'm missing? Or does Flex/Flash just handle this poorly and I'm basically screwed?
Thanks.
If 25 MB is only mildly large, then you probably need to page the data into the component and simply store a couple of pages in memory at a time. I'd normally pick something like TextArea over List, but creating seamless scrolling for a TextArea is difficult when you don't have all of the data, which sounds like your situation. So stick with List for now, figure out how many lines you want per page, and implement your backend as a method like:
// service call interface
public Page getPage( int lineStart, int lines );
// response object from the service call
public class Page {
private var _totalLines:int;
private var _lineStart:int;
private var _lineEnd:int;
[ArrayElementType("String")]
private var _lines:ArrayCollection;
}
Then you can load a page and keep X pages in memory, using totalLines from the file to know how big your model is so the scrollbar renders properly. You'll just need to build a paging dataProvider that loads pages not yet in memory and ditches the pages furthest from what's currently displayed.
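A minimal backend for that getPage signature could look like this (a sketch: the file-path parameter is added for illustration, and the whole file is rescanned per call, which is fine for moderate sizes):

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class PageService {
    /** Returns up to 'lines' lines starting at zero-based 'lineStart', plus the total line count. */
    public static Page getPage(String path, int lineStart, int lines) throws IOException {
        List<String> page = new ArrayList<>();
        int total = 0;
        try (BufferedReader in = new BufferedReader(new FileReader(path))) {
            String line;
            while ((line = in.readLine()) != null) {
                if (total >= lineStart && page.size() < lines) {
                    page.add(line);
                }
                total++;  // keep counting to the end so the client can size its scrollbar
            }
        }
        return new Page(total, lineStart, lineStart + page.size(), page);
    }

    public static final class Page {
        public final int totalLines, lineStart, lineEnd;
        public final List<String> lines;
        Page(int totalLines, int lineStart, int lineEnd, List<String> lines) {
            this.totalLines = totalLines;
            this.lineStart = lineStart;
            this.lineEnd = lineEnd;
            this.lines = lines;
        }
    }
}
```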
Right now I'm working on a component that can show up to 100 MB of text fast. You can just replace your mx:TextArea with LongTextArea:
<longText:LongTextArea text="{...}"/>
Download LongTextArea SWC.
Related
I'm building a website for a friend who's writing a novel, and want to display it, chapter by chapter, in a book-like display, with pages turning.
I have a frontend app in Angular 2 and a backend in Java (as these are the tools I'm most familiar with). A back office in the Angular app allows the user to add the text of a chapter, which is sent to the backend to be stored in the DB. The front of the Angular app then calls the backend to retrieve the chapter and has to display it in the book-like view.
My problem is how to split the text of the chapter into pages in order to display it. I could change the back office to force the user to add the text page by page, or ask the user to put a specific marker in the text to indicate a page break. But I'd like the process to be as transparent as possible for the user.
So I went for a solution that splits the text on the backend: I estimated how many characters fit on a line and how many lines fit on a page, then cut the text accordingly (with some adjustments, as it's HTML text with tags in it).
But it feels like a very rigid approach, as I'm choosing the size of a page regardless of the size of the display.
So I'm wondering if there is a better approach :
- a different splitting algorithm
- a tool front-side to display my text without splitting it
- something else
Has anyone had to face a similar problem?
Thanks
You are performing that action on the server side, which has no knowledge of the page size.
A better approach would be to send the complete chapter from the backend to the front end, and have a front-end function calculate:
- the number of characters per line, based on the page size
- the number of lines per page, based on the page size
- the number of chapter pages, based on the previous two values
This is a much better approach than your fully backend one.
However, it is still not responsive.
Do you need it to be responsive?
If so, you can add a watch on the page width/height and recalculate the values above to regenerate your pages.
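The calculation in that list is plain integer arithmetic; here it is sketched in Java (the backend language in this question), though the same few lines translate directly to the front end. The glyph-metric parameters are illustrative assumptions and only hold approximately for proportional fonts:

```java
public class Paginator {
    /** Rough page count from viewport size and average glyph metrics. */
    public static int pageCount(int textLength, int pageWidthPx, int pageHeightPx,
                                int avgCharWidthPx, int lineHeightPx) {
        int charsPerLine = Math.max(1, pageWidthPx / avgCharWidthPx);
        int linesPerPage = Math.max(1, pageHeightPx / lineHeightPx);
        int charsPerPage = charsPerLine * linesPerPage;
        return (textLength + charsPerPage - 1) / charsPerPage;  // ceiling division
    }
}
```

A resize watch would simply call this again with the new width/height and re-cut the text.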
I have a very large data set that I want to show in a browser (web app), but the browser crashes while loading it.
Basically it's a grid with row and column headers; the rest of the grid is checkboxes that the user ticks based on the row and column header values.
When I load this data, memory use goes up to 2 GB, and the total data set is 5 GB. Can anyone help me show this much data in a browser, in any type of app (Windows or web), or with any technology?
Please try pagination to show the data in parts.
It will reduce the load on the page, at the cost of more database hits.
One class that may help you is GZIPOutputStream (with GZIPInputStream on the receiving side). Since you're sending text data, compressing it should drastically cut down the amount the client has to download.
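To illustrate, here is a round-trip with those two classes; the sample grid text is hypothetical, but repetitive data like this typically compresses by an order of magnitude:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class Gzip {
    /** Compresses text to gzip bytes, e.g. before sending it to the client. */
    public static byte[] compress(String text) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(buf)) {
            gz.write(text.getBytes(StandardCharsets.UTF_8));
        }
        return buf.toByteArray();
    }

    /** Inflates gzip bytes back to the original text. */
    public static String decompress(byte[] data) throws IOException {
        try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(data))) {
            return new String(gz.readAllBytes(), StandardCharsets.UTF_8);
        }
    }
}
```

Note that browsers negotiate this automatically via the Content-Encoding: gzip response header, so enabling compression on the web server achieves the same effect without hand-written code.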
I have written a Java program which reads numbers from different files. The numbers are added as they are read, and the running sum is displayed in a browser, which keeps showing the new sum as it changes at every step.
I know how to display static values in a browser; I can use JavaScript. But I don't know what mechanism to use to continuously display a changing value.
Any help is appreciated!
You'll have to request the data to display from the server. You can use a data-binding library like Knockout to automatically update the page as the underlying model changes, or just use a library like jQuery to modify the DOM yourself.
Alternatively, you could keep a pipe open to the server using the Comet model: http://en.wikipedia.org/wiki/Comet_%28programming%29. However, it can be expensive to eat up a thread for long periods of time on your web server.
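For the simpler polling variant, here is a minimal server-side sketch using the JDK's built-in com.sun.net.httpserver (the class name, endpoint path, and port handling are illustrative assumptions, not a specific framework's API); the page would then fetch /sum on a timer and update the DOM or the Knockout model:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.atomic.AtomicLong;

public class RunningSumServer {
    // Updated by the thread that reads numbers from the files.
    static final AtomicLong sum = new AtomicLong();

    /** Starts an HTTP server whose /sum endpoint returns the current sum as plain text. */
    public static HttpServer start(int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/sum", exchange -> {
            byte[] body = Long.toString(sum.get()).getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "text/plain");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
        return server;
    }
}
```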
Good luck.
Check out Knockout.js (http://www.knockoutjs.com/); it is a framework for updating the UI automatically when data changes.
I'm creating a Web-based label printing system. For every label, there should be a unique s/n. So when a user decided to create 1000 labels (with the same data), all of it should have unique s/n, therefore the pdf will have 1000 pages, which increases the file size.
My problem is when the user decided to create more copies, the file size will get bigger.
Is there any way I can reduce the file size of the PDF using iText? Or is there any way I can generate the PDF and output it in the browser without saving it to either the server's or the client's HDD?
Thanks for the help!
One approach is to compress the file. It should be highly compressible.
(I imagine that you should be able to generate the PDF on the server side without writing it to disc, though you could use a lot of memory / Java heap in the process. I don't think it is possible to deliver a PDF to the browser without the file going to the client PC's hard drive in some form.)
If everything except the s/n is the same across the thousands of labels, you only have to add the shared content once, as a template, and put the s/n text on top of it.
Take a look at PdfTemplate in iText. If I recall correctly, it creates an XObject for the recurring drawing/label/image, and it is exactly the same object every time you use it.
Even with thousands of labels, the only thing that grows your document is the s/n (and the per-page overhead); the graphics and text of the label itself are only added once. That should reduce your file size.
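A rough sketch of that idea against the iText 5 API, also keeping the PDF in memory rather than on disk (sizes, coordinates, and the class name are illustrative; this is not a verified drop-in implementation):

```java
// Sketch against the iText 5 API (com.itextpdf:itextpdf); not verified end-to-end.
import com.itextpdf.text.Document;
import com.itextpdf.text.Element;
import com.itextpdf.text.PageSize;
import com.itextpdf.text.pdf.BaseFont;
import com.itextpdf.text.pdf.PdfContentByte;
import com.itextpdf.text.pdf.PdfTemplate;
import com.itextpdf.text.pdf.PdfWriter;
import java.io.ByteArrayOutputStream;

public class LabelSheet {
    public static byte[] build(int copies) throws Exception {
        Document doc = new Document(PageSize.A6);
        ByteArrayOutputStream out = new ByteArrayOutputStream(); // PDF stays in memory
        PdfWriter writer = PdfWriter.getInstance(doc, out);
        doc.open();

        PdfContentByte canvas = writer.getDirectContent();
        PdfTemplate label = canvas.createTemplate(250, 120);    // shared artwork, stored once
        label.rectangle(5, 5, 240, 110);
        label.stroke();
        // ...draw the rest of the shared label graphics/text into 'label'...

        BaseFont font = BaseFont.createFont();
        for (int sn = 1; sn <= copies; sn++) {
            canvas.addTemplate(label, 30, 250);                 // a reference, not a copy
            canvas.beginText();
            canvas.setFontAndSize(font, 10);
            canvas.showTextAligned(Element.ALIGN_LEFT, "S/N " + sn, 40, 30, 0);
            canvas.endText();
            writer.setPageEmpty(false);  // pages with only direct content count as empty otherwise
            doc.newPage();
        }
        doc.close();
        return out.toByteArray();        // stream this to the browser; no disk involved
    }
}
```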
We need to load and display large rich-text files, about 50 MB, using Swing. The problem is that rendering performance is incredibly poor. We tried both JTextPane and JEditorPane with no luck.
Does someone have experience with this and could give me some advice?
thanks,
I don't have any experience in this but if you really need to load big files I suggest you do some kind of lazy loading with JTextPane/JEditorPane.
Define a limit that JTextPane/JEditorPane can handle well (say 500 KB or 1 MB). You only ever load a chunk of the file of this size into the control.
Start by loading the 1st partition of the file.
Then you need to interact with the scroll container and see if it has reached the end/beginning of the current chunk of the file. If so, show a nice waiting cursor and load the previous/next chunk to memory and into the text control.
The loading chunk is calculated from your current cursor position in the file (offset).
loading chunk = offset - limit/2 to offset + limit/2
The visible text in the JTextPane/JEditorPane must not shift when loading chunks, or else the user will feel like they have jumped to another position in the file.
This is not a trivial solution but if you don't find any other 3rd party control to do this I would go this way.
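The chunk bounds from the formula above, clamped at both ends of the file (the helper name is illustrative):

```java
public class ChunkWindow {
    /** Half the window before the offset, half after, clamped to [0, fileLength]. */
    public static long[] window(long offset, long limit, long fileLength) {
        long start = Math.max(0, offset - limit / 2);
        long end = Math.min(fileLength, start + limit);
        start = Math.max(0, end - limit);  // keep a full-size window near the end of the file
        return new long[] { start, end };
    }
}
```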
You could use Memory Mapped File I/O to create a 'window' into the file and let the operating system handle the reading of the file.
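In Java that means FileChannel.map; a minimal sketch that maps only a window of the file read-only and decodes it (the helper name is illustrative):

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MappedWindow {
    /** Maps [position, position + size) of the file and decodes it as UTF-8. */
    public static String read(Path file, long position, int size) throws IOException {
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            long len = Math.min(size, ch.size() - position);
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, position, len);
            byte[] bytes = new byte[(int) len];
            buf.get(bytes);  // the OS pages the file in on demand
            return new String(bytes, StandardCharsets.UTF_8);
        }
    }
}
```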
Writing an efficient WYSIWYG text editor that can handle large documents is a pretty hard problem; even Word has trouble when you get into large books.
Swing is general purpose, but you have to build up a toolset around it involving managing documents separately and paging them.
You might look at Open Office, you can embed an OO document editor screen right into your app. I believe it's called OOBean...
JTextPane/JEditorPane do not handle even 1 MB of text well (especially text with long lines).
You can try JEdit (StandaloneTextArea); it is much faster than the Swing text components, but I doubt it will handle this much text. I tried with a 45 MB file: it loaded in about 25 seconds and I could scroll down, but I started getting OutOfMemoryError with a 1700 MB heap.
In order to build a really scalable solution there are two obvious options really:
Use pagination. You can do just fine with standard Swing by displaying text in pages.
Build a custom text renderer. It can be as simple as a scrollable pane where only the visible part is drawn using BufferedReader to skip to the desired line in the file and read a limited number of lines to display. I did it before and it is a workable solution. If you need to have 'text selection' capabilities, this is a little more work, of course.
For really large files you could build an index file that contains offsets of each line in characters, so getting the "offset" is a quick "RandomAccess" lookup by line number, and reading the text is a "skip" with this offset. Very large files can be viewed with this technique.
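A sketch of that index (method names are illustrative; the offsets are byte offsets, which is what RandomAccessFile.seek needs, so this reads as written for single-byte encodings):

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.ArrayList;
import java.util.List;

public class LineIndex {
    /** One pass over the file: records the byte offset where each line starts. */
    public static List<Long> build(String path) throws IOException {
        List<Long> offsets = new ArrayList<>();
        try (RandomAccessFile raf = new RandomAccessFile(path, "r")) {
            offsets.add(0L);
            while (raf.readLine() != null) {
                offsets.add(raf.getFilePointer());
            }
            offsets.remove(offsets.size() - 1);  // last entry points past the final line
        }
        return offsets;
    }

    /** Random access to any line: seek to its recorded offset and read one line. */
    public static String line(String path, List<Long> offsets, int lineNumber) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(path, "r")) {
            raf.seek(offsets.get(lineNumber));
            return raf.readLine();
        }
    }
}
```

The index itself can be written to a side file so it is built only once per log file.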