I have been searching for this. I looked at this question, but it looks like volo3 is discontinued, so I downloaded DWG TrueView.
Then, in a JSP file, I have:
<EMBED SRC="randomDwg.dwg" WIDTH=800 HEIGHT=500>
In both Firefox and IE, the page keeps showing "plugin required".
How can I embed a DWG file in a web page just like a PDF? (It doesn't matter if a plugin is required.)
Since just about everyone has a PDF viewer, why not convert the DWG to PDF? PDF is a vector format, so you shouldn't lose any resolution and should still be able to zoom in. http://anydwg.com/dwg2pdf/ might do it for you.
I know it's possible to convert an HTML file to PDF using Google Drive (HTML2PDF using Google Drive API), but I'd like to know whether this works when the HTML has images and CSS files, and if so, how to do it.
You need to convert the HTML to a Docs file and export that as PDF. During the Docs conversion, most non-trivial styles are trimmed; basic coloring, sizing and positioning are all you'll get. The exported PDF is the Docs file's PDF version. Images will be preserved, though.
You can experiment by uploading your HTML files to Google Drive at drive.google.com with the conversion setting turned on and looking at the results.
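If you want to automate that flow, here is a minimal sketch with the Drive v3 Java client, assuming you already have an authorized Drive service (the file names are illustrative):

    import com.google.api.client.http.FileContent;
    import com.google.api.services.drive.Drive;
    import com.google.api.services.drive.model.File;

    import java.io.OutputStream;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class Html2Pdf {
        // 'drive' is an already-authorized Drive service (auth setup omitted).
        static void convert(Drive drive) throws Exception {
            // Upload the HTML and ask Drive to convert it to a Google Doc.
            File meta = new File();
            meta.setName("page");
            meta.setMimeType("application/vnd.google-apps.document"); // triggers conversion
            FileContent html = new FileContent("text/html", new java.io.File("page.html"));
            File doc = drive.files().create(meta, html).setFields("id").execute();

            // Export the converted Doc as PDF.
            try (OutputStream out = Files.newOutputStream(Paths.get("page.pdf"))) {
                drive.files().export(doc.getId(), "application/pdf")
                     .executeMediaAndDownloadTo(out);
            }
        }
    }

In line with the answer below, embedding the images directly in the HTML is the most reliable way to get them through the conversion.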
For images you could try this: Embedding Base64 Images
This worked for me when uploading via the web interface. It should also work with my solution: https://stackoverflow.com/a/21711109/592042
CSS can be written right into the HTML file.
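For example, a small sketch using only the JDK that inlines an image as a base64 data URI (the file name is illustrative):

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.Base64;

    public class InlineImage {
        public static void main(String[] args) throws Exception {
            byte[] bytes = Files.readAllBytes(Paths.get("logo.png"));
            String dataUri = "data:image/png;base64,"
                    + Base64.getEncoder().encodeToString(bytes);
            // Use the data URI as the img src, so the image travels inside the HTML.
            String img = "<img src=\"" + dataUri + "\" alt=\"logo\">";
            System.out.println(img);
        }
    }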
I am developing a Java project with a sub-module in which I need to extract content (text, images, colors) from a webpage and compare it with another webpage. I am planning to use the WinHTTrack software to download the webpage locally, but the problem is that it doesn't seem to save it as HTML. How can I download a webpage with an HTML extension using software such as WinHTTrack (or is just saving the webpage with Ctrl+S enough)? I am also planning to use an HTML parser to extract the three content types (text, images, colors) after downloading the webpage locally. Which parser should I go with?
Well, I use HTTrack and it fetches HTML files as well. You are probably taking the WinHTTrack project file as the only output, but if you check inside the project directory there are HTML files (together with images, etc.). I would suggest using http://htmlparser.sourceforge.net/. It is a Java library, and since yours is a Java project it should be fairly easy to use. You can also save a whole website locally using org.htmlparser.parserapplications.SiteCapturer (and specify whether resources such as images should be captured as well). Hope it helps.
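A minimal sketch of that, assuming the SiteCapturer setters match its javadoc (the source URL and target directory are illustrative):

    import org.htmlparser.parserapplications.SiteCapturer;

    public class SaveSite {
        public static void main(String[] args) throws Exception {
            SiteCapturer capturer = new SiteCapturer();
            capturer.setSource("http://www.example.com"); // site to copy
            capturer.setTarget("C:/temp/example");        // local directory
            capturer.setCaptureResources(true);           // also fetch images, css, js
            capturer.capture();
        }
    }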
Hi all.
For a given page, say http://www.yahoo.com, how can I calculate the total size of the downloaded files, for example image files, JavaScript files, and CSS files?
I know the htmlparser JAR, but it does not have a tag type for CSS files.
As Graeme mentioned, both the Firebug add-on for Firefox (a great tool for web developers, by the way) and the developer tools in Chrome will give you the info you want.
However, if you don't want to download anything, you can use this online service:
http://www.websiteoptimization.com/services/analyze/
It will tell you how many bytes a webpage downloads, including images, style sheets, scripts and everything else.
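If you would rather compute it yourself in Java, here is a rough sketch using only the JDK: issue a HEAD request per resource and sum the Content-Length headers. The URLs here are illustrative; a real version would first parse the page for its img/script/link URLs.

    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.List;

    public class PageWeight {
        // Issue a HEAD request and read the Content-Length header.
        static long sizeOf(String resource) throws Exception {
            HttpURLConnection conn = (HttpURLConnection) new URL(resource).openConnection();
            conn.setRequestMethod("HEAD");
            long length = conn.getContentLengthLong(); // -1 if the server omits it
            conn.disconnect();
            return Math.max(length, 0);
        }

        public static void main(String[] args) throws Exception {
            // In practice these URLs would come from parsing the page's HTML.
            List<String> resources = List.of(
                    "http://www.yahoo.com/",
                    "http://www.yahoo.com/favicon.ico");
            long total = 0;
            for (String r : resources)
                total += sizeOf(r);
            System.out.println("Total download size: " + total + " bytes");
        }
    }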
I am trying to create an Android application that saves webpages for offline browsing. I was able to save the webpage itself, but the problem is its content (images, JavaScript, etc.). Is there a way to save those programmatically? I use Eclipse and test my work on an emulator.
Hm, I am afraid you will have to parse the HTML yourself (with a proper library, that is) and store all the resources (CSS, JS, images, videos, etc.) too.
See how it is done in a Java crawler: open source crawlers
You will need to search for all images, JavaScript files, CSS files, etc., and download them, saving them to the same path relative to the HTML files, assuming the HTML is coded with relative paths (images/image.png) and not absolute paths (http://www.domain.com/image/image.png).
You can pretty easily search the HTML string for <img, <script, <link, etc. and parse from there, as sketched below, or you can use a third-party HTML parser.
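A rough, regex-based sketch of that scan in plain Java (deliberately naive; a real HTML parser is more robust):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class ResourceScanner {
        // Matches src="..." on <img>/<script> and href="..." on <link>.
        private static final Pattern RESOURCE = Pattern.compile(
                "<(?:img|script)[^>]+src=[\"']([^\"']+)[\"']" +
                "|<link[^>]+href=[\"']([^\"']+)[\"']",
                Pattern.CASE_INSENSITIVE);

        static List<String> findResources(String html) {
            List<String> urls = new ArrayList<>();
            Matcher m = RESOURCE.matcher(html);
            while (m.find())
                urls.add(m.group(1) != null ? m.group(1) : m.group(2));
            return urls;
        }
    }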
How can I download a page and its images, given the URL, using Java? A similar question was already available, but the answer there just saves the page and not the images.
You'll have to download the page (HTML), then parse the HTML, find the <img> tags in there, and download the images (the src attribute of the <img> tag is the URL of the image file). Likewise you might want to download accompanying CSS or JavaScript files for the page.
Java makes it easy to copy files from a Web site.
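A compact sketch of the whole flow using only the JDK (the regex and file naming are simplistic, and the URL is illustrative):

    import java.io.InputStream;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.*;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class PageDownloader {
        public static void main(String[] args) throws Exception {
            URL page = new URL("http://www.example.com/");

            // 1. Download the HTML itself.
            String html = new String(page.openStream().readAllBytes(), StandardCharsets.UTF_8);
            Files.writeString(Paths.get("page.html"), html);

            // 2. Find <img> src attributes (a real parser is more robust).
            Matcher m = Pattern.compile("<img[^>]+src=[\"']([^\"']+)[\"']",
                    Pattern.CASE_INSENSITIVE).matcher(html);

            // 3. Resolve each src against the page URL and copy the bytes to disk.
            while (m.find()) {
                URL img = page.toURI().resolve(m.group(1)).toURL();
                String name = img.getPath().substring(img.getPath().lastIndexOf('/') + 1);
                try (InputStream in = img.openStream()) {
                    Files.copy(in, Paths.get(name), StandardCopyOption.REPLACE_EXISTING);
                }
            }
        }
    }

The same loop works for CSS and JavaScript files; just scan for <link href and <script src as well.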