Java BufferedImages from .ts File

How can I get a BufferedImage (a frame at a defined position) from a .ts file in Java? I don't want to use any JNI / CLI wrapper if possible. Here is what I tried with JCodec:
System.out.println( JCodecUtil.detectFormat( file ) );
Demuxer demuxer = JCodecUtil.createDemuxer( JCodecUtil.detectFormat( file ), file );
for ( DemuxerTrack demuxerTrack : demuxer.getVideoTracks() ) {
    Packet packet;
    while ( ( packet = demuxerTrack.nextFrame() ) != null ) {
        System.out.println( "frame " + packet.getDuration() );
    }
}
The output of the first snippet is just:
MPEG_TS
[ERROR] . (:0): Format MPEG_TS is not supported
MPEG_TS
[ERROR] . (:0): Format MPEG_TS is not supported
So I then tried the M2TS demuxer instead:
for ( DemuxerTrack demuxerTrack : JCodecUtil.createM2TSDemuxer( file, TrackType.VIDEO ).v1.getTracks() ) {
    Packet packet;
    while ( ( packet = demuxerTrack.nextFrame() ) != null ) {
        System.out.println( "frame : " + ImageIO.read( new ByteArrayInputStream( packet.getData().array() ) ) );
    }
}
This snippet just prints null for each frame. How can I fix this?

Picture tmp = Picture.create(1920, 1088, ColorSpace.YUV420);
VideoDecoder vd = JCodecUtil.createVideoDecoder(JCodecUtil.detectDecoder(data.duplicate()), data.duplicate());
Picture pic = vd.decodeFrame(data, tmp.getData());
BufferedImage buf = AWTUtil.toBufferedImage(pic);
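For completeness, here is how the demuxer and the decoder fit together. The packets coming out of the MPEG-TS demuxer contain encoded video (typically H.264), not image files, which is why ImageIO.read() returned null in the second snippet; each packet has to be run through a VideoDecoder before it can become a BufferedImage. Below is an untested sketch that only combines the calls already shown above, assuming the JCodec 0.2.x package layout; the class name TsFrameGrabber and the fixed 1920x1088 scratch buffer are illustrative.

import java.awt.image.BufferedImage;
import java.io.File;
import java.nio.ByteBuffer;

import org.jcodec.common.DemuxerTrack;
import org.jcodec.common.JCodecUtil;
import org.jcodec.common.TrackType;
import org.jcodec.common.VideoDecoder;
import org.jcodec.common.model.ColorSpace;
import org.jcodec.common.model.Packet;
import org.jcodec.common.model.Picture;
import org.jcodec.scale.AWTUtil;

public class TsFrameGrabber {

    // Decodes every video frame of a .ts file into a BufferedImage.
    public static void decodeFrames( File file ) throws Exception {
        for ( DemuxerTrack track : JCodecUtil.createM2TSDemuxer( file, TrackType.VIDEO ).v1.getTracks() ) {
            VideoDecoder decoder = null;
            Packet packet;
            while ( ( packet = track.nextFrame() ) != null ) {
                ByteBuffer data = packet.getData();
                if ( decoder == null ) {
                    // Detect the codec from the first packet and create a matching decoder.
                    decoder = JCodecUtil.createVideoDecoder( JCodecUtil.detectDecoder( data.duplicate() ), data.duplicate() );
                }
                // Scratch buffer big enough for a 1080p YUV 4:2:0 frame; adjust to the stream's real size.
                Picture tmp = Picture.create( 1920, 1088, ColorSpace.YUV420 );
                Picture decoded = decoder.decodeFrame( data, tmp.getData() );
                if ( decoded != null ) {
                    BufferedImage image = AWTUtil.toBufferedImage( decoded );
                    System.out.println( "frame: " + image.getWidth() + "x" + image.getHeight() );
                }
            }
        }
    }
}

Note that this only decodes frames in stream order; grabbing a frame at an arbitrary position would additionally require seeking back to the previous keyframe first.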

Related

Can not extract text via Apache Tika using Lucee

I would like to extract text from pdf, docx etc. via Lucee 5+ (5.2.9), but unfortunately I get an empty result set. I have used several Apache Tika versions (runnable jar with Java 1.8.0) that might fit my specific Lucee and Java requirements, but the result set always remains empty.
exract.cfc
component {

    public any function init() {
        _setTikaJarPath( GetDirectoryFromPath( GetCurrentTemplatePath( ) ) & "tika-app-1.19.1.jar" );
        return this;
    }

    private struct function doParse( required any fileContent, boolean includeMeta=true, boolean includeText=true ) {
        var result = {};
        var is = "";
        var jarPath = _getTikaJarPath();
        if ( IsBinary( arguments.fileContent ) ) {
            is = CreateObject( "java", "java.io.ByteArrayInputStream" ).init( arguments.fileContent );
        } else {
            // TODO, support plain string input (i.e. html)
            return {};
        }
        try {
            var parser = CreateObject( "java", "org.apache.tika.parser.AutoDetectParser", jarPath );
            var ch = CreateObject( "java", "org.apache.tika.sax.BodyContentHandler", jarPath ).init(-1);
            var md = CreateObject( "java", "org.apache.tika.metadata.Metadata", jarPath ).init();
            parser.parse( is, ch, md );
            if ( arguments.includeMeta ) {
                result.metadata = {};
                for( var key in md.names() ) {
                    var mdval = md.get( key );
                    if ( !isNull( mdval ) ) {
                        result.metadata[ key ] = _removeNonUnicodeChars( mdval );
                    }
                }
            }
            if ( arguments.includeText ) {
                result.text = _removeNonUnicodeChars( ch.toString() );
            }
        } catch( any e ) {
            result = { error = e };
        }
        return result;
    }

    public function read( required string filename ) {
        var result = {};
        if ( !fileExists( filename ) ) {
            result.error = "#filename# does not exist.";
            return result;
        }
        var f = createObject( "java", "java.io.File" ).init( filename );
        var fis = createObject( "java", "java.io.FileInputStream" ).init( f );
        try {
            result = doParse( fis );
        } catch( any e ) {
            result.error = e;
        }
        fis.close();
        return result;
    }

    private string function _removeNonUnicodeChars( required string potentiallyDirtyString ) {
        return ReReplace( arguments.potentiallyDirtyString, "[^\x20-\x7E]", "", "all" );
    }

    // GETTERS AND SETTERS
    private string function _getTikaJarPath() {
        return _tikaJarPath;
    }

    private void function _setTikaJarPath( required string tikaJarPath ) {
        _tikaJarPath = arguments.tikaJarPath;
    }

}
and the code that I use to run it:
<cfset takis = new exract()>
<cfset files = directoryList(expandPath("./sources"))>
<cfloop index="f" array="#files#">
    <cfif not findNoCase(".DS_Store",f)>
        <cfdump var="#takis.read(f)#" label="#f#">
    </cfif>
</cfloop>
I think the problem is a class clash: the Lucee core engine already loads its own version of Tika, meaning the one you point to is ignored. But the loaded version doesn't behave as expected, returning empty strings as you've seen.
I've solved this by using OSGi to load the desired Tika version. This involves editing the manifest of the tika-app jar to include basic OSGi metadata and then loading it via my osgiLoader.
There is a pre-built Tika bundle available but I haven't been able to get it to work with Lucee.
Here's how to convert the latest tika-app jar to OSGi:
open the "tika-app-1.28.2.jar" with 7-zip
open META-INF then select MANIFEST.MF and press F4 to open it in a text editor
add the following to the end of the file:
Bundle-Name: Apache Tika App Bundle
Bundle-SymbolicName: apache-tika-app-bundle
Bundle-Description: Apache Tika App jar converted to an OSGi bundle
Bundle-ManifestVersion: 2
Bundle-Version: 1.28.2
Bundle-ClassPath: .,tika-app-1.28.2.jar
Save, choosing to update the archive when prompted.
You can then call the jar using osgiLoader as follows:
extractor.cfc
component {

    property name="loader" type="object";
    property name="tikaBundle" type="struct";

    public extractor function init( required object loader, required struct tikaBundle ){
        variables.loader = arguments.loader
        variables.tikaBundle = arguments.tikaBundle
        return this
    }

    public string function parseToString( required string filePath ){
        try{
            var fileStream = CreateObject( "java", "java.io.FileInputStream" ).init( JavaCast( "string", arguments.filePath ) )
            var tikaObject = loader.loadClass( "org.apache.tika.Tika", tikaBundle.path, tikaBundle.name, tikaBundle.version )
            var result = tikaObject.parseToString( fileStream )
        }
        finally{
            fileStream.close()
        }
        return result
    }

}
(The following script assumes extractor.cfc, the modified Tika jar, the osgiLoader.cfc and the document to be processed are in the same directory.)
index.cfm
<cfscript>
docPath = ExpandPath( "test.pdf" )
loader = New osgiLoader()
tikaBundle = {
    version: "1.28.2"
    ,name: "apache-tika-app-bundle"
    ,path: ExpandPath( "tika-app-1.28.2.jar" )
}
extractor = New extractor( loader, tikaBundle )
result = extractor.parseToString( docPath )
dump( result )
</cfscript>
Another way to get the right version loaded is to use JavaLoader. For some reason I couldn't get it to work with the latest tika-app jar (1.28.2), but 1.19.1 does seem to work.
Hacking the existing extension
I would advise you to raise an issue with Preside to change their extension to avoid the clash, but as a temporary hack you could try amending it yourself as follows:
First, add your modified Tika bundle and the osgiLoader.cfc to the /preside-ext-tika/services/ directory.
Next, change line 14 of DocumentMetadataService.cfc so the name of the Tika jar path matches your modified bundle.
_setTikaJarPath( GetDirectoryFromPath( GetCurrentTemplatePath( ) ) & "tika-app-1.28.2.jar" );
Then, modify lines 33-35 of the same cfc to replace:
var parser = CreateObject( "java", "org.apache.tika.parser.AutoDetectParser", jarPath );
var ch = CreateObject( "java", "org.apache.tika.sax.BodyContentHandler" , jarPath ).init(-1);
var md = CreateObject( "java", "org.apache.tika.metadata.Metadata" , jarPath ).init();
with the following:
var loader = New osgiLoader();
var tikaBundle = { version: "1.28.2", name: "apache-tika-app-bundle" };
var parser = loader.loadClass( "org.apache.tika.parser.AutoDetectParser", jarPath, tikaBundle.name, tikaBundle.version )
var ch = loader.loadClass( "org.apache.tika.sax.BodyContentHandler" , jarPath, tikaBundle.name, tikaBundle.version ).init(-1)
var md = loader.loadClass( "org.apache.tika.metadata.Metadata" , jarPath, tikaBundle.name, tikaBundle.version ).init()
NB: I don't have Preside so can't test it in context.

Downloading large amount of data and storing on Android & iOS

So I have this API that downloads from our web service, but the web service sends the data as a ZIP file instead of a JSON stream or something else.
The files can get quite large, but they are not saved as a ZIP file on the device; instead they are unzipped and then saved in a Realm database.
This seems like an extremely complicated way to do it, and I would just like to remove the ZIP part and turn it into a JSON streaming service instead.
Is that a valid way to do this or is there something else I should be doing?
For context, the app is basically a form viewer that is intended to have an offline mode.
[WebMethod]
public string AndroidGetFormByID(string sessionID, int formID)
{
    JObject json = new JObject();
    UserDetails user = DBUserHelper.GetUserBySessionID(new Guid(sessionID));
    if (user == null)
    {
        json["Error"] = "Not logged in";
        return json.ToString(Newtonsoft.Json.Formatting.None);
    }
    Client client = Client.GetClient(user.ClientID);
    var formTemplateRecord = SqlInsertUpdate.SelectQuery("SELECT JSON, CreatedDate FROM FormTemplates WHERE ID=@ID AND clientID=@clientID", "FormsConnectionString", new List<SqlParameter> { new SqlParameter("@ID", formID), new SqlParameter("@clientID", client.ID) }).GetFirstRow();
    var formJson = formTemplateRecord["JSON"].ToString();
    if (formJson == null)
    {
        json["Error"] = "No such form";
        return json.ToString(Newtonsoft.Json.Formatting.None);
    }
    json = JObject.Parse(formJson);
    json["formID"] = formID;
    try
    {
        json["created"] = Convert.ToDateTime(formTemplateRecord["CreatedDate"]).ToString("dd/MM/yyyy");
    }
    catch (Exception e)
    {
    }
    MemoryStream convertedFormData = new MemoryStream();
    try
    {
        using (MemoryStream ms = new MemoryStream(json.ToString(Newtonsoft.Json.Formatting.None).ToByteArray()))
        {
            ms.Seek(0, SeekOrigin.Begin);
            using (ZipFile zipedForm = new ZipFile())
            {
                zipedForm.AddEntry(json["title"].ToString() + "_" + json["formID"].ToString(), ms);
                zipedForm.Save(convertedFormData);
            }
        }
    }
    catch (Exception ex)
    {
        return ex.Message.ToString();
    }
    return Convert.ToBase64String(convertedFormData.ToArray());
}
I have also added a bit of Java code for context on how it is being used:
private void getForms( WeakReference<Context> contextWeakReference, List<Integer> ids )
{
    AtomicInteger atomicReference = new AtomicInteger();
    Observable.interval( 1, TimeUnit.SECONDS )
            .map( aLong -> ids.get( aLong.intValue() ) )
            .take( ids.size() )
            .flatMap( integer ->
            {
                atomicReference.set( integer );
                GetFormsListener.setCurrentItem( listOfIds.indexOf( integer ) + 1 );
                FormDBHelper.updateTemplateDownloading( contextWeakReference, atomicReference.get(), -1, FormIOHelper.FORM_STATUS.DOWNLOADING.toString() );
                return ServiceGenerator.createService().androidGetFormByID( ClientUtils.loginDetailsConstructor.sessionID, String.valueOf( integer ) );
            }, 1 )
            .map( base64 ->
            {
                final Context context = contextWeakReference.get();
                if ( context == null )
                    throw new NullPointerException();
                AppUtils.LogToConsole( Log.ASSERT, "Reached Here Before Write Form", AppUtils.getLoggedTime() );
                final File file = FormIOHelper.checkFormFileExists( context.getFilesDir(), atomicReference.get(), "Library", FormIOHelper.FOLDERS.TEMPLATES.toString() );
                FormIOHelper.writeForm( file, base64 );
                AppUtils.LogToConsole( Log.ASSERT, "Reached Here After Write Form", AppUtils.getLoggedTime() );
                return file;
            } )
            .map( file ->
            {
                JsonObject formObject = null;
                try
                {
                    JsonObject jsonObject = FormIOHelper.getFormFromZipFileAndStrip( file );
                    formObject = FormDBHelper.stripFormJson( contextWeakReference, jsonObject, -1 );
                } catch ( Throwable e )
                {
                    ErrorLog.log( e );
                    FormDBHelper.updateTemplateDownloading( contextWeakReference, atomicReference.get(), -1, FormIOHelper.FORM_STATUS.ERROR.toString() );
                }
                if ( formObject == null )
                    return new JsonArray();
                JsonArray jsonElements;
                if ( formObject.has( "embeddedFiles" ) && formObject.get( "embeddedFiles" ).isJsonArray() )
                    jsonElements = formObject.get( "embeddedFiles" ).getAsJsonArray();
                else
                    jsonElements = new JsonArray();
                if ( jsonElements.size() > 0 )
                {
                    final List<DownloadableFilesConstructor> downloadableFilesConstructorList = FormIOHelper.setEmbeddedFiles( jsonElements );
                    Context context = contextWeakReference.get();
                    if ( context == null )
                        return jsonElements;
                    DownloadableFilesDBHelper.saveData( context, downloadableFilesConstructorList );
                }
                return jsonElements;
            } )
You can try out Google Gson's streaming API. It helps when downloading a large amount of data through a JSON REST API.
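To make that concrete, here is a rough sketch of what client-side streaming with Gson's JsonReader could look like, assuming the service were changed to return a plain JSON array of forms; the FormStreamReader class and the "formID"/"title" field names are illustrative, not the actual schema of this web service.

import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

import com.google.gson.stream.JsonReader;

public class FormStreamReader {

    // Reads a (hypothetical) JSON array of form objects one element at a time,
    // so the full payload never has to be held in memory or zipped.
    public static void readForms( InputStream body ) throws IOException {
        try ( JsonReader reader = new JsonReader( new InputStreamReader( body, StandardCharsets.UTF_8 ) ) ) {
            reader.beginArray();
            while ( reader.hasNext() ) {
                reader.beginObject();
                int formId = -1;
                String title = null;
                while ( reader.hasNext() ) {
                    String name = reader.nextName();
                    if ( "formID".equals( name ) ) {
                        formId = reader.nextInt();
                    } else if ( "title".equals( name ) ) {
                        title = reader.nextString();
                    } else {
                        reader.skipValue(); // ignore fields we don't need
                    }
                }
                reader.endObject();
                // Persist each form as it arrives, e.g. into Realm.
                System.out.println( "form " + formId + ": " + title );
            }
            reader.endArray();
        }
    }
}

Because each element is read and persisted as it arrives, the whole payload never needs to be buffered, which is what the current ZIP-and-Base64 round trip is effectively working around.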

How do I re-encode dynamically compiled bytes to text?

Consider the following (sourced primarily from here):
JavaCompiler compiler = ToolProvider.getSystemJavaCompiler( );
JavaFileManager manager = new MemoryFileManager( compiler.getStandardFileManager( null, null, null ) );
compiler.getTask( null, manager, null, null, null, sourceScripts ).call( ); //sourceScripts is of type List<ClassFile>
And the following file manager:
public class MemoryFileManager extends ForwardingJavaFileManager<JavaFileManager> {

    private HashMap<String, ClassFile> classes = new HashMap<>();

    public MemoryFileManager( StandardJavaFileManager standardManager ) {
        super( standardManager );
    }

    @Override
    public ClassLoader getClassLoader( Location location ) {
        return new SecureClassLoader() {
            @Override
            protected Class<?> findClass( String className ) throws ClassNotFoundException {
                if ( classes.containsKey( className ) ) {
                    byte[] classFile = classes.get( className ).getClassBytes();
                    System.out.println( new String( classFile, java.nio.charset.StandardCharsets.UTF_8 ) );
                    return super.defineClass( className, classFile, 0, classFile.length );
                } else throw new ClassNotFoundException();
            }
        };
    }

    @Override
    public ClassFile getJavaFileForOutput( Location location, String className, Kind kind, FileObject sibling ) {
        if ( classes.containsKey( className ) ) return classes.get( className );
        else {
            ClassFile classObject = new ClassFile( className, kind );
            classes.put( className, classObject );
            return classObject;
        }
    }
}
public class ClassFile extends SimpleJavaFileObject {

    private byte[] source;
    protected final ByteArrayOutputStream compiled = new ByteArrayOutputStream();

    public ClassFile( String className, byte[] contentBytes ) {
        super( URI.create( "string:///" + className.replace( '.', '/' ) + Kind.SOURCE.extension ), Kind.SOURCE );
        source = contentBytes;
    }

    public ClassFile( String className, CharSequence contentCharSequence ) throws UnsupportedEncodingException {
        super( URI.create( "string:///" + className.replace( '.', '/' ) + Kind.SOURCE.extension ), Kind.SOURCE );
        source = ( (String) contentCharSequence ).getBytes( "UTF-8" );
    }

    public ClassFile( String className, Kind kind ) {
        super( URI.create( "string:///" + className.replace( '.', '/' ) + kind.extension ), kind );
    }

    public byte[] getClassBytes() {
        return compiled.toByteArray();
    }

    public byte[] getSourceBytes() {
        return source;
    }

    @Override
    public CharSequence getCharContent( boolean ignoreEncodingErrors ) throws UnsupportedEncodingException {
        return new String( source, "UTF-8" );
    }

    @Override
    public OutputStream openOutputStream() {
        return compiled;
    }
}
Stepping through the code, on compiler.getTask( ... ).call() the first thing that happens is that getJavaFileForOutput() is called, and then the getClassLoader() method is called to load the class, which results in the compiled bytes being written to the console.
Why does that println in the getClassLoader() method yield an amalgamation of my working compiled bytecode (primarily strings; it appears the actual bytecode instruction keywords are not there) and random gibberish? This led me to believe that I was using too short a UTF, so I tried UTF-16, and it looked more or less the same. How do I encode the bytes back into text? I am aware that using the SimpleJavaFileManager would be straightforward enough, but I need to be able to use this example of caching (without the possible memory leaks, of course) for performance purposes.
Edit:
And yes, the compiled code does classload and run perfectly.
Why does that println in the getClassLoader() method yield an amalgamation of my working compiled bytecode (primarily strings; it appears the actual bytecode instruction keywords are not there) and random gibberish?
Without seeing the so-called "random gibberish", I would surmise that what you are seeing is the well-formed binary content of a class file that has been "decoded" as a String in some character set.
That ain't going to work. It is a binary format, and you can't expect to turn it into text like that and have it display as something readable.
(And for what it is worth, a ".class" file would not contain keywords for the JVM opcodes, any more than a ".exe" file would contain keywords for machine instructions. It is binary!)
If you want to see the compiled code in text form, then save the bytes in that byte array to a file, and use the javap utility to look at it. (I'll leave you to look up the command line syntax for the javap command ... )
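For example, a small hypothetical helper like the one below could dump the bytes returned by getClassBytes() to a .class file that javap can read; the ClassDumper name and output layout are illustrative.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ClassDumper {

    // Writes compiled class bytes to a .class file so they can be inspected with javap,
    // e.g.:  javap -c -p com/example/Generated.class
    public static Path dump( String className, byte[] classBytes ) throws IOException {
        Path out = Paths.get( className.replace( '.', '/' ) + ".class" );
        if ( out.getParent() != null ) {
            Files.createDirectories( out.getParent() );
        }
        return Files.write( out, classBytes );
    }
}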

Display PDF in webpage

To follow up on this post, I changed the code to this:
@RequestMapping( value = "/{prePath:^tutor$|^admin$}/module/{file_id}" )
public void getModule( @PathVariable( "file_id" ) int fileId, Model model, HttpServletResponse response, HttpServletRequest request )
{
    model.addAttribute( "id", fileId );
    File test = new File( "C:\\resource\\pdf\\test.pdf" );
    response.setHeader( "Content-Type", "application/pdf" );
    response.setHeader( "Content-Length", String.valueOf( test.length() ) );
    response.setHeader( "Content-Disposition", "inline; filename=\"test.pdf\"" );
    System.out.println( test.toPath() );
    try
    {
        Files.copy( test.toPath(), response.getOutputStream() );
    }
    catch( IOException e )
    {
        e.printStackTrace();
    }
}
And I am finally able to display the PDF in the webpage. The URL is accessed by:
<a href="../admin/module/${ file_id }.do?test" >Spring Tutorial</a>
But the PDF file is displayed on the whole page. My PDF is local, and I want to display it in just a portion of the webpage, maybe in a <div> or whatever suits the approach best. Any thoughts on how I can do this?
Just put it in an iframe; if the user has a PDF plugin (e.g. Adobe Reader) in their browser, they will be able to see it there.
<?php
// Serve the PDF inline so the browser (or an iframe) can render it.
header("Content-Type: application/pdf");
$file = "file.pdf";
$read = file_get_contents($file);
echo $read;
?>

Issue with org.apache.commons.net.ftp.FTPClient listFiles()

The listFiles() method of org.apache.commons.net.ftp.FTPClient works fine with Filezilla server on 127.0.0.1 but returns null on the root directory of public FTP servers such as belnet.be.
There is an identical question on the link below but enterRemotePassiveMode() doesn't seem to help.
Apache Commons FTPClient.listFiles
Could it be an issue with list parsing? If so, how can I go about solving this?
Edit: Here's a directory cache dump:
FileZilla Directory Cache Dump
Dumping 1 cached directories
Entry 1:
Path: /
Server: anonymous#ftp.belnet.be:21, type: 4096
Directory contains 7 items:
lrw-r--r-- ftp ftp D 28 2009-06-17 debian
lrw-r--r-- ftp ftp D 31 2009-06-17 debian-cd
-rw-r--r-- ftp ftp 0 2010-03-04 13:30 keepalive.txt
drwxr-xr-x ftp ftp D 4096 2010-02-18 14:22 mirror
lrw-r--r-- ftp ftp D 6 2009-06-17 mirrors
drwxr-xr-x ftp ftp D 4096 2009-06-23 packages
lrw-r--r-- ftp ftp D 1 2009-06-17 pub
Here's my code using a wrapper I've made (testing inside the wrapper produces the same results):
public static void main(String[] args) {
    FTPUtils ftpUtils = new FTPUtils();
    String ftpURL = "ftp.belnet.be";
    Connection connection = ftpUtils.getFTPClientManager().getConnection( ftpURL );
    if( connection == null ){
        System.out.println( "Could not connect" );
        return;
    }
    FTPClientManager manager = connection.getFptClientManager();
    FTPClient client = manager.getClient();
    try {
        client.enterRemotePassiveMode();
    } catch (IOException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
    if( connection != null ){
        System.out.println( "Connected to FTP" );
        connection.login("Anonymous", "Anonymous");
        if( connection.isLoggedIn() ){
            System.out.println( "Login successful" );
            LoggedInManager loggedin = connection.getLoggedInManager();
            System.out.println( loggedin );
            String[] fileList = loggedin.getFileList();
            System.out.println( loggedin.getWorkingDirectory() );
            if( fileList == null || fileList.length == 0 )
                System.out.println( "No files found" );
            else{
                for (String name : fileList ) {
                    System.out.println( name );
                }
            }
            connection.disconnect();
            if( connection.isDisconnected() )
                System.out.println( "Disconnection successful" );
            else
                System.out.println( "Error disconnecting" );
        }else{
            System.out.println( "Unable to login" );
        }
    } else {
        System.out.println( "Could not connect" );
    }
}
Produces this output:
Connected to FTP
Login succesful
utils.ftp.FTPClientManager$Connection$LoggedInManager#156ee8e
null
No files found
Disconnection successful
Inside the wrapper (attempted using both listNames() and listFiles() ):
public String[] getFileList() {
    String[] fileList = null;
    FTPFile[] ftpFiles = null;
    try {
        ftpFiles = client.listFiles();
        //fileList = client.listNames();
        //System.out.println( client.listNames() );
    } catch (IOException e) {
        return null;
    }
    fileList = new String[ ftpFiles.length ];
    for( int i = 0; i < ftpFiles.length; i++ ){
        fileList[ i ] = ftpFiles[ i ].getName();
    }
    return fileList;
}
As for FTPClient, it is handled as follows:
public class FTPUtils {

    private FTPClientManager clientManager;

    public FTPClientManager getFTPClientManager(){
        clientManager = new FTPClientManager();
        clientManager.setClient( new FTPClient() );
        return clientManager;
    }
Each FTP server has a different file list layout (yes, it's not part of the FTP standard, it's dumb), and so you have to use the correct FTPFileEntryParser, either by specifying it manually, or allowing CommonsFTP to auto-detect it.
Auto-detection usually works fine, but sometimes it doesn't, and you have to specify it explicitly, e.g.
FTPClientConfig conf = new FTPClientConfig(FTPClientConfig.SYST_UNIX);
FTPClient client = new FTPClient();
client.configure(conf);
This explicitly sets the expected FTP server type to UNIX. Try the various types, see how it goes. I tried finding out myself, but ftp.belnet.be is refusing my connections :(
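For reference, here is a self-contained sketch along those lines; the UNIX parser, anonymous login, and passive mode are assumptions to try, not guaranteed fixes.

import java.io.IOException;

import org.apache.commons.net.ftp.FTPClient;
import org.apache.commons.net.ftp.FTPClientConfig;
import org.apache.commons.net.ftp.FTPFile;

public class FtpListExample {

    // Lists the root directory of a public FTP server with an explicitly
    // configured entry parser; try the other FTPClientConfig.SYST_* constants
    // if the UNIX one does not match the server's LIST output.
    public static void main( String[] args ) throws IOException {
        FTPClient client = new FTPClient();
        client.configure( new FTPClientConfig( FTPClientConfig.SYST_UNIX ) );
        client.connect( "ftp.belnet.be" );
        try {
            client.login( "anonymous", "anonymous" );
            client.enterLocalPassiveMode(); // passive mode is usually needed behind NAT/firewalls
            for ( FTPFile file : client.listFiles( "/" ) ) {
                System.out.println( file.getName() );
            }
        } finally {
            client.logout();
            client.disconnect();
        }
    }
}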
Have you tried checking that you can list the files using normal FTP client? (For some reason, I cannot even connect to the FTP port of "belnet.be".)
EDIT
According to the javadoc for listFiles(), the parsing is done using the FTPFileEntryParser instance provided by the parser factory. You probably need to figure out which of the parsers matches the FTP server's LIST output and configure the factory accordingly.
There was a parsing issue in earlier versions of Apache Commons Net: the case where the SYST command (which returns the server type) abruptly returned null was not handled and resulted in a parsing exception. Try using the latest Apache Commons Net jar; it may solve your problem.
